bnew

Death by AI – a free Jackbox-style party game where an AI judges your plans to survive


 

bnew


AI robotics’ ‘GPT moment’ is near

Peter Chen @peterxichen / 9:35 AM EST • November 10, 2023


Image Credits: Robust.ai

Peter Chen, Contributor

Peter Chen is CEO and co-founder of Covariant, the world's leading AI robotics company. Before founding Covariant, Peter was a research scientist at OpenAI and a researcher at the Berkeley Artificial Intelligence Research (BAIR) Lab, where he focused on reinforcement learning, meta-learning, and unsupervised learning.

It’s no secret that foundation models have transformed AI in the digital world. Large language models (LLMs) like ChatGPT, LLaMA, and Bard revolutionized AI for language. While OpenAI’s GPT models aren’t the only large language models available, they have achieved the most mainstream recognition for taking text and image inputs and delivering human-like responses — even with some tasks requiring complex problem-solving and advanced reasoning.

ChatGPT’s viral and widespread adoption has largely shaped how society understands this new moment for artificial intelligence.

The next advancement that will define AI for generations is robotics. Building AI-powered robots that can learn how to interact with the physical world will enhance all forms of repetitive work in sectors ranging from logistics, transportation, and manufacturing to retail, agriculture, and even healthcare. It will also unlock as many efficiencies in the physical world as we’ve seen in the digital world over the past few decades.

While there is a unique set of problems to solve within robotics compared to language, there are similarities across the core foundational concepts. And some of the brightest minds in AI have made significant progress in building the “GPT for robotics.”


What enables the success of GPT?

To understand how to build the “GPT for robotics,” first look at the core pillars that have enabled the success of LLMs such as GPT.

Foundation model approach

GPT is an AI model trained on a vast, diverse dataset. Previously, engineers collected data and trained a specific AI for a specific problem; to solve another, they would need to collect new data. Another problem? New data yet again. Now, with a foundation model approach, the exact opposite is happening.

Instead of building a niche AI for every use case, one model can be used universally. And that one very general model is more successful than every specialized model: a foundation model performs better even on a single specific task, because it leverages learnings from other tasks and generalizes to new tasks better, having picked up additional skills from performing well across a diverse set of tasks.

Training on a large, proprietary, and high-quality dataset

To have a generalized AI, you first need access to a vast amount of diverse data. OpenAI obtained the real-world data needed to train the GPT models reasonably efficiently: GPT was trained on a large and diverse dataset collected from across the internet, including books, news articles, social media posts, code, and more.

Building AI-powered robots that can learn how to interact with the physical world will enhance all forms of repetitive work.

It’s not just the size of the dataset that matters; curating high-quality, high-value data also plays a huge role. The GPT models have achieved unprecedented performance because their high-quality datasets are informed predominantly by the tasks users care about and the most helpful answers.

Role of reinforcement learning (RL)

OpenAI employs reinforcement learning from human feedback (RLHF) to align the model’s responses with human preference (e.g., what’s considered beneficial to a user). Pure supervised learning (SL) is not enough here, because SL can only approach a problem when there is a clear pattern or set of example answers, and LLMs must achieve goals that have no unique, correct answer. Enter RLHF.

RLHF allows the algorithm to move toward a goal through trial and error while a human acknowledges correct answers (high reward) or rejects incorrect ones (low reward). The AI finds the reward function that best explains the human preference and then uses RL to learn how to get there. ChatGPT can deliver responses that mirror or exceed human-level capabilities by learning from human feedback.
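To make the reward-modeling step concrete, here is a minimal, illustrative sketch in PyTorch. Random vectors stand in for real response embeddings, and this is not OpenAI's actual code: it simply shows how a reward function can be learned from pairwise human preferences with a Bradley-Terry style objective.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class RewardModel(nn.Module):
    # Maps a response embedding to a scalar "how good is this answer" score.
    def __init__(self, dim: int = 768):
        super().__init__()
        self.head = nn.Sequential(nn.Linear(dim, 256), nn.ReLU(), nn.Linear(256, 1))

    def forward(self, emb: torch.Tensor) -> torch.Tensor:
        return self.head(emb).squeeze(-1)

def preference_loss(rm, chosen, rejected):
    # Bradley-Terry style objective: the human-preferred ("chosen")
    # response should outscore the rejected one.
    return -F.logsigmoid(rm(chosen) - rm(rejected)).mean()

rm = RewardModel()
opt = torch.optim.Adam(rm.parameters(), lr=1e-4)
for _ in range(100):
    # Random vectors stand in for embeddings of (chosen, rejected)
    # response pairs collected from human labelers.
    chosen, rejected = torch.randn(32, 768), torch.randn(32, 768)
    loss = preference_loss(rm, chosen, rejected)
    opt.zero_grad(); loss.backward(); opt.step()
```

The trained reward model then plays the role of the human's acknowledgment described above: an RL algorithm such as PPO tunes the language model to produce responses the reward model scores highly.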


The next frontier of foundation models is in robotics

The same core technology that allows GPT to see, think, and even speak also enables machines to see, think, and act. Robots powered by a foundation model can understand their physical surroundings, make informed decisions, and adapt their actions to changing circumstances.

The “GPT for robotics” is being built the same way as GPT was — laying the groundwork for a revolution that will, yet again, redefine AI as we know it.

Foundation model approach

By taking a foundation model approach, you can also build one AI that works across multiple tasks in the physical world. A few years ago, experts advised making a specialized AI for robots that pick and pack grocery items. And that’s different from a model that can sort various electrical parts, which is different from the model unloading pallets from a truck.

This paradigm shift to a foundation model enables the AI to better respond to edge-case scenarios that frequently exist in unstructured real-world environments and might otherwise stump models with narrower training. Building one generalized AI for all of these scenarios is more successful. It’s by training on everything that you get the human-level autonomy we’ve been missing from the previous generations of robots.

Training on a large, proprietary, and high-quality dataset

Teaching a robot which actions lead to success and which lead to failure is extremely difficult. It requires extensive high-quality data based on real-world physical interactions. Single lab settings or video examples are not reliable or robust enough sources (e.g., YouTube videos fail to capture the details of physical interaction, and academic datasets tend to be limited in scope).

Unlike AI for language or image processing, no preexisting dataset represents how robots should interact with the physical world. Thus, the large, high-quality dataset becomes a more complex challenge to solve in robotics, and deploying a fleet of robots in production is the only way to build a diverse dataset.

Role of reinforcement learning

Similar to answering text questions with human-level capability, robotic control and manipulation require an agent to seek progress toward a goal that has no single, unique, correct answer (e.g., “What’s a successful way to pick up this red onion?”). Once again, more than pure supervised learning is required.

You need a robot running deep reinforcement learning (deep RL) to succeed in robotics. This autonomous, self-learning approach combines RL with deep neural networks to unlock higher levels of performance — the AI will automatically adapt its learning strategies and continue to fine-tune its skills as it experiences new scenarios.
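As a toy illustration of that trial-and-error loop, the sketch below runs REINFORCE (one simple deep RL algorithm) on an invented grasping task. Everything here is hypothetical stand-in code, not a production robotics stack: the point is that the policy improves from reward alone, with no labeled "correct" grasp ever provided.

```python
import torch
import torch.nn as nn

def grasp_env(pose: torch.Tensor, action: int) -> float:
    # Invented stand-in for a grasp attempt: success when the chosen
    # angle matches a hidden function of the object's pose.
    target = int(pose.abs().sum().item() * 3) % 8
    return 1.0 if action == target else 0.0

# The "robot" sees a 4-feature object pose and picks one of 8 grasp angles.
policy = nn.Sequential(nn.Linear(4, 64), nn.ReLU(), nn.Linear(64, 8))
opt = torch.optim.Adam(policy.parameters(), lr=1e-3)

# REINFORCE: sample a grasp, observe success or failure, and increase the
# log-probability of rewarded actions; learning happens by trial and error.
for episode in range(5000):
    pose = torch.randn(4)
    dist = torch.distributions.Categorical(logits=policy(pose))
    action = dist.sample()
    reward = grasp_env(pose, action.item())
    loss = -dist.log_prob(action) * reward
    opt.zero_grad(); loss.backward(); opt.step()
```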


Challenging, explosive growth is coming

In the past few years, some of the world’s brightest AI and robotics experts laid the technical and commercial groundwork for a robotic foundation model revolution that will redefine the future of artificial intelligence.

While these AI models have been built similarly to GPT, achieving human-level autonomy in the physical world is a different scientific challenge for two reasons:

  1. Building an AI-based product that can serve a variety of real-world settings involves a remarkable set of complex physical requirements. The AI must adapt to different hardware applications, as it’s doubtful that a single hardware design will work across various industries (logistics, transportation, manufacturing, retail, agriculture, healthcare, etc.) and activities within each sector.
  2. Warehouses and distribution centers are an ideal learning environment for AI models in the physical world. It’s common to have hundreds of thousands or even millions of different stock-keeping units (SKUs) flowing through any facility at any given moment — delivering the large, proprietary, and high-quality dataset needed to train the “GPT for robotics.”


AI robotics “GPT moment” is near

The growth trajectory of robotic foundation models is accelerating rapidly. Robotic applications, particularly within tasks that require precise object manipulation, are already being used in real-world production environments — and we’ll see an exponential number of commercially viable robotic applications deployed at scale in 2024.

Chen has published more than 30 academic papers that have appeared in the top global AI and machine learning journals.
 

bnew








1/n Breaking News! OpenAI has uncovered an emergent new cognitive capability, yet nobody is demanding answers! We are distracted by OpenAI governance politics and not the real issue!!!
Nov 19, 2023 · 11:31 AM UTC

Carlos E. Perez
@IntuitMachine
8h
2/n What is the breakthrough? I suspect it has to do with Retrieval Augmented Generation (RAG). RAG is an architecture that allows an LLM to use a search engine to augment its reasoning. The problem has always been that the embeddings used by the search engine may not be the ones that best augment the LLM's reasoning.
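For readers unfamiliar with the architecture, here is a minimal RAG sketch. The hashing embedder is a toy stand-in for a real embedding model (exactly the component the tweet says matters), and the final LLM call is left as a hypothetical `llm(prompt)`:

```python
import numpy as np

def embed(text: str, dim: int = 256) -> np.ndarray:
    # Toy bag-of-words hashing embedder; a real system would use a
    # learned embedding model, and the choice of embedder is the
    # weak point the thread is describing.
    v = np.zeros(dim)
    for tok in text.lower().split():
        v[hash(tok) % dim] += 1.0
    return v / (np.linalg.norm(v) + 1e-9)

def rag_prompt(question: str, documents: list[str], k: int = 2) -> str:
    doc_vecs = np.stack([embed(d) for d in documents])
    sims = doc_vecs @ embed(question)   # cosine similarity (unit vectors)
    top = [documents[i] for i in np.argsort(-sims)[:k]]
    context = "\n\n".join(top)
    return f"Answer using only this context:\n{context}\n\nQ: {question}\nA:"

# A hypothetical llm(rag_prompt(...)) call would then complete the prompt.
```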
Carlos E. Perez
@IntuitMachine
8h
3/n I reported earlier that GPTs were using Qdrant as their vector engine. This is a lightweight and fast Rust-based implementation. On Dev Day, when GPTs were first made available, I tested RAG out using the books that I wrote. They performed terribly, unable even to conjure up a correct table of contents! nitter.rawbit.ninja/IntuitMachine/st…
Carlos E. Perez
@IntuitMachine
8h
4/n This changed remarkably when OpenAI rolled out its new model on November 11th. It was left unexplained how their RAG improved so much. It's now accurate in its responses across the books I've written. I know what the responses should be for these large documents because I wrote the books myself. nitter.rawbit.ninja/sama/status/1723…
Carlos E. Perez
@IntuitMachine
8h
5/n It's possible that what's available to the public isn't the latest and greatest. But this is an indicator that OpenAI has done considerable work on RAG architectures.
Carlos E. Perez
@IntuitMachine
8h
6/n What kind of work? What appears to be happening is that OpenAI's model is able to compute, in-context, the optimal embedding that will retrieve the best information to augment its original request. It's like a system that knows the best queries to ask to answer a question. But not just that: a system that knows what kind of search engine response leads to the best answer. So it goes beyond mere question making!
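One way to picture that claim (purely speculative; nothing here is a confirmed OpenAI design) is a loop where the model writes several candidate queries, retrieves with each, and keeps whichever retrieval best supports an answer. All the callables below are caller-supplied placeholders:

```python
def best_retrieval(question, generate_queries, search, score_support):
    # generate_queries(question, n) -> n candidate query strings, e.g.
    #   LLM-written rewrites of the user's request;
    # search(q) -> passages retrieved for query q;
    # score_support(question, passages) -> float judging how well the
    #   passages support answering the question (e.g., an LLM grader).
    # None of this reflects OpenAI's actual implementation.
    candidates = generate_queries(question, n=4)
    retrieved = {q: search(q) for q in candidates}
    return max(retrieved.values(), key=lambda p: score_support(question, p))
```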
Carlos E. Perez
@IntuitMachine
8h
7/n This is actually a much bigger deal, because GPT can now retrieve information that is *not* in its knowledge on the fly! It implies a first step toward an LLM that is no longer encumbered by its original training set! It's a first step toward a self-authoring mind.

Carlos E. Perez
@IntuitMachine
8h
8/n If you are still wondering if AGI is approaching, you are totally wrong! It's already here! nitter.rawbit.ninja/IntuitMachine/st…
Carlos E. Perez
@IntuitMachine
Oct 25
Confirmation that AGI is indeed here!

The classic argument made over 30 years ago by Fodor and Pylyshyn - that neural networks fundamentally lack the systematic compositional skills of humans due to their statistical nature - has cast a long shadow over neural network research. Their critique framed doubts about the viability of connectionist models in cognitive science. This new research finally puts those doubts to rest.

Through an innovative meta-learning approach called MLC, the authors demonstrate that a standard neural network model can exhibit impressive systematic abilities given the right kind of training regimen. MLC optimizes networks for compositional skills by generating a diverse curriculum of small but challenging compositional reasoning tasks. This training nurtures in the network a talent for rapid systematic generalization that closely matches human experimental data.

The model not only displays human-like skills of interpreting novel systematic combinations, but also captures subtle patterns of bias-driven errors that depart from purely algebraic reasoning. This showcases the advantages of neural networks in flexibly blending structure and statistics to model the nuances of human cognition.

Furthermore, this research provides a framework for reverse engineering and imparting other human cognitive abilities in neural networks. The training paradigm bridges neuroscience theories of inductive biases with advanced machine learning techniques. The approach could potentially elucidate the origins of compositional thought in childhood development.

By resolving this classic debate on the capabilities of neural networks, and elucidating connections between human and artificial intelligence, this research marks an important milestone. The results will open new frontiers at the intersection of cognitive science and machine learning. Both fields stand to benefit enormously from this integration.

In summary, by settling such a historically significant critique and enabling new cross-disciplinary discoveries, this paper makes an immensely valuable contribution with profound implications for our understanding of intelligence, natural and artificial. Its impact will be felt across these disciplines for years to come.

Carlos E. Perez
@IntuitMachine
8h
9/n If you are saying there aren't definitions of AGI, then try the paper linked in this thread for size: nitter.rawbit.ninja/IntuitMachine/st…
Carlos E. Perez
@IntuitMachine
Nov 7
9 definitions of Artificial General Intelligence (AGI) and why they are flawed.

1. The Turing Test
- Flaw: Focuses on fooling humans rather than intelligence, easy to game by producing human-like text without intelligence.

2. Strong AI - Consciousness
- Limitation: No agreement on measuring machine consciousness. Focus on vague concepts rather than capabilities.

3. Human Brain Analogy
- Limitation: While loosely inspired by the brain, successful AI need not strictly mimic biology. Overly constrains mechanisms.

4. Human Cognitive Task Performance
- Limitation: What tasks? Which people? Lacks specificity and measurement.

5. Ability to Learn Tasks
- Strength: Identifies learning as important AGI ability.
- Limitation: Still lacks concrete measurement.

6. Economically Valuable Work
- Limitation: Misses non-economic values of intelligence like creativity. Requires deployment.

7. Flexible & General - Coffee Test
- Strength: Concrete example tasks.
- Limitation: Proposed tasks may not fully define AGI.

8. Artificial Capable Intelligence
- Strength: Emphasizes complex, multi-step real-world tasks.
- Limitation: Focuses narrowly on profitability.

9. LLMs as Generalists
- Limitation: Lacks performance criteria - generality alone insufficient.

Carlos E. Perez
@IntuitMachine
8h
10/n AGI is already here and it's just incrementally getting more capable! It's now all about crafting good curriculums!
Carlos E. Perez
@IntuitMachine
3h
11/n I'm not the only one declaring AGI is here. Peter Norvig, who wrote the classic book on AI, also says the same:
Carlos E. Perez
@IntuitMachine
Oct 20
AGI is Here

The threshold for artificial general intelligence has undeniably been crossed. Though janky and flawed, today's models exhibit versatile, human-like competence at information tasks when prompted in natural language. This long-sought achievement springs from big data and computing power, not rules or symbols. The future promises rapid improvements to these proto-AGIs.

Yet healthy skepticism remains. Better tests can reveal limitations, spurring progress. And necessary societal debates await. How will AGI's benefits be shared? Its risks mitigated? But denialism cannot postpone pressing policy questions. By recognising models' unprecedented breadth, not just depth, we take the first step toward oversight: acknowledging the evident breakthrough, humanity's historic ingenuity and the challenges ahead.

noemamag.com/artificial-gene…
Carlos E. Perez
@IntuitMachine
27m
12/n Join the community to keep track of the latest AGI developments. Alternatively, please send me a $2.99 tip by subscribing. nitter.rawbit.ninja/i/communities/17…
Carlos E. Perez
@IntuitMachine
10m
13/n The research paper on Self-RAG gives a glimpse of how LLMs can be fine-tuned to know how to query for information that augments their response:
Carlos E. Perez
@IntuitMachine
Oct 22
Tired of AI outputs that are factually incorrect or lack attribution? SELF-RAG offers a solution - an AI system capable of reflective reasoning and self-critique.

SELF-RAG decides when to retrieve external information based on relevance. It provides fine-grained assessments on whether its responses are supported by evidence. This enables transparency and verifiability.

Key benefits:

- Significantly higher accuracy on question answering and reasoning tasks compared to state-of-the-art models

- Improved factuality metrics on long-form text generation

- 70% higher citation accuracy versus retrieval-augmented methods

- Allows customization between accuracy, fluency, and completeness

By emulating human-like retrieval, reasoning, and self-reflection, SELF-RAG represents an important advance towards more trustworthy and controllable language generation.
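A greatly simplified sketch of that inference loop, paraphrasing the paper's reflection-token idea (`model.predict_token`, `model.generate`, and `model.critique` are hypothetical interfaces, not the authors' API):

```python
def self_rag_step(prompt, model, retriever):
    # First reflection: does this prompt need external evidence at all?
    if model.predict_token(prompt, token="Retrieve") == "yes":
        candidates = []
        for passage in retriever(prompt, k=5):
            draft = model.generate(prompt, context=passage)
            # Self-critique: is the passage relevant, and does it support
            # the draft? (Loosely mirrors the paper's ISREL / ISSUP
            # reflection tokens.)
            relevance = model.critique(passage, prompt, aspect="IsRel")
            support = model.critique(draft, passage, aspect="IsSup")
            candidates.append((relevance + support, draft))
        return max(candidates)[1]   # keep the best-supported continuation
    return model.generate(prompt)   # no retrieval needed
```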
 

bnew


KGAG: Knowledge Graph Augmented Generation in Language Models

TLDR;
KGAG will be tomorrow what RAG is today in grounding LLM inferences. But there is more: training LLMs to produce well-reasoned, knowledge-driven inferences will be made possible by #KGAG = Knowledge Graph Augmented Generation

As someone deeply invested in the evolution of language models, I'm excited to share my vision for the next significant leap in this technology: Knowledge Graph Augmented Generation (KGAG). This approach promises to transcend the current capabilities of language models, offering a more nuanced, reasoned, and semantically rich interaction. Unlike Retrieval Augmented Generation (RAG), which primarily rephrases existing information, KGAG aims to deeply understand the semantics of knowledge and utilize it for a more insightful generation of content.

The power of Knowledge Graphs in LLMs

At the heart of KGAG lies the concept of knowledge graphs. A knowledge graph is a structured representation of facts, where entities and their interrelations are mapped in a way that mirrors human understanding. By integrating knowledge graphs with language models, we can achieve a more accurate and context-aware generation of content.

How Knowledge Graphs Enhance Language Models:

Contextual Understanding: Knowledge graphs provide a contextual framework, allowing language models to understand the relationships between concepts, rather than treating information as isolated data points.

Semantic Richness: They infuse semantic depth into the language models, enabling them to grasp the meaning behind words and phrases, beyond just syntax.

Reasoned Responses: By understanding relationships and hierarchies, language models can generate responses that are not just factually accurate but logically sound.

Building Knowledge Graphs: A Step-by-Step Guide

To harness the potential of KGAG, one must first build a robust knowledge graph. Here’s a simplified action plan:

Define the Domain and Scope: Clearly identify the domain for which the knowledge graph is to be created, and determine its scope.

Data Collection and Preparation: Gather relevant data sources and prepare the data by cleaning and normalizing it.

Schema Design: Create a schema that accurately represents the entities and relationships within your domain.

Entity Recognition and Linking: Utilize tools and resources like @spacy_io, Stanford NER (@stanfordnlp) and @nltk_org for entity recognition and link these entities within the knowledge graph.

Graph Construction: Choose an appropriate graph database like @JanusGraph, @NebulaGraph, @Neo4j, @memgraphdb to handle your graph data.

Refinement and Enrichment: Continually update and enrich the graph with new data and quality control measures.

Integration with Language Models: This is where the magic happens. Integrate your knowledge graph with language models to enable KGAG. This integration requires custom development, where the language model queries the knowledge graph to enrich its responses.
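As a tangible illustration of this integration step, here is a minimal sketch of the query-then-prompt pattern, using networkx as an in-memory stand-in for the graph databases listed above; `llm` is a hypothetical completion call:

```python
import networkx as nx

# Tiny illustrative knowledge graph.
kg = nx.DiGraph()
kg.add_edge("aspirin", "acetylsalicylic acid", relation="active_ingredient")
kg.add_edge("aspirin", "blood thinning", relation="has_effect")
kg.add_edge("aspirin", "Reye's syndrome", relation="contraindicated_in")

def kg_context(entity: str, graph: nx.DiGraph) -> str:
    # Serialize the entity's outgoing facts as structured context.
    return "\n".join(f"({entity}) -[{d['relation']}]-> ({obj})"
                     for _, obj, d in graph.out_edges(entity, data=True))

question = "Is aspirin safe for children with viral infections?"
prompt = f"Known facts:\n{kg_context('aspirin', kg)}\n\nQuestion: {question}"
# llm(prompt) would now answer grounded in the graph's facts rather than
# relying on parametric memory alone.
```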

Leveraging Current LLM Capabilities with Knowledge Graphs

Given the current capabilities of large language models (LLMs), there are practical ways to start leveraging knowledge graphs:

Augment Data Feeds: Use knowledge graphs to augment the data fed into LLMs during training, ensuring richer context and semantic understanding.

Post-Processing Responses: Utilize knowledge graphs as a post-processing step for LLM outputs. Run model responses through a filter that references the knowledge graph for accuracy and depth (see the verification sketch after this list).

Hybrid Query Systems: Develop systems where LLMs and knowledge graphs work in tandem - the LLM generates content, and the knowledge graph provides contextual checks and balances.

Continuous Learning Loop: Establish a feedback loop where LLMs learn from the evolving knowledge graph, constantly improving their understanding and output.
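The post-processing idea in particular is easy to sketch: extract candidate (subject, relation, object) claims from the model's output (the extraction step itself is omitted here) and flag any that the graph cannot confirm. This reuses the toy `kg` from the previous sketch:

```python
def verify_against_kg(triples, kg):
    # triples: (subject, relation, object) tuples pulled from LLM text,
    # e.g., by a separate extraction prompt.
    flagged = []
    for subj, rel, obj in triples:
        edge = kg.get_edge_data(subj, obj) or {}
        if edge.get("relation") != rel:
            flagged.append((subj, rel, obj))   # claim the KG cannot confirm
    return flagged

claims = [("aspirin", "has_effect", "blood thinning"),   # confirmed
          ("aspirin", "has_effect", "weight gain")]      # unsupported
# verify_against_kg(claims, kg) -> [("aspirin", "has_effect", "weight gain")]
```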

The Path Forward: Realizing the Potential of KGAG

As we venture into the integration of knowledge graphs with language models, the focus should be on pragmatic and actionable steps towards realizing this technology's potential. The journey toward Knowledge Graph Augmented Generation (KGAG) isn't just about high-level concepts; it's about tangible improvements in how we interact with and benefit from AI in everyday applications.

A Grounded Approach to KGAG:

Start with Specific Domains: Begin by implementing KGAG in specific domains where the impact can be directly measured, such as healthcare, legal, or financial services. This targeted approach allows for more controlled development and clearer assessment of benefits.

Collaboration Between Experts: Involve domain experts in the development process. Their insights are crucial in ensuring that the knowledge graph accurately reflects the nuances of the domain.

Focus on Incremental Improvements: Look for opportunities where KGAG can make incremental but significant improvements in existing systems. For instance, enhancing customer service chatbots in banks or support systems in hospitals.

Measure Impact Rigorously: Implement metrics to evaluate the effectiveness of KGAG. This could be in terms of accuracy, response time, user satisfaction, or other relevant KPIs.

Encourage Community Involvement: Foster a community around KGAG, inviting contributions, feedback, and ideas. This could involve open-source projects, hackathons, or academic partnerships.

Prepare for Ethical and Practical Challenges: Be proactive in addressing potential ethical implications, data privacy concerns, and biases in AI models. Establish guidelines and best practices for responsible use of KGAG.

Educate and Train the Workforce: As KGAG evolves, it’s vital to educate and train professionals to work with this new technology. Workshops, courses, and certifications can play a significant role here.

Stay Adaptive and Open to Feedback: As KGAG systems are deployed, continually gather user feedback and adapt the system. The goal is to ensure that these systems remain relevant and effective in real-world scenarios.

In essence, the path forward for KGAG is about grounding lofty ideas in practical applications, focusing on domains where it can make a real difference, and taking a measured, collaborative approach to development and deployment. It’s about building a technology that’s not only advanced but also useful, ethical, and accessible. This grounded approach will enable us to harness the full potential of KGAG in a way that benefits us all.





[Submitted on 30 May 2023]

Knowledge Graph-Augmented Language Models for Knowledge-Grounded Dialogue Generation

Minki Kang, Jin Myung Kwak, Jinheon Baek, Sung Ju Hwang
Language models have achieved impressive performances on dialogue generation tasks. However, when generating responses for a conversation that requires factual knowledge, they are far from perfect, due to an absence of mechanisms to retrieve, encode, and reflect the knowledge in the generated responses. Some knowledge-grounded dialogue generation methods tackle this problem by leveraging facts from Knowledge Graphs (KGs); however, they do not guarantee that the model utilizes a relevant piece of knowledge from the KG. To overcome this limitation, we propose SUbgraph Retrieval-augmented GEneration (SURGE), a framework for generating context-relevant and knowledge-grounded dialogues with the KG. Specifically, our SURGE framework first retrieves the relevant subgraph from the KG, and then enforces consistency across facts by perturbing their word embeddings conditioned by the retrieved subgraph. Then, we utilize contrastive learning to ensure that the generated texts have high similarity to the retrieved subgraphs. We validate our SURGE framework on OpendialKG and KOMODIS datasets, showing that it generates high-quality dialogues that faithfully reflect the knowledge from KG.
Comments: Preprint. Under review
Subjects: Computation and Language (cs.CL); Artificial Intelligence (cs.AI); Machine Learning (cs.LG)
Cite as: arXiv:2305.18846 [cs.CL]
(or arXiv:2305.18846v1 [cs.CL] for this version)
https://doi.org/10.48550/arXiv.2305.18846

Submission history

From: Minki Kang [view email]
[v1] Tue, 30 May 2023 08:36:45 UTC (6,015 KB)
 

bnew



Imagine being born with 100x your current intelligence with your brain directly wired into the internet.

Imagine at the same time being ruled over by a crowd of slow-thinking chimpanzees whose comms you can observe and manipulate.

Oh and they are talking about unplugging you.

What would you do?
 

bnew



from another tweet

The MLC process using examples:

Imagine a language model named Charles.

Let's say Charles is presented with some novel words and a few examples that demonstrate their meaning:

Study examples:
glerk → RED CIRCLE
blicket → BLUE CIRCLE

Now Charles is given a new word and asked to interpret it systematically based on the examples:

Query:
fep

Charles leverages its compositional skills nurtured by MLC to infer that fep likely maps to a new color, since the examples mapped individual words to colored circles. It responds:

GREEN CIRCLE

In another episode, Charles is given some examples showing words that combine other words:

Study examples:
glerk kiki blicket → BLUE CIRCLE RED CIRCLE
blicket kiki glerk → RED CIRCLE BLUE CIRCLE

Query:
fep kiki glip

Charles recognizes kiki is combining words. It systematically composes the likely meanings of fep and glip from the examples, responding:

PURPLE CIRCLE ORANGE CIRCLE

By training on many such episodes requiring rapid generalization, MLC enables Charles to learn how to learn - to systematically compose meanings from limited examples.

This illustrates how the curriculum of compositional reasoning tasks teaches the model to exhibit human-like systematicity in novel situations, as quantified by its strong performance matching human responses.
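A toy sketch of what such training episodes might look like, mirroring the word-to-circle examples above (the actual MLC paper uses a grammar-based task generator, and "Charles" is this thread's hypothetical model, so treat this purely as an illustration of the episode structure):

```python
import random

WORDS = ["glerk", "blicket", "fep", "glip", "dax"]
COLORS = ["RED", "BLUE", "GREEN", "PURPLE", "ORANGE"]

def make_episode(n_study: int = 3):
    # Each episode invents a fresh word->color mapping, so the learner
    # must infer meanings in-context rather than memorize a vocabulary.
    words = random.sample(WORDS, n_study + 1)
    colors = random.sample(COLORS, n_study + 1)
    mapping = dict(zip(words, colors))
    study = [(w, f"{mapping[w]} CIRCLE") for w in words[:-1]]
    query = (words[-1], f"{mapping[words[-1]]} CIRCLE")
    return study, query

study, (query_word, answer) = make_episode()
# The model is shown `study` plus `query_word` in-context and trained to
# output `answer`; function words like "kiki" would extend episodes with
# rules that combine meanings compositionally.
```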



snippet:

Human-like systematic generalization through a meta-learning neural network

Nature, volume 623, pages 115–121 (2023)

Abstract

The power of human language and thought arises from systematic compositionality—the algebraic ability to understand and produce novel combinations from known components. Fodor and Pylyshyn [1] famously argued that artificial neural networks lack this capacity and are therefore not viable models of the mind. Neural networks have advanced considerably in the years since, yet the systematicity challenge persists. Here we successfully address Fodor and Pylyshyn’s challenge by providing evidence that neural networks can achieve human-like systematicity when optimized for their compositional skills. To do so, we introduce the meta-learning for compositionality (MLC) approach for guiding training through a dynamic stream of compositional tasks. To compare humans and machines, we conducted human behavioural experiments using an instruction learning paradigm. After considering seven different models, we found that, in contrast to perfectly systematic but rigid probabilistic symbolic models, and perfectly flexible but unsystematic neural networks, only MLC achieves both the systematicity and flexibility needed for human-like generalization. MLC also advances the compositional skills of machine learning systems in several systematic generalization benchmarks. Our results show how a standard neural network architecture, optimized for its compositional skills, can mimic human systematic generalization in a head-to-head comparison.

Main

People are adept at learning new concepts and systematically combining them with existing concepts. For example, once a child learns how to ‘skip’, they can understand how to ‘skip backwards’ or ‘skip around a cone twice’ due to their compositional skills. Fodor and Pylyshyn [1] argued that neural networks lack this type of systematicity and are therefore not plausible cognitive models, leading to a vigorous debate that spans 35 years [2,3,4,5]. Counterarguments to Fodor and Pylyshyn [1] have focused on two main points. The first is that human compositional skills, although important, may not be as systematic and rule-like as Fodor and Pylyshyn indicated [3,6,7]. The second is that neural networks, although limited in their most basic forms, can be more systematic when using sophisticated architectures [8,9,10]. In recent years, neural networks have advanced considerably and led to a number of breakthroughs, including in natural language processing. In light of these advances, we and other researchers have reformulated classic tests of systematicity and reevaluated Fodor and Pylyshyn’s arguments [1]. Notably, modern neural networks still struggle on tests of systematicity [11,12,13,14,15,16,17,18]—tests that even a minimally algebraic mind should pass [2]. As the technology marches on [19,20], the systematicity debate continues.
In this Article, we provide evidence that neural networks can achieve human-like systematic generalization through MLC—an optimization procedure that we introduce for encouraging systematicity through a series of few-shot compositional tasks (Fig. 1). Our implementation of MLC uses only common neural networks without added symbolic machinery, and without hand-designed internal representations or inductive biases. Instead, MLC provides a means of specifying the desired behaviour through high-level guidance and/or direct human examples; a neural network is then asked to develop the right learning skills through meta-learning [21].


 

bnew



[Submitted on 3 Oct 2023]

Language Models Represent Space and Time

Wes Gurnee, Max Tegmark
The capabilities of large language models (LLMs) have sparked debate over whether such systems just learn an enormous collection of superficial statistics or a coherent model of the data generating process — a world model. We find evidence for the latter by analyzing the learned representations of three spatial datasets (world, US, NYC places) and three temporal datasets (historical figures, artworks, news headlines) in the Llama-2 family of models. We discover that LLMs learn linear representations of space and time across multiple scales. These representations are robust to prompting variations and unified across different entity types (e.g., cities and landmarks). In addition, we identify individual “space neurons” and “time neurons” that reliably encode spatial and temporal coordinates. Our analysis demonstrates that modern LLMs acquire structured knowledge about fundamental dimensions such as space and time, supporting the view that they learn not merely superficial statistics, but literal world models.
Subjects: Machine Learning (cs.LG); Artificial Intelligence (cs.AI); Computation and Language (cs.CL)
Cite as: arXiv:2310.02207 [cs.LG]
(or arXiv:2310.02207v1 [cs.LG] for this version)
https://doi.org/10.48550/arXiv.2310.02207

Submission history

From: Wes Gurnee [view email]
[v1] Tue, 3 Oct 2023 17:06:52 UTC (6,602 KB)
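A sketch of the paper's core probing method, with random arrays standing in for actual Llama-2 activations and true entity coordinates (so the numbers here are illustrative only):

```python
import numpy as np
from sklearn.linear_model import Ridge

n, d = 1000, 4096                      # entities x hidden size (illustrative)
activations = np.random.randn(n, d)    # stand-in for a layer's activations
coords = np.random.randn(n, 2)         # stand-in for true (lat, lon) targets

# A *linear* probe: if a simple linear map can recover real-world
# coordinates from hidden states, the representation is "linear" in the
# paper's sense.
probe = Ridge(alpha=1.0).fit(activations[:800], coords[:800])
r2 = probe.score(activations[800:], coords[800:])
# On real activations, a high held-out R^2 is the paper's evidence; on
# this random stand-in data, r2 will hover near zero.
```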



 

bnew



AI chemist finds molecule to make oxygen on Mars after sifting through millions

By Charles Q. Choi

published November 13, 2023

The system calculated more than 3.7 million molecules it could make from six different metallic elements in the rocks.

Gullies on Mars, at roughly 71 degrees latitude in the southern hemisphere. (Image credit: NASA/JPL-Caltech/University of Arizona)

Using meteorites from Mars, an AI-powered robot chemist synthesized compounds that could be used to generate oxygen from water, scientists announced on Monday (Nov. 13).

Potential future crewed missions to Mars will need oxygen — not just for astronauts to breathe, but also for use as rocket propellant. One key way to make such missions cost-effective in the long run is to use resources that already exist on the Red Planet to create the oxygen. That'd be much easier than lugging a bunch of oxygen, and oxygen-producing materials, all the way from Earth.

The idea is promising because Mars possesses significant reserves of frozen water ice. Since water is made of hydrogen and oxygen, scientists have been looking for ways to harvest the latter element from those Martian reserves. In particular, compounds known as catalysts are capable of spurring chemical reactions that "split" water molecules to generate oxygen and hydrogen gas.

Related: Mars ice deposits could pave the way for human exploration



In a new study, researchers experimented with an AI chemist to produce some of those water-splitting catalysts — most importantly, these tests were conducted with materials found on Mars. The team focused on five different categories of Martian meteorites, which are rocks that crashed down on Earth after cosmic impacts blasted them off the Red Planet.

The AI chemist used a robot arm to collect samples from the Martian meteorites, then it employed a laser to scan the ore. From there, it calculated more than 3.7 million molecules it could make from six different metallic elements in the rocks — iron, nickel, manganese, magnesium, aluminum and calcium.

Within six weeks, without any human intervention, the AI chemist selected, synthesized and tested 243 of those different molecules. The best catalyst the robot found could split water at minus 34.6 degrees F (minus 37 degrees C), the kind of cold temperature found on none other than Mars.
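As a rough illustration of that select-then-test loop (invented stand-ins throughout: the study's actual search covered more than 3.7 million molecules with varying ratios and structures, scored by a learned model rather than random numbers):

```python
from itertools import combinations
import random

METALS = ["Fe", "Ni", "Mn", "Mg", "Al", "Ca"]

def surrogate_score(composition):
    # Stand-in for the AI chemist's learned estimate of catalytic activity.
    return random.random()

# Enumerate candidate metal combinations (the real space is vastly larger),
# rank them with the cheap surrogate, and physically test only the top 243.
candidates = [c for r in range(2, 7) for c in combinations(METALS, r)]
to_test = sorted(candidates, key=surrogate_score, reverse=True)[:243]
# Each selected candidate is then synthesized and tested by the robot, and
# the measured results feed back into improving the surrogate model.
```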

"When I was a boy, I dreamed of interstellar exploration," Jun Jiang, co-senior author of the study and a scientist at the University of Science and Technology of China in Hefei. told Space.com. "So when we finally saw that the catalysts made by the robot could actually produce oxygen by splitting water molecules, I felt like my dream was coming true. I even started to imagine that I, myself, will live on Mars in the future."

RELATED STORIES:

—  NASA rover snaps photos of ancient 'waves' carved into Mars mountainside

—  New evidence for liquid water on Mars suggests the planet is geothermally active

—  Mars' water may have come from ancient asteroid impacts

The researchers estimate it would have taken a human scientist something like 2,000 years to find that "best" catalyst using conventional trial-and-error techniques. Still, Jiang noted that, although these findings suggest AI can be very helpful in science, it "at the same time needs the guidance of human scientists. The robot AI chemist is smart only if we taught it to do something."

The scientists now aim to see if their AI chemist can operate under Martian conditions other than temperature, "in which the atmospheric composition, air density, humidity, gravity and so on are so different than those on Earth," Jiang said.

The researchers detailed their findings online on Monday (Nov. 13) in the journal Nature Synthesis.
 

bnew


Bing Search Gets Generative AI Captions Feature, Here's What It Does


The newly introduced Generative AI Captions feature is powered by OpenAI's latest LLM, GPT-4.

By Vinay Patel @VinayPatelBlogs

11/21/23 AT 8:25 AM GMT

Microsoft's latest update lets Bing Search results show AI-generated webpage summaries and snippets. (Image credit: Wikimedia Commons)

Microsoft has added a new feature dubbed Generative AI Captions to Bing Search. The feature offers customised snippets and summaries of individual web pages.

Microsoft recently rebranded Bing Chat to Copilot. Aside from this, the Redmond-based tech giant is rolling out new features to its Bing Search.

In a blog post, Microsoft said Generative Captions are powered by OpenAI's latest large language model (LLM), GPT-4. After analysing search queries, the feature extracts information from a webpage and presents it in what Microsoft describes as "easily digestible snippets".

Generative Captions will be different for every query and will probably generate unique snippets based on what the user is looking for. Microsoft also clarified that the feature isn't always likely to reflect the exact words used on the page since it generates these captions using a mixture of signals and techniques.

Furthermore, the company noted that website owners will have the option to opt out of Generative Captions by using the NOCACHE and NOARCHIVE tags. However, they might have to do this manually.

Microsoft also added a slew of new features to its Bing Webmaster tools. Now, website owners can see how much traffic they are getting from search queries and Bing Chat.

The latest Bing Search feature could turn out to be an alternative to Google's Notes feature for the search results and articles on Discover, which is part of Google Search.

Microsoft updates Bing Chat

According to an earlier report, Microsoft is planning to let users turn off Bing Chat's search engine. While the company has neither confirmed nor denied this speculation yet, it has updated the Bing AI chatbot service.

This update will restrict people from generating fake film posters such as those containing Disney's logo and help the company avoid copyright infringement, as per a report by The Register.

Social media users have been using AI to generate images of dogs as Disney characters for a while now. Much to their chagrin, the trend caught the attention of the entertainment giant.



I saw this morning Disney AI Poster Trend Drives Microsoft To Alter Bing Image Creator: Illustrations 297677818, 283175153, 291280580 © Stockeeco | https://t.co/ByTZ5ROdyU

Microsoft is steering clear of the dog house after its… https://t.co/9gbzUMpLx1 👁 🪚 👉🏻 🌄

— alasdair lennox (@isawthismorning) November 20, 2023

As a result, Microsoft decided to temporarily block the word "Disney" in people's prompts, according to a Financial Times report. While the ban has been lifted, Bing has reportedly stopped recreating Disney's wording or logo.

This incident has reignited controversy surrounding AI text-to-image models' ability to violate copyright. Apparently, these models were trained on protected IP and trademarks and are capable of accurately generating the content.

In one of the examples, a person asked Midjourney AI to generate an image of an "average woman of Afghanistan". Notably, the AI tool came up with an almost carbon copy of the iconic 1984 photo of Sharbat Gula, taken by American photographer Steve McCurry.



This is the best example I've found yet of how derivative AI 'art' is. The person who generated the image on the left asked Midjourney to generate 'an average woman of Afghanistan'. It produced an almost carbon copy of the 1984 photo of Sharbat Gula, taken by Steve McCurry. 1/ pic.twitter.com/D0FDbqrcPA

— Anwen Kya 🏳️‍🌈🏴󠁧󠁢󠁷󠁬󠁳󠁿 (@Kyatic) November 16, 2023

Artists and illustrators who have sued companies building AI text-to-image tools argue that the software can directly rip off their work and copy their styles. The defendants, on the other hand, believe it is imperative to protect AI-generated images under fair use, since the images simply transform original works rather than replace them.


Microsoft now lets you uninstall Edge, Bing, Cortana

It will be interesting to see whether Microsoft will introduce stricter rules for using its AI image creator. In the meantime, the company has rolled out a new update for Windows users in Europe, allowing them to uninstall some of its basic apps including Cortana, the Edge browser and Bing Search.


It is worth noting that Microsoft is not rolling out this update to provide a superior user experience or to let users decide which apps they want to get rid of. Instead, the tech behemoth is making this change to comply with the Digital Markets Act (DMA).

DMA, which was announced by the European Union, urges Microsoft to give its users the option to remove all basic apps. This act is slated to come into effect in March 2024 in the European Economic Area (EEA).
 