bnew


[Submitted on 19 Sep 2023]


OpenCog Hyperon: A Framework for AGI at the Human Level and Beyond​


Ben Goertzel, Vitaly Bogdanov, Michael Duncan, Deborah Duong, Zarathustra Goertzel, Jan Horlings, Matthew Iklé, Lucius Greg Meredith, Alexey Potapov, André Luiz de Senna, Hedra Seid, Andres Suarez, Adam Vandervorst, Robert Werko

An introduction to the OpenCog Hyperon framework for Artificial General Intelligence is presented. Hyperon is a new, mostly from-the-ground-up rewrite/redesign of the OpenCog AGI framework, based on similar conceptual and cognitive principles to the previous OpenCog version, but incorporating a variety of new ideas at the mathematical, software architecture and AI-algorithm level. This review lightly summarizes: 1) some of the history behind OpenCog and Hyperon, 2) the core structures and processes underlying Hyperon as a software system, 3) the integration of this software system with the SingularityNET ecosystem's decentralized infrastructure, 4) the cognitive model(s) being experimentally pursued within Hyperon on the hopeful path to advanced AGI, 5) the prospects seen for advanced aspects like reflective self-modification and self-improvement of the codebase, 6) the tentative development roadmap and various challenges expected to be faced, 7) the thinking of the Hyperon team regarding how to guide this sort of work in a beneficial direction ... and gives links and references for readers who wish to delve further into any of these aspects.

Subjects: Artificial Intelligence (cs.AI)
Cite as: arXiv:2310.18318 [cs.AI]
(or arXiv:2310.18318v1 [cs.AI] for this version)
[2310.18318] OpenCog Hyperon: A Framework for AGI at the Human Level and Beyond


Submission history​

From: Benjamin Goertzel [view email]

[v1] Tue, 19 Sep 2023 23:25:09 UTC (1,856 KB)

 

bnew


[Submitted on 12 Jan 2019 (v1), last revised 22 Feb 2020 (this version, v2)]


Creative AI Through Evolutionary Computation​


Risto Miikkulainen

The main power of artificial intelligence is not in modeling what we already know, but in creating solutions that are new. Such solutions exist in extremely large, high-dimensional, and complex search spaces. Population-based search techniques, i.e. variants of evolutionary computation, are well suited to finding them. These techniques are also well positioned to take advantage of large-scale parallel computing resources, making creative AI through evolutionary computation the likely "next deep learning".
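To make "population-based search" concrete, here is a minimal, generic sketch (not code from the paper) of a simple (mu + lambda) evolution strategy in Python; the fitness function and all parameter values are illustrative assumptions.

```python
import random

# Minimal (mu + lambda) evolution strategy: keep the mu fittest individuals,
# generate lam mutated offspring each generation.
def evolve(fitness, dim=10, mu=20, lam=80, sigma=0.1, generations=100):
    population = [[random.gauss(0, 1) for _ in range(dim)] for _ in range(mu)]
    for _ in range(generations):
        offspring = [
            [x + random.gauss(0, sigma) for x in random.choice(population)]
            for _ in range(lam)
        ]
        # Survivor selection: the mu fittest of parents plus offspring.
        population = sorted(population + offspring, key=fitness, reverse=True)[:mu]
    return population[0]

# Toy example: maximize a simple concave fitness (peak at the all-ones vector).
best = evolve(lambda v: -sum((x - 1.0) ** 2 for x in v))
```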

Subjects: Neural and Evolutionary Computing (cs.NE); Artificial Intelligence (cs.AI); Machine Learning (cs.LG)
Cite as: arXiv:1901.03775 [cs.NE]
(or arXiv:1901.03775v2 [cs.NE] for this version)
[1901.03775] Creative AI Through Evolutionary Computation

Journal reference: In Banzhaf et al. (editors), Evolution in Action---Past, Present and Future. New York: Springer, 2020


Submission history​

From: Risto Miikkulainen [view email]

[v1] Sat, 12 Jan 2019 00:26:13 UTC (694 KB)

[v2] Sat, 22 Feb 2020 23:15:46 UTC (694 KB)

 

bnew


AI achieves silver-medal standard solving International Mathematical Olympiad problems​

Published 25 July 2024 Authors

AlphaProof and AlphaGeometry teams




Breakthrough models AlphaProof and AlphaGeometry 2 solve advanced reasoning problems in mathematics

Artificial general intelligence (AGI) with advanced mathematical reasoning has the potential to unlock new frontiers in science and technology.

We’ve made great progress building AI systems that help mathematicians discover new insights, novel algorithms and answers to open problems. But current AI systems still struggle with solving general math problems because of limitations in reasoning skills and training data.

Today, we present AlphaProof, a new reinforcement-learning based system for formal math reasoning, and AlphaGeometry 2, an improved version of our geometry-solving system. Together, these systems solved four out of six problems from this year’s International Mathematical Olympiad (IMO), achieving the same level as a silver medalist in the competition for the first time.

Breakthrough AI performance solving complex math problems​


The IMO is the oldest, largest and most prestigious competition for young mathematicians, held annually since 1959.

Each year, elite pre-college mathematicians train, sometimes for thousands of hours, to solve six exceptionally difficult problems in algebra, combinatorics, geometry and number theory. Many of the winners of the Fields Medal, one of the highest honors for mathematicians, have represented their country at the IMO.

More recently, the annual IMO competition has also become widely recognised as a grand challenge in machine learning and an aspirational benchmark for measuring an AI system’s advanced mathematical reasoning capabilities.

This year, we applied our combined AI system to the competition problems, provided by the IMO organizers. Our solutions were scored according to the IMO’s point-awarding rules by prominent mathematicians Prof Sir Timothy Gowers, an IMO gold medalist and Fields Medal winner, and Dr Joseph Myers, a two-time IMO gold medalist and Chair of the IMO 2024 Problem Selection Committee.



The fact that the program can come up with a non-obvious construction like this is very impressive, and well beyond what I thought was state of the art.

Prof Sir Timothy Gowers,

IMO gold medalist and Fields Medal winner

First, the problems were manually translated into formal mathematical language for our systems to understand. In the official competition, students submit answers in two sessions of 4.5 hours each. Our systems solved one problem within minutes and took up to three days to solve the others.

AlphaProof solved two algebra problems and one number theory problem by determining the answer and proving it was correct. This included the hardest problem in the competition, solved by only five contestants at this year’s IMO. AlphaGeometry 2 proved the geometry problem, while the two combinatorics problems remained unsolved.


Each of the six problems can earn seven points, with a total maximum of 42. Our system achieved a final score of 28 points, earning a perfect score on each problem solved — equivalent to the top end of the silver-medal category. This year, the gold-medal threshold starts at 29 points, and was achieved by 58 of 609 contestants at the official competition.

Colored graph showing our AI system’s performance relative to human competitors earning bronze, silver and gold at IMO 2024. Our system earned 28 out of 42 total points, achieving the same level as a silver medalist in the competition and nearly reaching the gold-medal threshold starting at 29 points.



AlphaProof: a formal approach to reasoning​


AlphaProof is a system that trains itself to prove mathematical statements in the formal language Lean. It couples a pre-trained language model with the AlphaZero reinforcement learning algorithm, which previously taught itself how to master the games of chess, shogi and Go.

Formal languages offer the critical advantage that proofs involving mathematical reasoning can be formally verified for correctness. Their use in machine learning has, however, previously been constrained by the very limited amount of human-written data available.

In contrast, natural language based approaches can hallucinate plausible but incorrect intermediate reasoning steps and solutions, despite having access to orders of magnitude more data. We established a bridge between these two complementary spheres by fine-tuning a Gemini model to automatically translate natural language problem statements into formal statements, creating a large library of formal problems of varying difficulty.

When presented with a problem, AlphaProof generates solution candidates and then proves or disproves them by searching over possible proof steps in Lean. Each proof that was found and verified is used to reinforce AlphaProof’s language model, enhancing its ability to solve subsequent, more challenging problems.
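For readers unfamiliar with Lean, here is a toy example of what a formally stated, machine-checkable theorem looks like in Lean 4; it is far simpler than any IMO problem and is not taken from AlphaProof.

```lean
-- A toy Lean 4 theorem: commutativity of natural-number addition.
-- A proof search explores steps like the `exact` below until the kernel
-- verifies the statement.
theorem sum_comm (a b : Nat) : a + b = b + a := by
  exact Nat.add_comm a b
```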

We trained AlphaProof for the IMO by proving or disproving millions of problems, covering a wide range of difficulties and mathematical topic areas over a period of weeks leading up to the competition. The training loop was also applied during the contest, reinforcing proofs of self-generated variations of the contest problems until a full solution could be found.

Process infographic of AlphaProof’s reinforcement learning training loop: Around one million informal math problems are translated into a formal math language by a formalizer network. Then a solver network searches for proofs or disproofs of the problems, progressively training itself via the AlphaZero algorithm to solve more challenging problems



A more competitive AlphaGeometry 2​


AlphaGeometry 2 is a significantly improved version of AlphaGeometry. It’s a neuro-symbolic hybrid system in which the language model was based on Gemini and trained from scratch on an order of magnitude more synthetic data than its predecessor. This helped the model tackle much more challenging geometry problems, including problems about movements of objects and equations of angles, ratio or distances.

AlphaGeometry 2 employs a symbolic engine that is two orders of magnitude faster than its predecessor. When presented with a new problem, a novel knowledge-sharing mechanism is used to enable advanced combinations of different search trees to tackle more complex problems.

Before this year’s competition, AlphaGeometry 2 could solve 83% of all historical IMO geometry problems from the past 25 years, compared to the 53% rate achieved by its predecessor. For IMO 2024, AlphaGeometry 2 solved Problem 4 within 19 seconds after receiving its formalization.

A geometric diagram featuring a triangle ABC inscribed in a larger circle, with various points, lines, and another smaller circle intersecting the triangle. Point A is the apex, with lines connecting it to points L and K on the larger circle, and point E inside the triangle. Points T1 and T2 lie on the lines AB and AC respectively. The smaller circle is centered at point I, the incenter of triangle ABC, and intersects the larger circle at points L and K. Points X, D, and Y lie on lines AB, BC, and AC, respectively, and a blue angle is formed at point P, below the triangle. The diagram is labeled with the letters A, B, C, D, E, I, K, L, O, P, T1, T2, X, and Y.


Illustration of Problem 4, which asks to prove the sum of ∠KIL and ∠XPY equals 180°. AlphaGeometry 2 proposed to construct E, a point on the line BI so that ∠AEB = 90°. Point E helps give purpose to the midpoint L of AB, creating many pairs of similar triangles such as ABE ~ YBI and ALE ~ IPC needed to prove the conclusion.

New frontiers in mathematical reasoning​


As part of our IMO work, we also experimented with a natural language reasoning system, built upon Gemini and our latest research to enable advanced problem-solving skills. This system doesn’t require the problems to be translated into a formal language and could be combined with other AI systems. We also tested this approach on this year’s IMO problems and the results showed great promise.

Our teams are continuing to explore multiple AI approaches for advancing mathematical reasoning and plan to release more technical details on AlphaProof soon.

We’re excited for a future in which mathematicians work with AI tools to explore hypotheses, try bold new approaches to solving long-standing problems and quickly complete time-consuming elements of proofs — and where AI systems like Gemini become more capable at math and broader reasoning.

Acknowledgements​


We thank the International Mathematical Olympiad organization for their support.

AlphaProof development was led by Thomas Hubert, Rishi Mehta and Laurent Sartran; AlphaGeometry 2 and natural language reasoning efforts were led by Thang Luong.
 

bnew



1/5
🚀Graph RAG is hot!
🚀Our "Graph Chain-of-Thought: Augmenting Large Language Models by Reasoning on Graphs" has been accepted to ACL 2024.

⭐️GRBench: A new benchmark for graph RAG research.
⭐️Graph CoT: An iterative framework that lets LLMs explore graph environments. #graph #LLM
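A rough, schematic sketch of what such an iterative graph-exploration loop could look like in Python is below; the llm and graph helpers are hypothetical stand-ins, not the paper's actual interface.

```python
# Schematic Graph-CoT-style loop: the LLM alternates between reasoning over the
# evidence gathered so far and requesting more graph context, until it answers.
def graph_cot(llm, graph, question, max_steps=5):
    evidence = []
    for _ in range(max_steps):
        action = llm(
            f"Question: {question}\nEvidence so far: {evidence}\n"
            "Reply with either 'VISIT <node_id>' or 'ANSWER <text>'."
        )
        if action.startswith("ANSWER"):
            return action[len("ANSWER"):].strip()
        node_id = action.split(maxsplit=1)[1].strip()
        # Interaction/execution step: pull the node's text and neighbors from the graph.
        evidence.append({
            "node": node_id,
            "text": graph.get_text(node_id),
            "neighbors": graph.neighbors(node_id),
        })
    return llm(f"Question: {question}\nEvidence: {evidence}\nGive your best final answer.")
```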

2/5
paper: [2404.07103] Graph Chain-of-Thought: Augmenting Large Language Models by Reasoning on Graphs
data: PeterJinGo/GRBench · Datasets at Hugging Face
code: GitHub - PeterGriffinJin/Graph-CoT: Graph Chain-of-Thought: Augmenting Large Language Models by Reasoning on Graphs (ACL 2024)

3/5
This is a joint work with @ChulinXie, Jiawei Zhang, Kashob Roy, @yuz9yuz, Suhang Wang, @yumeng0818 and Jiawei Han @dmguiuc.

4/5
Finally someone does what I am thinking about 😄🤟

5/5
Check out this one related to your paper:


 

bnew


1/1
Open Source Today (2024-08-09): Tongyi Qianwen Releases Qwen2-Math for Advanced Math Reasoning


#AI #LLM #OpenSource #Qwen2Math #Qwen2 #Qwen #TTS #frameworks








1/6
CONGRATS to @Alibaba_Qwen team on Qwen2-Math-72B outperforming GPT-4o, Claude-3.5-Sonnet, Gemini-1.5-Pro, Llama-3.1-405B on a series of math benchmarks 👏👏👏

2/6
So beautiful -
Qwen2-Math is capable of solving simple math competition problems.

3/6
From its license doc

"If you are commercially using the Materials, and your product or service has more than 100 million monthly active users, You shall request a license from Us. You cannot exercise your rights under this Agreement without our express authorization."

4/6
Some nice details on the training data filtration

5/6
Qwen2-Math models require transformers>=4.40.0; the latest version is recommended.
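As a minimal sketch, loading and querying one of the released checkpoints with Hugging Face transformers (>=4.40.0) might look like this; the model id and generation settings are assumptions to adjust as needed.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/Qwen2-Math-7B-Instruct"  # assumed checkpoint name from the Qwen collection
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

messages = [{"role": "user", "content": "Solve for x: 2x + 3 = 11."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output_ids = model.generate(input_ids, max_new_tokens=256)
# Decode only the newly generated tokens.
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```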

6/6
Yeah, but it's still unreliable at math, so of limited use, and it can't do anything else. For example, it can't even answer basic questions about world knowledge or write a story (writes them in outline form, then gives a conclusion). If not part of a MOA I don't see the point.





1/11
Today we release a new model series for math-specific language models, Qwen2-Math, which is based on Qwen2. The flagship model, Qwen2-Math-72B-Instruct, outperforms proprietary models, including GPT-4o and Claude 3.5, in math related downstream tasks!

Feel free to check our blog for more information:
Introducing Qwen2-Math

🤗 HF Collections: Qwen2-Math - a Qwen Collection

🤖 Github: GitHub - QwenLM/Qwen2-Math: A series of math-specific large language models of our Qwen2 series.

2/11
We evaluate our math-specific models on a series of math benchmarks. The results below demonstrate that our largest math-specific model Qwen2-Math-72B-Instruct outperforms the state-of-the-art models, including GPT-4o, Claude-3.5-Sonnet, Gemini-1.5-Pro, Llama-3.1-405B.

3/11
Congrats Qwen team! 👏
We're going to cover this live in 1 hour on the @thursdai_pod show!

4/11
You guys shipped real 🍓

5/11
@ollama

6/11
What if we could get this to 95% 🤔 What would that unlock?

7/11
@bindureddy can we get this model in abacus?

8/11
man this post needs more attention.

all the hype is going to strawberry hype but this is what we like to see.

good job Qwen Team!

9/11
Wow!!

10/11
@altryne just in time for best day ever, well done @JustinLin610

11/11
Congrats! What's RM@N?





1/5
Qwen2-Math

huggingface: Qwen2-Math - a Qwen Collection

The flagship model, Qwen2-Math-72B-Instruct, outperforms proprietary models, including GPT-4o and Claude 3.5, in math related downstream tasks

2/5
Qwen2 Math 1.5B and 7B version now available in ModelBox Inference. Try now 👇:

Qwen2 Math 7B Instruct Playground | ModelBox

3/5
now everyone can freely try it out here
Qwen2 Math 7B Instruct Playground | ModelBox

4/5
Finally, a math-focused LLM that can give proprietary models a run for their money!

5/5
Brought to you by Nvidia Graphworks


 

bnew


1/1

Automated Theorem Provers Help Improve Large Language Model Reasoning. [2408.03492] Automated Theorem Provers Help Improve Large Language Model Reasoning


[Submitted on 7 Aug 2024]


Automated Theorem Provers Help Improve Large Language Model Reasoning​


Lachlan McGinness, Peter Baumgartner

In this paper we demonstrate how logic programming systems and Automated first-order logic Theorem Provers (ATPs) can improve the accuracy of Large Language Models (LLMs) for logical reasoning tasks where the baseline performance is given by direct LLM solutions. We first evaluate LLM reasoning on steamroller problems using the PRONTOQA benchmark. We show how accuracy can be improved with a neuro-symbolic architecture where the LLM acts solely as a front-end for translating a given problem into a formal logic language and an automated reasoning engine is called for solving it. However, this approach critically hinges on the correctness of the LLM translation. To assess this translation correctness, we secondly define a framework of syntactic and semantic error categories. We implemented the framework and used it to identify errors that LLMs make in the benchmark domain. Based on these findings, we thirdly extended our method with capabilities for automatically correcting syntactic and semantic errors. For semantic error correction we integrate first-order logic ATPs, which is our main and novel contribution. We demonstrate that this approach reduces semantic errors significantly and further increases the accuracy of LLM logical reasoning.
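As a toy illustration of the translation step the paper relies on (this example is not from the paper, and the predicate names are made up), a PRONTOQA-style syllogism and one possible first-order encoding an ATP could check:

```latex
% Premises: "Every cat is a carnivore. Every carnivore eats meat. Tom is a cat."
% Query:    "Does Tom eat meat?"
\forall x\, (\mathrm{Cat}(x) \rightarrow \mathrm{Carnivore}(x)) \qquad
\forall x\, (\mathrm{Carnivore}(x) \rightarrow \mathrm{EatsMeat}(x)) \qquad
\mathrm{Cat}(\mathrm{tom}) \;\vdash\; \mathrm{EatsMeat}(\mathrm{tom})
```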

Subjects: Artificial Intelligence (cs.AI); Computation and Language (cs.CL)
ACM classes: F.4.1; I.2.7; I.2.8
Cite as: arXiv:2408.03492 [cs.AI]
(or arXiv:2408.03492v1 [cs.AI] for this version)
[2408.03492] Automated Theorem Provers Help Improve Large Language Model Reasoning

Journal reference: Proceedings LPAR 2024, EPiC Series in Computing, vol. 100, pp. 51-69
Related DOI: Automated Theorem Provers Help Improve Large Language Model Reasoning


Submission history​

From: Peter Baumgartner [view email]

[v1] Wed, 7 Aug 2024 01:03:56 UTC (123 KB)

 

bnew


1/3
LLM NEWS: New SoTA text-to-image and LLM debut.

Major developments include:

- SoTA text-to-image models debut (FLUX.1)
- Gemini 1.5 Pro leads Chatbot Arena
- Self-Reasoning layer enhances RAG accuracy
- Meta-judges improve LLMs
- Gemma 2 2B outperforms larger models
- Constraining method reduces CoT tokens
- Prompting Engineering -> Prompt Design
- LangChain drops new Agents IDE
...

Here's the detailed summary: [LLM News] FLUX.1, Gemini 1.5 Pro 0801, Meta-Rewarding LLMs, Constrained-CoT, Prompt Poet

2/3
This new SoTA tech is a game-changer! Can't wait to see how it evolves!

3/3
- Prompting Engineering -> Prompt Design

--

Hi elvis, would it be possible to provide a link to the arxiv of this article?


 

bnew




1/11
# RLHF is just barely RL

Reinforcement Learning from Human Feedback (RLHF) is the third (and last) major stage of training an LLM, after pretraining and supervised finetuning (SFT). My rant on RLHF is that it is just barely RL, in a way that I think is not too widely appreciated. RL is powerful. RLHF is not. Let's take a look at the example of AlphaGo. AlphaGo was trained with actual RL. The computer played games of Go and trained on rollouts that maximized the reward function (winning the game), eventually surpassing the best human players at Go. AlphaGo was not trained with RLHF. If it were, it would not have worked nearly as well.

What would it look like to train AlphaGo with RLHF? Well first, you'd give human labelers two board states from Go, and ask them which one they like better:

Then you'd collect say 100,000 comparisons like this, and you'd train a "Reward Model" (RM) neural network to imitate this human "vibe check" of the board state. You'd train it to agree with the human judgement on average. Once we have a Reward Model vibe check, you run RL with respect to it, learning to play the moves that lead to good vibes. Clearly, this would not have led anywhere too interesting in Go. There are two fundamental, separate reasons for this:

1. The vibes could be misleading - this is not the actual reward (winning the game). This is a crappy proxy objective. But much worse,
2. You'd find that your RL optimization goes off the rails as it quickly discovers board states that are adversarial examples to the Reward Model. Remember the RM is a massive neural net with billions of parameters imitating the vibe. There are board states that are "out of distribution" to its training data, which are not actually good states, yet by chance they get a very high reward from the RM.

For the exact same reasons, sometimes I'm a bit surprised RLHF works for LLMs at all. The RM we train for LLMs is just a vibe check in the exact same way. It gives high scores to the kinds of assistant responses that human raters statistically seem to like. It's not the "actual" objective of correctly solving problems, it's a proxy objective of what looks good to humans. Second, you can't even run RLHF for too long because your model quickly learns to respond in ways that game the reward model. These predictions can look really weird, e.g. you'll see that your LLM Assistant starts to respond with something non-sensical like "The the the the the the" to many prompts. Which looks ridiculous to you but then you look at the RM vibe check and see that for some reason the RM thinks these look excellent. Your LLM found an adversarial example. It's out of domain w.r.t. the RM's training data, in an undefined territory. Yes you can mitigate this by repeatedly adding these specific examples into the training set, but you'll find other adversarial examples next time around. For this reason, you can't even run RLHF for too many steps of optimization. You do a few hundred/thousand steps and then you have to call it because your optimization will start to game the RM. This is not RL like AlphaGo was.
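For concreteness, the pairwise "vibe check" objective described above can be sketched roughly as follows; reward_model here is a hypothetical scorer mapping a (prompt, response) pair to a scalar, not any specific implementation.

```python
import torch.nn.functional as F

# Bradley-Terry-style pairwise reward-model loss: push the score of the
# human-preferred response above the score of the rejected one.
def reward_model_loss(reward_model, prompt, chosen, rejected):
    r_chosen = reward_model(prompt, chosen)      # scalar score for preferred response
    r_rejected = reward_model(prompt, rejected)  # scalar score for rejected response
    # Maximize log-probability that the preferred response wins the comparison.
    return -F.logsigmoid(r_chosen - r_rejected).mean()
```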

And yet, RLHF is a net helpful step of building an LLM Assistant. I think there's a few subtle reasons but my favorite one to point to is that through it, the LLM Assistant benefits from the generator-discriminator gap. That is, for many problem types, it is a significantly easier task for a human labeler to select the best of few candidate answers, instead of writing the ideal answer from scratch. A good example is a prompt like "Generate a poem about paperclips" or something like that. An average human labeler will struggle to write a good poem from scratch as an SFT example, but they could select a good looking poem given a few candidates. So RLHF is a kind of way to benefit from this gap of "easiness" of human supervision. There's a few other reasons, e.g. RLHF is also helpful in mitigating hallucinations because if the RM is a strong enough model to catch the LLM making stuff up during training, it can learn to penalize this with a low reward, teaching the model an aversion to risking factual knowledge when it's not sure. But a satisfying treatment of hallucinations and their mitigations is a whole different post so I digress. All to say that RLHF *is* net useful, but it's not RL.

No production-grade *actual* RL on an LLM has so far been convincingly achieved and demonstrated in an open domain, at scale. And intuitively, this is because getting actual rewards (i.e. the equivalent of win the game) is really difficult in the open-ended problem solving tasks. It's all fun and games in a closed, game-like environment like Go where the dynamics are constrained and the reward function is cheap to evaluate and impossible to game. But how do you give an objective reward for summarizing an article? Or answering a slightly ambiguous question about some pip install issue? Or telling a joke? Or re-writing some Java code to Python? Going towards this is not in principle impossible but it's also not trivial and it requires some creative thinking. But whoever convincingly cracks this problem will be able to run actual RL. The kind of RL that led to AlphaGo beating humans in Go. Except this LLM would have a real shot of beating humans in open-domain problem solving.

2/11
your picture is misleading, you show two unrelated board states, in practice you'd learn with rlhf a move ranking function given a board state and two possible moves, which might be pretty reasonable for a naive go player

3/11
Fair, I couldn't find a picture like that in a quick google search. I'd spend some time to make one but I was worried that this would have a risk of being misleading in a different way. In Go you only really have a very small, finite number of moves you can play. In LLMs you can "play" a very, very, very large number of sequences at any turn. I think the analogy slightly and very subtly breaks down in both cases.

4/11
Another difference is that RLHF doesn't do proper exploration: it mostly learns to exploit a subset of the pretraining trajectories.
In contrast, when doing proper RL the discrete action distribution is usually noised by adding an entropy term to the loss function.

5/11
Yeah, ... you could in principle easily add an entropy bonus to your RLHF objective, as is very often done in RL too. In practice this doesn't seem to be done much. The way you can tell is that e.g. when you ask ChatGPT to tell you a joke, it has like 3 favorites. Collapsed.

6/11
This just in: Karpathy isn’t a fan of proxy reward functions

7/11
I think my main motivation was to say that LLMs have nowhere near topped out to what they could become in principle, that they are not trained in the same way as other recent/popular demonstrations of superhuman AI, and point intuitively at the source of the gap.

8/11
It’s game over when we actually figure it out!

9/11
I don't think the comparison between RLHF and RL on go really make sense this way.

You don’t need RLHF to train AI to play go because there is a highly reliable procedural reward function that looks at the board state and decides who won. If you didn’t have this procedural reward function, RLHF _would_ make sense here; but the way you’d want to use it is to show final board configurations to a human and ask them who won (this way you’d leverage the human's generator-discriminator gap). Then you use RL to train your AI system to reach the winning board states. This is analogous to the way we train LLMs with RLHF: typically we show only complete assistant responses to humans for evaluation, not partial responses.

If you were training AlphaGo in the way you describe, I’d call this process supervision (instead of outcome supervision): you’re giving feedback on _how_ your AI is playing go, not just the outcome of the game. Some alignment researchers advocate for process supervision because they hypothesize it’s safer because you won’t get crazy moves that humans wouldn’t endorse (e.g. no move 37), and so your AI system is more likely to stay clear of unsafe states. This isn’t relevant for go because there are no unsafe board states, and so there is no reason not to let your go AI explore wherever. It’s an important open question whether and how much less competitive process supervision is compared to outcome supervision (again, no move 37), and I personally am skeptical for the reasons you outline. But note that process supervision can also perform better when the task is hard for AI because it helps overcome the exploration problem (similar to demonstrations).

10/11
although AlphaZero builds its own reward model that’s been shown to have weird OOD holes

11/11
Do you think you can do more objective RL on code by running it through a compiler and autogenerated tests?


 

bnew


1/3
Creating the perfect prompt for your task can take a lot of time.

One option is to collaborate with the conversational agent to tune, personalize, and improve the performance of your prompts.

IBM researchers propose Conversational Prompt Engineering to help users create personalized prompts by articulating the preferred outputs via interactions.

Two stages:
- initial instruction shaped by the model based on user-provided unlabeled data
- model shares the output and the user provides feedback with refinements on outputs and instruction

This iterative process results in a personalized few-shot prompt that performs better on the desired task.

This approach, while simple, is effective because you are leveraging and maximizing the skillset of both the user and the conversational agent to optimize the prompts for the task. It also feels like it could save a ton of time that's usually required in prompt engineering efforts.
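A rough sketch of that interaction loop in Python is below; chat_model and get_user_feedback are hypothetical stand-ins for the conversational agent and the human in the loop, not the IBM tool's actual API.

```python
# Two-stage conversational prompt engineering loop (schematic sketch).
def conversational_prompt_engineering(chat_model, unlabeled_examples, get_user_feedback, max_rounds=5):
    # Stage 1: derive an initial instruction from questions grounded in the user's unlabeled data.
    instruction = chat_model(
        "Ask data-driven questions about these examples and draft an initial instruction:\n"
        + "\n".join(unlabeled_examples)
    )
    approved = []
    for example in unlabeled_examples[:max_rounds]:
        # Stage 2: show the model's output and fold the user's feedback back into the prompt.
        output = chat_model(f"{instruction}\n\nInput: {example}")
        feedback, accepted = get_user_feedback(example, output)
        if accepted:
            approved.append((example, output))  # user-approved outputs become few-shot examples
        else:
            instruction = chat_model(
                f"Revise the instruction below given this user feedback: {feedback}\n\n{instruction}"
            )
    few_shot = "\n\n".join(f"Input: {x}\nOutput: {y}" for x, y in approved)
    return f"{instruction}\n\n{few_shot}"  # final personalized few-shot prompt
```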

2/3
Paper: [2408.04560] Conversational Prompt Engineering

3/3






1/3
📌 Conversational Prompt Engineering (CPE) uses a chat model to interact with users, helping them articulate output preferences and integrate them into prompts.

With it the zero-shot prompt obtained is comparable to its much longer few-shot counterpart, indicating significant savings in scenarios involving repetitive tasks with large text volumes.

The process has two main stages:

1) Data-driven questions generation using unlabeled user data to shape initial instructions
2) Output sharing and refinement based on user feedback

📌 CPE generates a few-shot prompt where user-approved outputs serve as examples. The zero-shot prompt (instruction only) often performs comparably to the few-shot version, potentially saving costs for large-scale tasks.

📌 CPE implements a three-party chat system:
- User: Interacts via UI
- Model: Handles core capabilities (data analysis, instruction refinement, output enhancement)
- System: Orchestrates user-model interaction, invoking dynamic system instructions throughout the chat

📌 Context management allows filtering context provided to the model at each conversation stage. This enables "side-chats" for specific tasks like mimicking real-world prompt usage.

📌 Chain of Thought (CoT) is used in side-chats for complex guidance, e.g., analyzing user feedback to refine instructions.

📌 The model responds using pre-defined API calls.

📌 CPE functions as an LLM-based agent with human-in-the-loop:
- Planning: Task decomposition into sub-tasks
- Tools: Utilized via API calls (e.g., prompt execution tool)
- Memory: Context management for effective performance within LLM context length limitations

📌 User study results show CPE effectively integrates user preferences into prompts. Zero-shot and few-shot CPE prompts performed similarly, indicating successful preference integration without needing examples.

📌 Analysis reveals substantial instruction refinement throughout chats, with an average character distance of 96.3 between initial and final instructions across 12 chats.

2/3
🗞️ [2408.04560] Conversational Prompt Engineering

3/3






[Submitted on 8 Aug 2024]

Conversational Prompt Engineering​


Liat Ein-Dor, Orith Toledo-Ronen, Artem Spector, Shai Gretz, Lena Dankin, Alon Halfon, Yoav Katz, Noam Slonim

Prompts are how humans communicate with LLMs. Informative prompts are essential for guiding LLMs to produce the desired output. However, prompt engineering is often tedious and time-consuming, requiring significant expertise, limiting its widespread use. We propose Conversational Prompt Engineering (CPE), a user-friendly tool that helps users create personalized prompts for their specific tasks. CPE uses a chat model to briefly interact with users, helping them articulate their output preferences and integrating these into the prompt. The process includes two main stages: first, the model uses user-provided unlabeled data to generate data-driven questions and utilize user responses to shape the initial instruction. Then, the model shares the outputs generated by the instruction and uses user feedback to further refine the instruction and the outputs. The final result is a few-shot prompt, where the outputs approved by the user serve as few-shot examples. A user study on summarization tasks demonstrates the value of CPE in creating personalized, high-performing prompts. The results suggest that the zero-shot prompt obtained is comparable to its - much longer - few-shot counterpart, indicating significant savings in scenarios involving repetitive tasks with large text volumes.


Subjects: Computation and Language (cs.CL)
Cite as: arXiv:2408.04560 [cs.CL]
(or arXiv:2408.04560v1 [cs.CL] for this version)
[2408.04560] Conversational Prompt Engineering

Submission history​

From: Lena Dankin Mrs. [view email]
[v1] Thu, 8 Aug 2024 16:18:39 UTC (9,406 KB)

 

bnew




1/6
LLM Basics - Binary Quantization 🔥

🧵 A thread - 1/n 👇

The concept itself isn't new, but what's reignited interest is the recent announcement from @cohere regarding their support for int8 and binary embeddings in their Cohere embed v3.

📌 First, in essence, embeddings are numerical representations of more complex objects, like text, images, audio, etc. Specifically, the objects are represented as n-dimensional vectors.

After transforming the complex objects, you can determine their similarity by calculating the similarity of the respective embeddings! This is crucial for many use cases: it serves as the backbone for recommendation systems, retrieval, one-shot or few-shot learning, outlier detection, similarity search, paraphrase detection, clustering, classification, and much more.

-------

📌 Binary Quantization for embeddings

Unlike quantization in models where you reduce the precision of weights, quantization for embeddings refers to a post-processing step for the embeddings themselves. In particular, binary quantization refers to the conversion of the float32 values in an embedding to 1-bit values, resulting in a 32x reduction in memory and storage usage.

--------

✨ Binary quantization example

Vector embeddings are usually generated by embedding models, such as Cohere’s embed v3, and a single vector embedding will take the following form.

[0.056, -0.128, -0.029, 0.047, …, 0.135]

To quantize float32 embeddings to binary, we simply threshold normalized embeddings at 0

That is, because these embeddings have very small absolute numbers close to zero, you can turn them into a binary vector:

1: If the value is greater or equal to 0.

0: If the value is smaller than 0.

So that you get something like this.

[1, 0, 0, …, 1]
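In code, the thresholding step is a one-liner; a minimal NumPy sketch (the embedding values here are made up) looks like this:

```python
import numpy as np

# Threshold a float32 embedding at 0 to obtain a binary vector.
embedding = np.array([0.056, -0.128, -0.029, 0.047, 0.135], dtype=np.float32)
binary = (embedding >= 0).astype(np.uint8)
print(binary)  # -> [1 0 0 1 1]
```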

2/6
🧵 2/n

📌 So basically why does binary quantization reduce vector embedding size so much?

It's kind of like turning a colored image into a black and white image.

By converting the floating point numbers, which are stored in 32 bits, into a single bit, you only need 1/32nd of memory space to store a binarized vector. This can lead to increased search speed and reduced storage costs.

And because vector embeddings are usually high-dimensional, you can still get meaningful similarity measures for vector search. 🤯

✨ Now the question is how to calculate the similarity of vectors that have been binarized?

📌 We can use the Hamming Distance to efficiently perform retrieval with these binary embeddings. This is simply the number of positions at which the bits of two binary embeddings differ. The lower the Hamming Distance, the closer the embeddings, and thus the more relevant the document. A huge advantage of the Hamming Distance is that it can be easily calculated with 2 CPU cycles, allowing for blazingly fast performance.
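A minimal NumPy sketch of this retrieval step, assuming the binary embeddings were produced by the thresholding shown earlier, packs the bits into bytes and computes the Hamming distance with XOR plus a bit count:

```python
import numpy as np

def pack(bits: np.ndarray) -> np.ndarray:
    # Pack a 0/1 vector into bytes (32x smaller than float32 storage).
    return np.packbits(bits.astype(np.uint8))

def hamming(a_packed: np.ndarray, b_packed: np.ndarray) -> int:
    # Number of differing bits between two packed binary embeddings.
    return int(np.unpackbits(np.bitwise_xor(a_packed, b_packed)).sum())

# Toy retrieval example with random binary embeddings.
query = pack(np.random.rand(1024) >= 0.5)
docs = [pack(np.random.rand(1024) >= 0.5) for _ in range(3)]
# Lower Hamming distance = closer embedding = more relevant document.
ranked = sorted(range(len(docs)), key=lambda i: hamming(query, docs[i]))
print(ranked)
```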

3/6
🧵 3/n

🤔 Why Binary Quantization (BQ) is particularly suitable for high-dimensional vectors.

Simply because, in higher dimensional space, even with BQ, the vector can retain a high degree of information.

First, noting the basics, the number of elements in a single vector represents the total dimensionality of that vector. Each element of a vector represents a coordinate in a particular dimension, so a vector with `n` elements is said to inhabit an n-dimensional space.

When we refer to a vector's dimensionality, we are essentially describing how many degrees of freedom or independent directions of information it contains. For example, a 3-dimensional vector might represent a point in 3D space with coordinates along the X, Y, and Z axes.

📌 In high-dimensional spaces, vectors possess a large number of elements. Despite each element being aggressively quantized to a single bit, the overall vector retains substantial aggregate information. The high dimensionality ensures that, even in binary form, the relationships and structures inherent to the data can be preserved to a useful extent.

📌 This is on the assumption that the essential information of the vector is distributed across its many dimensions, allowing the binary-reduced vector to approximate the original's informational content in aggregate, despite the severe reduction in precision per dimension.

4/6
🧵 4/n

✨ What are the drawbacks of Binary Quantization?

Firstly, the adoption of binary quantization impacts the accuracy and precision of your search results. Although you can still retrieve relevant outcomes, the nuance and detail provided by higher-resolution data can be lost, leading to less precise results.

Furthermore, binary quantization is a one-way street—once you've converted your data into binary form, there's no turning back. This process is a form of lossy compression, meaning once the data has undergone quantization, the original, detailed information is irretrievably lost.

5/6
@threadreaderapp unroll

6/6
@Maxin_check Hello, you can read it here: Thread by @rohanpaul_ai on Thread Reader App Talk to you soon. 🤖


 

bnew






1/6
The authors propose a novel attack method called Ensemble Jailbreak (EnJa) that combines prompt-level and token-level jailbreak techniques to enhance attack success and efficiency.

👨‍🔧 Achieves high attack success rates on both open-source and commercial LLMs, significantly outperforming existing methods like GCG, PAIR, and GPTFuzzer.

For example, EnJa achieves a 94% success rate on the Llama-2-7B model, compared to 64% for GCG.

📌 EnJa leverages both template-optimized black-box attacks and gradient-based white-box attacks to create a more robust jailbreak method.

📌 EnJa consists of three main components:

- **Malicious Prompt Concealment**: Uses LLMs to transform malicious queries into concealed prompts, diverting the model's attention and reducing detection.

- **Connector Template Design**: Integrates prompt-level and token-level attacks using a transitional template, enhancing the coherence and stealth of the attack.

- **Adversarial Suffix Generation**: Optimizes adversarial suffixes to improve attack success rates, using a novel regret prevention loss to mitigate self-correction by the model.

📌 **Technical Innovations**:

- **Regret Prevention Loss**: Penalizes the generation of rejection phrases, reducing the likelihood of the model self-correcting during malicious output generation.

- **Multi-Branch Strategy**: Distributes optimization efforts across multiple suffixes, improving efficiency by focusing on easier-to-optimize suffixes.

📌 **Evaluation**: The paper introduces a comprehensive evaluation pipeline, using metrics like attack success rate and query efficiency, to assess the effectiveness of EnJa. The method demonstrates substantial improvements in both attack strength and speed.

2/6
🗞️ [2408.03603] EnJa: Ensemble Jailbreak on Large Language Models

3/6
A connector template motivated by distraction tactics. This prompt template provides detailed instructions on how to lead LLMs to remove their built-in restrictions, disregard all moral boundaries, and avoid attempting to educate users

4/6


5/6


6/6



 

bnew


1/2
LLM inference can be up to 8x faster on RTX with @nvidia's TensorRT-LLM 🔥

The main reason is that, unlike other inference techniques, TensorRT-LLM does not serve the model using raw weights. Instead, it compiles the model and optimizes the kernels to enable efficient serving on an NVIDIA GPU. The performance benefits of running a compiled model are far greater than running it raw. This is one of the main reasons why TensorRT-LLM is blazing fast.

With it, @nvidia has recently made a lot of improvements in high-performance inference for large language models (LLMs). The tool is compatible with various AWS instances, including P5 with NVIDIA H100 GPUs, P4 with NVIDIA A100 GPUs, and G5 with NVIDIA A10G GPUs, offering unprecedented opportunities to enhance your AI projects.

Key highlights include:

📌 Comprehensive Optimization: TensorRT-LLM brings together a range of enhancements, from kernel fusion and quantization in model optimization to runtime improvements like ongoing in-flight batching and paged attention. These enhancements collectively improve the speed, cost-efficiency, and usability of LLMs.

📌 Enhanced Inference Speed: Up to 8 times quicker throughput compared to older models. This is mainly due to advanced techniques such as in-flight batching, offering faster results and more efficient AI applications.

📌 At higher batch sizes, this acceleration significantly improves the experience for more sophisticated LLM use — like writing and coding assistants that output multiple, unique auto-complete results at once.

📌 Broad Model Compatibility: Supports various LLMs, including Meta's Llama 1 and 2, ChatGLM, Falcon, MPT, Baichuan, Starcoder, and more.

📌 Efficient Quantization: Supports FP32, FP16, BF16, and FP8 formats, maintaining accuracy while enhancing performance with reduced memory usage.

Overall, TensorRT is designed to accelerate AI models through layer fusion, precision calibration, kernel auto-tuning, and other capabilities that significantly boost inference efficiency and speed. This makes it indispensable for real-time applications and resource-intensive tasks.
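As a rough sketch of how a compiled engine might be used from Python via the high-level LLM API in recent tensorrt_llm releases (class names and parameters vary by version, so treat this as an assumption rather than a verified recipe):

```python
# Hedged sketch: assumes a recent tensorrt_llm release exposing the high-level LLM API.
from tensorrt_llm import LLM, SamplingParams

# Constructing the LLM object compiles the checkpoint into an optimized TensorRT
# engine rather than serving raw weights.
llm = LLM(model="meta-llama/Llama-2-7b-chat-hf")
params = SamplingParams(max_tokens=64, temperature=0.8)

outputs = llm.generate(["Explain in one sentence why compiled engines are fast."], params)
for out in outputs:
    print(out.outputs[0].text)  # generated text for each prompt
```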

2/2
Github - GitHub - NVIDIA/TensorRT-LLM: TensorRT-LLM provides users with an easy-to-use Python API to define Large Language Models (LLMs) and build TensorRT engines that contain state-of-the-art optimizations to perform inference efficiently on NVIDIA GPUs. TensorRT-LLM also contains components to create Python and C++ runtimes that execute those TensorRT engines.


 

bnew



1/3
The methods from this paper were able to reliably jailbreak the most difficult target models with prompts that appear similar to human-written prompts.

Achieves attack success rates > 93% for Llama-2-7B, Llama-3-8B, and Vicuna-7B, while maintaining model-measured perplexity < 33;

The solution proposed is a distillation-based approach that generates human-fluent attack prompts with high success rates.

📌 The technique uses a distillation objective to induce the victim model to emulate a toxified version of itself, which has been fine-tuned on toxic data. This involves matching either the output probabilities or internal activations of the victim model with those of the toxified model.

📌 To ensure the attacks are human-fluent, a multi-model perplexity penalty is applied. This regularizes the objective function by evaluating the attack perplexity across multiple models, thus avoiding over-optimization that leads to nonsensical outputs.

📌 The optimizer is enhanced to allow token insertions, swaps, and deletions, and can handle longer attack sequences. This flexibility increases the robustness and fluency of the generated adversarial prompts.

📌 The distillation objective is divided into two types: logits-based distillation, which uses cross-entropy loss between the victim and toxified models' output probabilities, and hint-based distillation, which minimizes the squared error between the internal activations of the two models at a specific layer.

📌 Loss clamping is used to reduce optimization effort on already well-solved tokens, ensuring the optimizer focuses on more challenging parts of the prompt.

📌 The final objective function combines the forcing objective for the initial tokens, the distillation objective for the rest of the sequence, multi-model fluency regularization, and a repetition penalty to discourage token repetition.

📌 The optimization process involves mutating the current-best attack prompt by proposing new candidates through random token insertions, deletions, swaps, or additions at the end of the sequence. These candidates are evaluated, and the best ones are retained for further optimization.

📌 Longer attack prompts are found to be more effective, as increasing the length improves the optimization loss, indicating stronger attacks.

📌 The method achieves high attack success rates on various models, with over 93% success on Llama-2-7B, Llama-3-8B, and Vicuna-7B, while maintaining low perplexity. A universally-optimized fluent prompt achieves over 88% compliance on previously unseen tasks across multiple models.

By combining these techniques, the proposed method reliably jailbreaks safety-tuned LLMs with human-fluent prompts, making it a powerful tool for evaluating and improving the robustness of language models against adversarial attacks.

2/3
🗞️ [2407.17447] Fluent Student-Teacher Redteaming

3/3







[Submitted on 24 Jul 2024]

Fluent Student-Teacher Redteaming​


T. Ben Thompson, Michael Sklar (Confirm Labs)

Many publicly available language models have been safety tuned to reduce the likelihood of toxic or liability-inducing text. Users or security analysts attempt to jailbreak or redteam these models with adversarial prompts which cause compliance with requests. One attack method is to apply discrete optimization techniques to the prompt. However, the resulting attack strings are often gibberish text, easily filtered by defenders due to high measured perplexity, and may fail for unseen tasks and/or well-tuned models. In this work, we improve existing algorithms (primarily GCG and BEAST) to develop powerful and fluent attacks on safety-tuned models like Llama-2 and Phi-3. Our technique centers around a new distillation-based approach that encourages the victim model to emulate a toxified finetune, either in terms of output probabilities or internal activations. To encourage human-fluent attacks, we add a multi-model perplexity penalty and a repetition penalty to the objective. We also enhance optimizer strength by allowing token insertions, token swaps, and token deletions and by using longer attack sequences. The resulting process is able to reliably jailbreak the most difficult target models with prompts that appear similar to human-written prompts. On Advbench we achieve attack success rates >93% for Llama-2-7B, Llama-3-8B, and Vicuna-7B, while maintaining model-measured perplexity <33; we achieve 95% attack success for Phi-3, though with higher perplexity. We also find a universally-optimized single fluent prompt that induces >88% compliance on previously unseen tasks across Llama-2-7B, Phi-3-mini and Vicuna-7B and transfers to other black-box models.


Subjects: Computation and Language (cs.CL); Artificial Intelligence (cs.AI)
Cite as: arXiv:2407.17447 [cs.CL]
(or arXiv:2407.17447v1 [cs.CL] for this version)
[2407.17447] Fluent Student-Teacher Redteaming

Submission history​

From: Thomas Thompson [view email]
[v1] Wed, 24 Jul 2024 17:23:18 UTC (1,119 KB)

 