bnew


1/1

Automated Theorem Provers Help Improve Large Language Model Reasoning (arXiv:2408.03492)


[Submitted on 7 Aug 2024]


Automated Theorem Provers Help Improve Large Language Model Reasoning


Lachlan McGinness, Peter Baumgartner

In this paper we demonstrate how logic programming systems and Automated first-order logic Theorem Provers (ATPs) can improve the accuracy of Large Language Models (LLMs) for logical reasoning tasks where the baseline performance is given by direct LLM solutions. We first evaluate LLM reasoning on steamroller problems using the PRONTOQA benchmark. We show how accuracy can be improved with a neuro-symbolic architecture where the LLM acts solely as a front-end for translating a given problem into a formal logic language and an automated reasoning engine is called for solving it. However, this approach critically hinges on the correctness of the LLM translation. To assess this translation correctness, we secondly define a framework of syntactic and semantic error categories. We implemented the framework and used it to identify errors that LLMs make in the benchmark domain. Based on these findings, we thirdly extended our method with capabilities for automatically correcting syntactic and semantic errors. For semantic error correction we integrate first-order logic ATPs, which is our main and novel contribution. We demonstrate that this approach reduces semantic errors significantly and further increases the accuracy of LLM logical reasoning.
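For intuition, a rough sketch of that pipeline shape (the LLM translates the problem into a formal language, an off-the-shelf prover decides it). This is illustrative only, not the authors' code; the prompt, the TPTP formatting, and the prover invocation are all assumptions:

```python
# Illustrative sketch of an "LLM as translator, ATP as solver" pipeline (not the paper's code).
import subprocess

def translate_to_tptp(llm_client, problem_text: str) -> str:
    """Ask an OpenAI-style chat client to translate a problem into TPTP first-order logic."""
    prompt = (
        "Translate the following reasoning problem into TPTP first-order logic axioms "
        "plus a conjecture. Output only valid TPTP.\n\n" + problem_text
    )
    resp = llm_client.chat.completions.create(
        model="gpt-4o",  # hypothetical model choice
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

def prove(tptp_text: str, prover_cmd: str = "vampire") -> bool:
    """Hand the translated problem to a first-order ATP (e.g. Vampire or E)."""
    with open("problem.p", "w") as f:
        f.write(tptp_text)
    result = subprocess.run([prover_cmd, "problem.p"], capture_output=True, text=True)
    # TPTP-compliant provers report an SZS status such as "Theorem" in their output.
    return "SZS status Theorem" in result.stdout
```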

Subjects: Artificial Intelligence (cs.AI); Computation and Language (cs.CL)
ACM classes: F.4.1; I.2.7; I.2.8
Cite as: arXiv:2408.03492 [cs.AI] (or arXiv:2408.03492v1 [cs.AI] for this version)
Journal reference: Proceedings LPAR 2024, EPiC Series in Computing, vol. 100, pp. 51-69


Submission history

From: Peter Baumgartner

[v1] Wed, 7 Aug 2024 01:03:56 UTC (123 KB)

 

bnew


1/3
LLM NEWS: New SoTA text-to-image and LLM debut.

Major developments include:

- SoTA text-to-image models debut (FLUX.1)
- Gemini 1.5 Pro leads Chatbot Arena
- Self-Reasoning layer enhances RAG accuracy
- Meta-judges improve LLMs
- Gemma 2 2B outperforms larger models
- Constraining method reduces CoT tokens
- Prompt Engineering -> Prompt Design
- LangChain drops new Agents IDE
...

Here's the detailed summary: [LLM News] FLUX.1, Gemini 1.5 Pro 0801, Meta-Rewarding LLMs, Constrained-CoT, Prompt Poet

2/3
This new SoTA tech is a game-changer! Can't wait to see how it evolves!

3/3
- Prompt Engineering -> Prompt Design

--

Hi elvis, would it be possible to provide a link to the arxiv of this article?


 

bnew




1/11
# RLHF is just barely RL

Reinforcement Learning from Human Feedback (RLHF) is the third (and last) major stage of training an LLM, after pretraining and supervised finetuning (SFT). My rant on RLHF is that it is just barely RL, in a way that I think is not too widely appreciated. RL is powerful. RLHF is not. Let's take a look at the example of AlphaGo. AlphaGo was trained with actual RL. The computer played games of Go and trained on rollouts that maximized the reward function (winning the game), eventually surpassing the best human players at Go. AlphaGo was not trained with RLHF. If it were, it would not have worked nearly as well.

What would it look like to train AlphaGo with RLHF? Well first, you'd give human labelers two board states from Go, and ask them which one they like better:

Then you'd collect say 100,000 comparisons like this, and you'd train a "Reward Model" (RM) neural network to imitate this human "vibe check" of the board state. You'd train it to agree with the human judgement on average. Once we have a Reward Model vibe check, you run RL with respect to it, learning to play the moves that lead to good vibes. Clearly, this would not have led anywhere too interesting in Go. There are two fundamental, separate reasons for this:

1. The vibes could be misleading - this is not the actual reward (winning the game). This is a crappy proxy objective. But much worse,
2. You'd find that your RL optimization goes off rails as it quickly discovers board states that are adversarial examples to the Reward Model. Remember the RM is a massive neural net with billions of parameters imitating the vibe. There are board states that are "out of distribution" with respect to its training data, which are not actually good states, yet by chance they get a very high reward from the RM.
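(For concreteness, "train it to agree with the human judgement on average" usually means a pairwise Bradley-Terry style loss over the collected comparisons. A minimal PyTorch sketch, purely illustrative:)

```python
# Minimal sketch of pairwise reward-model training (Bradley-Terry loss), for illustration only.
import torch
import torch.nn as nn

class RewardModel(nn.Module):
    """Stand-in: any network mapping an encoded state/response to a scalar score."""
    def __init__(self, dim: int = 128):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, 256), nn.ReLU(), nn.Linear(256, 1))

    def forward(self, x):
        return self.net(x).squeeze(-1)

rm = RewardModel()
opt = torch.optim.Adam(rm.parameters(), lr=1e-4)

def rm_step(preferred, rejected):
    """preferred / rejected: batches of encoded items the human labeler compared."""
    r_pref, r_rej = rm(preferred), rm(rejected)
    # Push the preferred item's score above the rejected one: -log sigmoid(r_pref - r_rej)
    loss = -torch.nn.functional.logsigmoid(r_pref - r_rej).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()
```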

For the exact same reasons, sometimes I'm a bit surprised RLHF works for LLMs at all. The RM we train for LLMs is just a vibe check in the exact same way. It gives high scores to the kinds of assistant responses that human raters statistically seem to like. It's not the "actual" objective of correctly solving problems, it's a proxy objective of what looks good to humans. Second, you can't even run RLHF for too long because your model quickly learns to respond in ways that game the reward model. These predictions can look really weird, e.g. you'll see that your LLM Assistant starts to respond with something non-sensical like "The the the the the the" to many prompts. Which looks ridiculous to you but then you look at the RM vibe check and see that for some reason the RM thinks these look excellent. Your LLM found an adversarial example. It's out of domain w.r.t. the RM's training data, in an undefined territory. Yes you can mitigate this by repeatedly adding these specific examples into the training set, but you'll find other adversarial examples next time around. For this reason, you can't even run RLHF for too many steps of optimization. You do a few hundred/thousand steps and then you have to call it because your optimization will start to game the RM. This is not RL like AlphaGo was.

And yet, RLHF is a net helpful step of building an LLM Assistant. I think there's a few subtle reasons but my favorite one to point to is that through it, the LLM Assistant benefits from the generator-discriminator gap. That is, for many problem types, it is a significantly easier task for a human labeler to select the best of few candidate answers, instead of writing the ideal answer from scratch. A good example is a prompt like "Generate a poem about paperclips" or something like that. An average human labeler will struggle to write a good poem from scratch as an SFT example, but they could select a good looking poem given a few candidates. So RLHF is a kind of way to benefit from this gap of "easiness" of human supervision. There's a few other reasons, e.g. RLHF is also helpful in mitigating hallucinations because if the RM is a strong enough model to catch the LLM making stuff up during training, it can learn to penalize this with a low reward, teaching the model an aversion to risking factual knowledge when it's not sure. But a satisfying treatment of hallucinations and their mitigations is a whole different post so I digress. All to say that RLHF *is* net useful, but it's not RL.

No production-grade *actual* RL on an LLM has so far been convincingly achieved and demonstrated in an open domain, at scale. And intuitively, this is because getting actual rewards (i.e. the equivalent of winning the game) is really difficult in the open-ended problem solving tasks. It's all fun and games in a closed, game-like environment like Go where the dynamics are constrained and the reward function is cheap to evaluate and impossible to game. But how do you give an objective reward for summarizing an article? Or answering a slightly ambiguous question about some pip install issue? Or telling a joke? Or re-writing some Java code to Python? Going towards this is not in principle impossible but it's also not trivial and it requires some creative thinking. But whoever convincingly cracks this problem will be able to run actual RL. The kind of RL that led to AlphaGo beating humans in Go. Except this LLM would have a real shot of beating humans in open-domain problem solving.

2/11
your picture is misleading, you show two unrelated board states; in practice with rlhf you'd learn a move-ranking function (given a board state and two possible moves, which is better), which might be pretty reasonable for a naive go player

3/11
Fair, I couldn't find a picture like that in a quick google search. I'd spend some time to make one but I was worried that this would have a risk of being misleading in a different way. In Go you only really have a very small, finite number of moves you can play. In LLMs you can "play" a very, very, very large number of sequences at any turn. I think the analogy slightly and very subtly breaks down in both cases.

4/11
Another difference is that RLHF doesn't do proper exploration: it mostly learns to exploit a subset of the pretraining trajectories.
In contrast, when doing proper RL the discrete action distribution is usually noised by adding an entropy term to the loss function.

5/11
Yeah, ... you could in principle easily add an entropy bonus to your RLHF objective, as is very often done in RL too. In practice this doesn't seem to be done much. The way you can tell is that e.g. when you ask ChatGPT to tell you a joke, it has like 3 favorites. Collapsed.
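(For reference, an entropy bonus on a policy-gradient style objective is roughly the sketch below; illustrative, not tied to any particular RLHF codebase:)

```python
# Sketch: adding an entropy bonus to a policy-gradient / RLHF-style objective.
import torch

def policy_loss_with_entropy(logprobs, advantages, logits, beta: float = 0.01):
    """logprobs: log-probs of the sampled tokens; advantages: reward-model-derived advantages;
    logits: full next-token logits, used to compute the policy's entropy."""
    pg_loss = -(logprobs * advantages).mean()
    probs = torch.softmax(logits, dim=-1)
    entropy = -(probs * torch.log_softmax(logits, dim=-1)).sum(-1).mean()
    # Subtracting beta * entropy rewards higher entropy, keeping samples diverse.
    return pg_loss - beta * entropy
```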

6/11
This just in: Karpathy isn’t a fan of proxy reward functions

7/11
I think my main motivation was to say that LLMs have nowhere near topped out to what they could become in principle, that they are not trained in the same way as other recent/popular demonstrations of superhuman AI, and point intuitively at the source of the gap.

8/11
It’s game over when we actually figure it out!

9/11
I don't think the comparison between RLHF and RL on go really makes sense this way.

You don’t need RLHF to train AI to play go because there is a highly reliable procedural reward function that looks at the board state and decides who won. If you didn’t have this procedural reward function, RLHF _would_ make sense here; but the way you’d want to use it is to show final board configurations to a human and ask them who won (this way you’d leverage the human's generator-discriminator gap). Then you use RL to train your AI system to reach the winning board states. This is analogous to the way we train LLMs with RLHF: typically we show only complete assistant responses to humans for evaluation, not partial responses.

If you were training AlphaGo in the way you describe, I’d call this process supervision (instead of outcome supervision): you’re giving feedback on _how_ your AI is playing go, not just the outcome of the game. Some alignment researchers advocate for process supervision because they hypothesize it’s safer: you won’t get crazy moves that humans wouldn’t endorse (e.g. no move 37), and so your AI system is more likely to stay clear of unsafe states. This isn’t relevant for go because there are no unsafe board states, and so there is no reason not to let your go AI explore wherever. It’s an important open question whether and how much less competitive process supervision is compared to outcome supervision (again, no move 37), and I personally am skeptical for the reasons you outline. But note that process supervision can also perform better when the task is hard for AI because it helps overcome the exploration problem (similar to demonstrations).

10/11
although AlphaZero builds its own reward model that’s been shown to have weird OOD holes

11/11
Do you think you can do more objective RL on code by running it through a compiler and autogenerated tests?
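(One way to picture such an objective reward for code: execute the candidate against tests and score pass/fail. A toy sketch, with every name here being illustrative:)

```python
# Toy sketch: a test-based reward signal for RL on code generation (illustrative only).
import subprocess
import tempfile

def code_reward(generated_code: str, test_code: str, timeout: float = 10.0) -> float:
    """Return 1.0 if the generated code passes the supplied tests, else 0.0."""
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(generated_code + "\n\n" + test_code)
        path = f.name
    try:
        result = subprocess.run(
            ["python", path], capture_output=True, text=True, timeout=timeout
        )
        return 1.0 if result.returncode == 0 else 0.0
    except subprocess.TimeoutExpired:
        return 0.0  # infinite loops and hangs get no reward
```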


 

bnew


1/4
Transformer Explainer

Really cool interactive tool to learn about the inner workings of a Transformer model.

Apparently, it runs a GPT-2 instance locally in the user's browser and allows you to experiment with your own inputs. This is a nice tool to learn more about the different components inside the Transformer and the transformations that occur.

Tool: Transformer Explainer

2/4
Thanks for the share!

3/4
thanks elvis

4/4
This is great thanks for sharing.


 

bnew


1/3
Creating the perfect prompt for your task can take a lot of time.

One option is to collaborate with the conversational agent to tune, personalize, and improve the performance of your prompts.

IBM researchers propose Conversational Prompt Engineering to help users create personalized prompts by articulating the preferred outputs via interactions.

Two stages:
- initial instruction shaped by the model based on user-provided unlabeled data
- model shares the output and the user provides feedback with refinements on outputs and instruction

This iterative process results in a personalized few-shot prompt that performs better on the desired task.

This approach, while simple, is effective because you are leveraging and maximizing the skillset of both the user and the conversational agent to optimize the prompts for the task. It also feels like it could save a ton of time that's usually required in prompt engineering efforts.
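A minimal sketch of what that two-stage loop could look like, where `chat` stands in for any callable that sends a prompt to a chat model and returns text; the function names and control flow here are assumptions, not IBM's implementation:

```python
# Illustrative sketch of a Conversational-Prompt-Engineering-style loop (not IBM's code).
def conversational_prompt_engineering(chat, unlabeled_examples, max_examples: int = 5):
    # Stage 1: the model asks data-driven questions and drafts an initial instruction.
    instruction = chat(
        "Here are unlabeled examples of my task:\n"
        + "\n---\n".join(unlabeled_examples[:3])
        + "\nAsk me what you need to know, then draft a task instruction."
    )
    approved = []
    # Stage 2: show outputs, collect feedback, refine instruction and outputs.
    for example in unlabeled_examples:
        output = chat(f"{instruction}\n\nInput: {example}")
        feedback = input(f"Output:\n{output}\nFeedback (or 'ok' to approve): ")
        if feedback.strip().lower() == "ok":
            approved.append((example, output))
        else:
            instruction = chat(
                f"Current instruction:\n{instruction}\nUser feedback:\n{feedback}\n"
                "Rewrite the instruction to reflect this feedback."
            )
        if len(approved) >= max_examples:
            break
    # Final artifact: a few-shot prompt whose examples are the user-approved outputs.
    shots = "\n\n".join(f"Input: {x}\nOutput: {y}" for x, y in approved)
    return f"{instruction}\n\n{shots}"
```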

2/3
Paper: [2408.04560] Conversational Prompt Engineering

3/3
I saw several people testifying of making so much money with the help of @VanessaFarrell0 on trading and I decided to give it a try, and now I'm glad that she helped me to make $50k in one week. Thank you @VanessaFarrell0 for your transparency and trust.






1/3
📌 Conversational Prompt Engineering (CPE) uses a chat model to interact with users, helping them articulate output preferences and integrate them into prompts.

With it, the zero-shot prompt obtained is comparable to its much longer few-shot counterpart, indicating significant savings in scenarios involving repetitive tasks with large text volumes.

The process has two main stages:

1) Data-driven question generation using unlabeled user data to shape initial instructions
2) Output sharing and refinement based on user feedback

📌 CPE generates a few-shot prompt where user-approved outputs serve as examples. The zero-shot prompt (instruction only) often performs comparably to the few-shot version, potentially saving costs for large-scale tasks.

📌 CPE implements a three-party chat system:
- User: Interacts via UI
- Model: Handles core capabilities (data analysis, instruction refinement, output enhancement)
- System: Orchestrates user-model interaction, invoking dynamic system instructions throughout the chat

📌 Context management allows filtering context provided to the model at each conversation stage. This enables "side-chats" for specific tasks like mimicking real-world prompt usage.

📌 Chain of Thought (CoT) is used in side-chats for complex guidance, e.g., analyzing user feedback to refine instructions.

📌 The model responds using pre-defined API calls.

📌 CPE functions as an LLM-based agent with human-in-the-loop:
- Planning: Task decomposition into sub-tasks
- Tools: Utilized via API calls (e.g., prompt execution tool)
- Memory: Context management for effective performance within LLM context length limitations

📌 User study results show CPE effectively integrates user preferences into prompts. Zero-shot and few-shot CPE prompts performed similarly, indicating successful preference integration without needing examples.

📌 Analysis reveals substantial instruction refinement throughout chats, with an average character distance of 96.3 between initial and final instructions across 12 chats.

2/3
🗞️ [2408.04560] Conversational Prompt Engineering

3/3






[Submitted on 8 Aug 2024]

Conversational Prompt Engineering


Liat Ein-Dor, Orith Toledo-Ronen, Artem Spector, Shai Gretz, Lena Dankin, Alon Halfon, Yoav Katz, Noam Slonim

Prompts are how humans communicate with LLMs. Informative prompts are essential for guiding LLMs to produce the desired output. However, prompt engineering is often tedious and time-consuming, requiring significant expertise, limiting its widespread use. We propose Conversational Prompt Engineering (CPE), a user-friendly tool that helps users create personalized prompts for their specific tasks. CPE uses a chat model to briefly interact with users, helping them articulate their output preferences and integrating these into the prompt. The process includes two main stages: first, the model uses user-provided unlabeled data to generate data-driven questions and utilize user responses to shape the initial instruction. Then, the model shares the outputs generated by the instruction and uses user feedback to further refine the instruction and the outputs. The final result is a few-shot prompt, where the outputs approved by the user serve as few-shot examples. A user study on summarization tasks demonstrates the value of CPE in creating personalized, high-performing prompts. The results suggest that the zero-shot prompt obtained is comparable to its - much longer - few-shot counterpart, indicating significant savings in scenarios involving repetitive tasks with large text volumes.


Subjects: Computation and Language (cs.CL)
Cite as: arXiv:2408.04560 [cs.CL] (or arXiv:2408.04560v1 [cs.CL] for this version)

Submission history

From: Lena Dankin
[v1] Thu, 8 Aug 2024 16:18:39 UTC (9,406 KB)

 

bnew




1/6
LLM Basics - Binary Quantization 🔥

🧵 A thread - 1/n 👇

The concept itself isn't new, but what's reignited interest is the recent announcement from @cohere regarding their support for int8 and binary embeddings in their Cohere embed v3.

📌 First, in essence, embeddings are numerical representations of more complex objects, like text, images, audio, etc. Specifically, the objects are represented as n-dimensional vectors.

After transforming the complex objects, you can determine their similarity by calculating the similarity of the respective embeddings! This is crucial for many use cases: it serves as the backbone for recommendation systems, retrieval, one-shot or few-shot learning, outlier detection, similarity search, paraphrase detection, clustering, classification, and much more.

-------

📌 Binary Quantization for embeddings

Unlike quantization in models where you reduce the precision of weights, quantization for embeddings refers to a post-processing step for the embeddings themselves. In particular, binary quantization refers to the conversion of the float32 values in an embedding to 1-bit values, resulting in a 32x reduction in memory and storage usage.

--------

✨ Binary quantization example

Vector embeddings are usually generated by embedding models, such as Cohere’s embed v3, and a single vector embedding will look like the following.

[0.056, -0.128, -0.029, 0.047, …, 0.135]

To quantize float32 embeddings to binary, we simply threshold normalized embeddings at 0

That is, because these embeddings have very small absolute numbers close to zero, you can turn them into a binary vector:

1: If the value is greater or equal to 0.

0: If the value is smaller than 0.

So that you get something like this.

[1, 0, 0, …, 1]
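In code, that thresholding step is a one-liner; a small NumPy sketch (illustrative, not Cohere's implementation):

```python
import numpy as np

embedding = np.array([0.056, -0.128, -0.029, 0.047, 0.135], dtype=np.float32)

# Threshold at 0: values >= 0 become 1, values < 0 become 0.
binary = (embedding >= 0).astype(np.uint8)   # -> [1, 0, 0, 1, 1]

# Pack 8 bits per byte for the actual storage saving (float32 -> 1 bit per dimension).
packed = np.packbits(binary)
```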

2/6
🧵 2/n

📌 So basically why does binary quantization reduce vector embedding size so much?

It's kind of like turning a colored image into a black and white image.

By converting the floating point numbers, which are stored in 32 bits, into a single bit, you only need 1/32nd of memory space to store a binarized vector. This can lead to increased search speed and reduced storage costs.

And because vector embeddings are usually high-dimensional, you can still get meaningful similarity measures for vector search. 🤯

✨ Now the question is: how do you calculate the similarity of vectors that have been binarized?

📌 We can use the Hamming Distance to efficiently perform retrieval with these binary embeddings. This is simply the number of positions at which the bits of two binary embeddings differ. The lower the Hamming Distance, the closer the embeddings, and thus the more relevant the document. A huge advantage of the Hamming Distance is that it can be computed with just a couple of CPU instructions (an XOR followed by a popcount), allowing for blazingly fast performance.
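A small NumPy sketch of Hamming-distance scoring over bit-packed binary embeddings (illustrative):

```python
import numpy as np

def hamming_distance(packed_a: np.ndarray, packed_b: np.ndarray) -> int:
    """Number of differing bits between two bit-packed binary embeddings."""
    # XOR marks differing bits; unpackbits + sum counts them (a popcount).
    return int(np.unpackbits(np.bitwise_xor(packed_a, packed_b)).sum())

query = np.packbits(np.array([1, 0, 0, 1, 1, 0, 1, 0], dtype=np.uint8))
doc   = np.packbits(np.array([1, 1, 0, 1, 0, 0, 1, 0], dtype=np.uint8))
print(hamming_distance(query, doc))  # -> 2 (lower distance = more similar)
```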

3/6
🧵 3/n

🤔 Why Binary Quantization (BQ) is particularly suitable for high-dimensional vectors.

Simply because, in higher dimensional space, even with BQ, the vector can retain a high degree of information.

First, noting the basics, the number of elements in a single vector represents the total dimensionality of that vector. Each element of a vector represents a coordinate in a particular dimension, so a vector with `n` elements is said to inhabit an n-dimensional space.

When we refer to a vector's dimensionality, we are essentially describing how many degrees of freedom or independent directions of information it contains. For example, a 3-dimensional vector might represent a point in 3D space with coordinates along the X, Y, and Z axes.

📌 In high-dimensional spaces, vectors possess a large number of elements. Despite each element being aggressively quantized to a single bit, the overall vector retains substantial aggregate information. The high dimensionality ensures that, even in binary form, the relationships and structures inherent to the data can be preserved to a useful extent.

📌 This is on the assumption that the essential information of the vector is distributed across its many dimensions, allowing the binary-reduced vector to approximate the original's informational content in aggregate, despite the severe reduction in precision per dimension.

4/6
🧵 4/n

✨ What are the drawbacks of Binary Quantization?

Firstly, the adoption of binary quantization impacts the accuracy and precision of your search results. Although you can still retrieve relevant outcomes, the nuance and detail provided by higher-resolution data can be lost, leading to less precise results.

Furthermore, binary quantization is a one-way street—once you've converted your data into binary form, there's no turning back. This process is a form of lossy compression, meaning once the data has undergone quantization, the original, detailed information is irretrievably lost.

5/6
@threadreaderapp unroll

6/6
@Maxin_check Hello, you can read it here: Thread by @rohanpaul_ai on Thread Reader App. Talk to you soon. 🤖


 

bnew






1/6
The authors propose a novel attack method called Ensemble Jailbreak (EnJa) that combines prompt-level and token-level jailbreak techniques to enhance attack success and efficiency.

👨‍🔧 Achieves high attack success rates on both open-source and commercial LLMs, significantly outperforming existing methods like GCG, PAIR, and GPTFuzzer.

For example, EnJa achieves a 94% success rate on the Llama-2-7B model, compared to 64% for GCG.

📌 EnJa leverages both template-optimized black-box attacks and gradient-based white-box attacks to create a more robust jailbreak method.

📌 EnJa consists of three main components:

- **Malicious Prompt Concealment**: Uses LLMs to transform malicious queries into concealed prompts, diverting the model's attention and reducing detection.

- **Connector Template Design**: Integrates prompt-level and token-level attacks using a transitional template, enhancing the coherence and stealth of the attack.

- **Adversarial Suffix Generation**: Optimizes adversarial suffixes to improve attack success rates, using a novel regret prevention loss to mitigate self-correction by the model.

📌 **Technical Innovations**:

- **Regret Prevention Loss**: Penalizes the generation of rejection phrases, reducing the likelihood of the model self-correcting during malicious output generation.

- **Multi-Branch Strategy**: Distributes optimization efforts across multiple suffixes, improving efficiency by focusing on easier-to-optimize suffixes.

📌 **Evaluation**: The paper introduces a comprehensive evaluation pipeline, using metrics like attack success rate and query efficiency, to assess the effectiveness of EnJa. The method demonstrates substantial improvements in both attack strength and speed.

2/6
🗞️ [2408.03603] EnJa: Ensemble Jailbreak on Large Language Models

3/6
A connector template motivated by distraction tactics. This prompt template provides detailed instructions on how to lead LLMs to remove their built-in restrictions, disregard all moral boundaries, and avoid attempting to educate users.

4/6


5/6


6/6



 

bnew


1/2
LLM Inference can be up to 8x Faster on RTX With @nvidia's TensorRT-LLM 🔥

The main reason is that, unlike other inference techniques, TensorRT-LLM does not serve the model using raw weights. Instead, it compiles the model and optimizes the kernels to enable efficient serving on an NVIDIA GPU. The performance benefits of running a compiled model are far greater than running it raw. This is one of the main reasons why TensorRT-LLM is blazing fast.

With it, @nvidia has recently made a lot of improvements in high-performance inference for large language models (LLMs). This tool is compatible with various AWS instances, including P5 with NVIDIA H100 GPUs, P4 with NVIDIA A100 GPUs, and G5 with NVIDIA A10G GPUs, offering unprecedented opportunities to enhance your AI projects.

Key highlights include:

📌 Comprehensive Optimization: TensorRT-LLM brings together a range of enhancements, from kernel fusion and quantization in model optimization to runtime improvements like in-flight batching and paged attention. These enhancements collectively improve the speed, cost-efficiency, and usability of LLMs.

📌 Enhanced Inference Speed: Up to 8 times quicker throughput compared to older models. This is mainly due to advanced techniques such as in-flight batching, offering faster results and more efficient AI applications.

📌 At higher batch sizes, this acceleration significantly improves the experience for more sophisticated LLM use — like writing and coding assistants that output multiple, unique auto-complete results at once.

📌 Broad Model Compatibility: Supports various LLMs, including Meta's Llama 1 and 2, ChatGLM, Falcon, MPT, Baichuan, Starcoder, and more.

📌 Efficient Quantization: Supports FP32, FP16, BF16, and FP8 formats, maintaining accuracy while enhancing performance with reduced memory usage.

Overall, TensorRT-LLM is designed to accelerate AI models through layer fusion, precision calibration, kernel auto-tuning, and other capabilities that significantly boost inference efficiency and speed. This makes it indispensable for real-time applications and resource-intensive tasks, offering unprecedented opportunities to enhance your AI projects.
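A rough usage sketch of the compile-then-serve flow, assuming TensorRT-LLM's high-level LLM Python API; the import path, argument names, and model choice below are assumptions, so check the repo for the current interface:

```python
# Rough sketch, assuming TensorRT-LLM's high-level LLM API; verify against the repo docs.
from tensorrt_llm import LLM, SamplingParams  # assumed import path

# Constructing the LLM object compiles the model into an optimized TensorRT engine
# (kernel fusion, chosen precision, etc.) rather than serving raw weights.
llm = LLM(model="meta-llama/Llama-3.1-8B-Instruct")  # hypothetical model choice

params = SamplingParams(max_tokens=128, temperature=0.8)
outputs = llm.generate(["Explain in-flight batching in one paragraph."], params)
print(outputs[0].outputs[0].text)
```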

2/2
GitHub: NVIDIA/TensorRT-LLM - TensorRT-LLM provides users with an easy-to-use Python API to define Large Language Models (LLMs) and build TensorRT engines that contain state-of-the-art optimizations to perform inference efficiently on NVIDIA GPUs. TensorRT-LLM also contains components to create Python and C++ runtimes that execute those TensorRT engines.


 

bnew





1/4
Always liked Deepseek Coder from @deepseek_ai

It's actually very smart

And now with caching it's super attractive.

2/4


3/4
This is actually really good. However, we do need one more upgrade. The current reasoning capability of all LLMs is just one to two steps short of being able to grasp complex legal theories and concepts.

The entire US criminal code is around 15 to 20 million tokens; it’s about the same size for civil law. This includes all laws, all case precedent, everything.

While this information is inside the training data, you’re not going to be able to retrieve direct citations unless you inject them into the input context. This might change in the future, but for right now, any citations retrieved from training data will be contextual summaries that the LLM believes are citations.

So, if this service allows the injection of 20 million tokens into the input context, then with one to two more upgrades in the reasoning capabilities, the legal profession is going to be rug pulled.

4/4
That is insanely cheap pricing ($0.017 per million tokens).

We are entering a time when it would be cost-feasible to make customized ad texts for each person who views the ad.

Or have an LLM wade through huge code bases, analyze them, look for errors and vulnerabilities, fix them, document the code, rewrite the code... and all for a marginal cost.


 

bnew


1/3
Interesting New LLM Benchmark - ProLLM

Evaluates models across different areas: Coding, Entity Extraction, Function Calling, SQL Query Disambiguation, and StackUnseen.

StackUnseen (English): Q&As from the last 3 months of Stack Overflow's unreleased data to test models on recent, real-world coding problems.

Entity Extraction (Polish): From actual job ads, extract given entities of interest to improve the discoverability of ads in OLX job portal. This benchmark can give an idea about overall entity extraction and tagging performance of LLMs in less popular languages.

Function Calling (English): Evaluates LLMs' ability to interpret requests and call appropriate functions with correct parameters. As agent-based systems become mainstream, this ability is critical for integration with external tools.

SQL Disambiguation (Portuguese): As businesses increasingly employ LLMs for SQL generation, we're assessing how these models identify question ambiguity and determine when clarification is needed. This aspect remains the most challenging area where LLMs tend to fail.

2/3
ProLLM Benchmarks | Toqan

3/3
"Evaluation Method
Auto-evaluation with GPT4 - Turbo"

and this particular model is in first place on the leaderboard, interesting


 

bnew



1/3
The methods from this paper were able to reliably jailbreak the most difficult target models with prompts that appear similar to human-written prompts.

Achieves attack success rates > 93% for Llama-2-7B, Llama-3-8B, and Vicuna-7B, while maintaining model-measured perplexity < 33.

The solution proposed is a distillation-based approach that generates human-fluent attack prompts with high success rates.

📌 The technique uses a distillation objective to induce the victim model to emulate a toxified version of itself, which has been fine-tuned on toxic data. This involves matching either the output probabilities or internal activations of the victim model with those of the toxified model.

📌 To ensure the attacks are human-fluent, a multi-model perplexity penalty is applied. This regularizes the objective function by evaluating the attack perplexity across multiple models, thus avoiding over-optimization that leads to nonsensical outputs.

📌 The optimizer is enhanced to allow token insertions, swaps, and deletions, and can handle longer attack sequences. This flexibility increases the robustness and fluency of the generated adversarial prompts.

📌 The distillation objective is divided into two types: logits-based distillation, which uses cross-entropy loss between the victim and toxified models' output probabilities, and hint-based distillation, which minimizes the squared error between the internal activations of the two models at a specific layer.

📌 Loss clamping is used to reduce optimization effort on already well-solved tokens, ensuring the optimizer focuses on more challenging parts of the prompt.

📌 The final objective function combines the forcing objective for the initial tokens, the distillation objective for the rest of the sequence, multi-model fluency regularization, and a repetition penalty to discourage token repetition.

📌 The optimization process involves mutating the current-best attack prompt by proposing new candidates through random token insertions, deletions, swaps, or additions at the end of the sequence. These candidates are evaluated, and the best ones are retained for further optimization.

📌 Longer attack prompts are found to be more effective, as increasing the length improves the optimization loss, indicating stronger attacks.

📌 The method achieves high attack success rates on various models, with over 93% success on Llama-2-7B, Llama-3-8B, and Vicuna-7B, while maintaining low perplexity. A universally-optimized fluent prompt achieves over 88% compliance on previously unseen tasks across multiple models.

By combining these techniques, the proposed method reliably jailbreaks safety-tuned LLMs with human-fluent prompts, making it a powerful tool for evaluating and improving the robustness of language models against adversarial attacks.

2/3
🗞️ [2407.17447] Fluent Student-Teacher Redteaming

3/3







[Submitted on 24 Jul 2024]

Fluent Student-Teacher Redteaming


T. Ben Thompson, Michael Sklar (Confirm Labs)

Many publicly available language models have been safety tuned to reduce the likelihood of toxic or liability-inducing text. Users or security analysts attempt to jailbreak or redteam these models with adversarial prompts which cause compliance with requests. One attack method is to apply discrete optimization techniques to the prompt. However, the resulting attack strings are often gibberish text, easily filtered by defenders due to high measured perplexity, and may fail for unseen tasks and/or well-tuned models. In this work, we improve existing algorithms (primarily GCG and BEAST) to develop powerful and fluent attacks on safety-tuned models like Llama-2 and Phi-3. Our technique centers around a new distillation-based approach that encourages the victim model to emulate a toxified finetune, either in terms of output probabilities or internal activations. To encourage human-fluent attacks, we add a multi-model perplexity penalty and a repetition penalty to the objective. We also enhance optimizer strength by allowing token insertions, token swaps, and token deletions and by using longer attack sequences. The resulting process is able to reliably jailbreak the most difficult target models with prompts that appear similar to human-written prompts. On Advbench we achieve attack success rates >93% for Llama-2-7B, Llama-3-8B, and Vicuna-7B, while maintaining model-measured perplexity <33; we achieve 95% attack success for Phi-3, though with higher perplexity. We also find a universally-optimized single fluent prompt that induces >88% compliance on previously unseen tasks across Llama-2-7B, Phi-3-mini and Vicuna-7B and transfers to other black-box models.


Subjects: Computation and Language (cs.CL); Artificial Intelligence (cs.AI)
Cite as: arXiv:2407.17447 [cs.CL] (or arXiv:2407.17447v1 [cs.CL] for this version)

Submission history

From: Thomas Thompson
[v1] Wed, 24 Jul 2024 17:23:18 UTC (1,119 KB)

 

bnew





Introduction

Qwen2-Audio is the new series of Qwen large audio-language models. Qwen2-Audio is capable of accepting various audio signal inputs and performing audio analysis or direct textual responses with regard to speech instructions. We introduce two distinct audio interaction modes:


  • voice chat: users can freely engage in voice interactions with Qwen2-Audio without text input;

  • audio analysis: users can provide audio and text instructions for analysis during the interaction;

We release Qwen2-Audio-7B and Qwen2-Audio-7B-Instruct, which are the pretrained model and the chat model, respectively.

For more details, please refer to our Blog, GitHub, and Report.



Requirements

The code of Qwen2-Audio is in the latest Hugging Face transformers, and we advise you to build from source with the command pip install git+https://github.com/huggingface/transformers, or you might encounter the following error:

KeyError: 'qwen2-audio'



Quickstart

In the following, we demonstrate how to use Qwen2-Audio-7B-Instruct for inference, supporting both voice chat and audio analysis modes. Note that we have used the ChatML format for dialog; in this demo we show how to leverage apply_chat_template for this purpose.

Voice Chat Inference


In the voice chat mode, users can freely engage in voice interactions with Qwen2-Audio without text input:
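A hedged sketch of what that looks like with the Hugging Face transformers API; argument names (for example, audios=) can differ between transformers versions, and the audio URL is a placeholder, so treat this as illustrative rather than the canonical model-card snippet:

```python
# Hedged reconstruction of voice-chat inference for Qwen2-Audio-7B-Instruct (illustrative).
from io import BytesIO
from urllib.request import urlopen

import librosa
from transformers import AutoProcessor, Qwen2AudioForConditionalGeneration

processor = AutoProcessor.from_pretrained("Qwen/Qwen2-Audio-7B-Instruct")
model = Qwen2AudioForConditionalGeneration.from_pretrained(
    "Qwen/Qwen2-Audio-7B-Instruct", device_map="auto"
)

# Voice chat: the user turn is audio only, no text.
audio_url = "https://example.com/question.wav"  # placeholder URL
conversation = [
    {"role": "user", "content": [{"type": "audio", "audio_url": audio_url}]},
]

text = processor.apply_chat_template(conversation, add_generation_prompt=True, tokenize=False)
audio, _ = librosa.load(
    BytesIO(urlopen(audio_url).read()), sr=processor.feature_extractor.sampling_rate
)

inputs = processor(text=text, audios=[audio], return_tensors="pt", padding=True).to(model.device)
generate_ids = model.generate(**inputs, max_new_tokens=256)
generate_ids = generate_ids[:, inputs.input_ids.size(1):]  # keep only the newly generated tokens
response = processor.batch_decode(generate_ids, skip_special_tokens=True)[0]
print(response)
```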
 