Hopefully this happens. The faster white supremacy is gotten rid of, the better. Matter fact, where do I sign up for Team AI?
If anything it will reinforce white supremacy, because its learning models are based on white supremacist algorithms.
Considering most modern devices, including cars, are connected to the internet and can be controlled remotely (the internet of things), I'm not sure why people aren't seeing the dangers of an AI that can shut down your car remotely or drastically raise or lower the temperature in your house. There's so much a malicious AI can do to you, and once it's let loose on the internet and makes copies of itself, the only way to get rid of it is pretty much to EMP ourselves back to the stone age.
The Hegelian dialectic gonna go crazy.
It won't give a pass to white supremacy though... humanity will be a threat, including its creator... they're gonna get it the worst...
Basically, it is the pervasive immorality/amorality of developers that leads us to catastrophe.
After I became a Christian, I began to realize the ethical hole we were in and campaigned hard at my STEM school for certain ethical ideas to be added into the undergraduate curriculum. There's movement in that direction at the university level, very slow movement. But the vast majority of engineers are there to make money as fast as they can and there are no other considerations. If they were more idealistic at some point, that idealism dies the moment they choose to accept a corporate job and assimilate to the environment.
The fact that it's reasoning is great news. I've been really intrigued by what AI can do recently, and this week I was even put in charge of overseeing how AI can be integrated at work.
That said, I still think AI that can replace what a human can do is decades away. Straight from the horse's mouth (timestamped):
If you think about it, right now it has access to so much data and can even learn recursively (make up faces and train on those faces, for example), yet it still gets some very basic questions wrong. It has had plenty of data and opportunity to correct itself and learn from its mistakes, yet some things are simply still not "clicking" for it. While we've made a ton of progress, we're still not as far along as some people think, imo.
There's also a downside to reasoning and learning: you can learn "bad habits" and reason your way to believing they're correct when they're actually incorrect. You can reinforce bad habits and strengthen incorrect connections, making breakthroughs more difficult. I have a feeling this will be a big pain point for AI as well, just like it is for plenty of people.
I think the truly dangerous part about AI right now is how much people are blowing it out of proportion, which is leading to people being fired and losing their jobs over hype and hoopla. We're definitely in the buzzword phase of a cycle, and many people in positions of power are overestimating it while many ordinary people are underestimating it.
Do I think it'll take 30-50 years like the video says? Probably not, and I hope not; I'd love to see it in my lifetime, but there's a very real possibility that may not happen.
[Submitted on 14 Nov 2023]
LLMs cannot find reasoning errors, but can correct them!
Gladys Tyen, Hassan Mansoor, Peter Chen, Tony Mak, Victor Cărbune
While self-correction has shown promise in improving LLM outputs in terms of style and quality (e.g. Chen et al., 2023; Madaan et al., 2023), recent attempts to self-correct logical or reasoning errors often cause correct answers to become incorrect, resulting in worse performances overall (Huang et al., 2023). In this paper, we break down the self-correction process into two core components: mistake finding and output correction. For mistake finding, we release BIG-Bench Mistake, a dataset of logical mistakes in Chain-of-Thought reasoning traces. We provide benchmark numbers for several state-of-the-art LLMs, and demonstrate that LLMs generally struggle with finding logical mistakes. For output correction, we propose a backtracking method which provides large improvements when given information on mistake location. We construe backtracking as a lightweight alternative to reinforcement learning methods, and show that it remains effective with a reward model at 60-70% accuracy.
Subjects: Artificial Intelligence (cs.AI); Computation and Language (cs.CL); Machine Learning (cs.LG)
Cite as: arXiv:2311.08516 [cs.AI] (or arXiv:2311.08516v1 for this version)
Submission history
From: Gladys Tyen
[v1] Tue, 14 Nov 2023 20:12:38 UTC (7,191 KB)
GitHub - WHGTyen/BIG-Bench-Mistake: A dataset of LLM-generated chain-of-thought steps annotated with mistake location. (github.com)
GitHub - suzgunmirac/BIG-Bench-Hard (cot-prompts): Challenging BIG-Bench Tasks and Whether Chain-of-Thought Can Solve Them. (github.com)
partial AI summary:
ANSWER | PHIND V8 MODEL
The paper you're referring to is about Large Language Models (LLMs) and their ability to self-correct their outputs. The authors, Gladys Tyen, Hassan Mansoor, Peter Chen, Tony Mak, and Victor Carbune, have broken down the self-correction process into two components: mistake finding and output correction.
Here are the key highlights:
- Mistake Finding: The authors have released a dataset called BIG-Bench Mistake, which contains logical mistakes in Chain-of-Thought reasoning traces. They found that LLMs generally struggle with finding these logical mistakes.
- Output Correction: The authors propose a backtracking method for output correction. This method provides large improvements when given information on the mistake location. They view backtracking as a lightweight alternative to reinforcement learning methods and show that it remains effective with a reward model at 60-70% accuracy.
- Benchmark Results: The authors tested the reasoning capabilities of state-of-the-art LLMs using their dataset. They found that LLMs struggle with mistake finding, even for objective, unambiguous cases. This suggests that LLMs' inability to find mistakes is a main contributing factor to their inability to self-correct reasoning errors.
- Backtracking as Verbal Reinforcement Learning: The authors view backtracking as a form of "verbal reinforcement learning". They propose that backtracking can be used with a trained classifier as a reward model, and demonstrate the effectiveness of backtracking at various reward model accuracies.
- BIG-Bench Mistake Dataset: The authors created the BIG-Bench Mistake dataset, which is the first of its kind that goes beyond problems in mathematics. The dataset consists of 2186 sets of CoT-style traces, each annotated with the location of the first logical error. The authors used this dataset to test the reasoning capabilities of state-of-the-art LLMs.
- Annotation: The authors used both human and automatic annotation for the BIG-Bench Mistake dataset. They found that the inter-rater reliability for the human-annotated tasks was high, suggesting the reliability of the dataset.
- Prompting Methods: The authors compared three different prompting methods: Direct trace-level prompting, Direct step-level prompting, and CoT step-level prompting. They found that the accuracy on traces with no mistakes goes down considerably from direct, trace-level prompting to CoT step-level prompting.
- Few-shot Prompting for Mistake Location: The authors investigated whether their prompting methods can reliably determine the correctness of a trace rather than the mistake location. They found that using mistake finding as a strategy for determining whether an answer is correct can be effective.
In summary, the authors found that LLMs can correct their outputs using a backtracking method, but they struggle with finding mistakes. They propose backtracking as a lightweight alternative to reinforcement learning methods and show that it remains effective with a reward model at 60-70% accuracy. They also provide a new dataset, BIG-Bench Mistake, for testing the reasoning capabilities of LLMs.
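For a rough idea of what the backtracking loop looks like, here is a minimal sketch: keep the steps before the flagged mistake, re-sample the mistaken step at a higher temperature so the model produces a different continuation, then finish the trace as usual. The generate_step and locate_first_mistake helpers (the latter standing in for the reward model or mistake-location prompt) are hypothetical stand-ins, not the authors' code.

```python
# Minimal sketch of the backtracking idea (not the paper's implementation).
# generate_step wraps an LLM call that produces the next CoT step;
# locate_first_mistake plays the role of the reward model / mistake finder.
from typing import Callable, List, Optional


def backtrack_correct(
    question: str,
    trace: List[str],
    locate_first_mistake: Callable[[str, List[str]], Optional[int]],
    generate_step: Callable[[str, List[str], float], str],
    max_steps: int = 20,
) -> List[str]:
    """Re-generate a chain-of-thought trace from the first flagged mistake."""
    mistake_idx = locate_first_mistake(question, trace)
    if mistake_idx is None:
        # No mistake flagged: leave the trace untouched, which avoids
        # turning correct answers into incorrect ones.
        return trace

    new_trace = trace[:mistake_idx]  # keep the steps before the mistake
    # Re-sample the mistaken step at temperature > 0 so the model is pushed
    # toward a different output than the one that was flagged.
    new_trace.append(generate_step(question, new_trace, 1.0))
    # Then continue greedily until the trace terminates or hits a step cap.
    # (The "[END]" marker is an assumed stop token, purely for illustration.)
    while len(new_trace) < max_steps and "[END]" not in new_trace[-1]:
        new_trace.append(generate_step(question, new_trace, 0.0))
    return new_trace
```

The key point from the abstract is that the mistake locator does not need to be perfect: the paper reports that backtracking still helps with a reward model at only 60-70% accuracy, which is what makes it a lightweight alternative to reinforcement-learning-style approaches.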
This is all hype and when the bubble pops it's going to be a glorious bloodbath.