bnew

Veteran
Joined
Nov 1, 2015
Messages
56,124
Reputation
8,239
Daps
157,823
Google Bard explaining A.I. hallucinations...

A.I model chat hallucinations are when a large language model (LLM) chatbot provides convincing but completely made-up answers to questions. This can happen when the LLM is trained on a large amount of data that includes both factual and fictional information. The LLM may learn to generate responses that are similar to factual information, but that are actually false.

Hallucinations in A.I model chat can be a problem because they can lead to misunderstandings and misinformation. For example, if a chatbot hallucinates that a particular person is dead, and then a user asks the chatbot about that person, the chatbot may provide a false answer. This could cause the user to believe that the person is dead, even if they are not.

There are a number of things that can be done to reduce the risk of hallucinations in A.I model chat. One is to carefully select the data that the LLM is trained on. Another is to use techniques such as adversarial training to help the LLM learn to distinguish between factual and fictional information. Finally, it is important to be aware of the possibility of hallucinations and to take steps to verify the accuracy of information that is generated by an LLM chatbot.
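That last step, verifying LLM output before relying on it, can be sketched as a simple lookup against a trusted source. The fact store and lookup keys below are invented purely for illustration:

```python
# Toy fact-checker: before trusting a chatbot's claim, look it up in a
# trusted reference. The reference dict and claim format are invented for
# this example; a real system would query curated data sources instead.
TRUSTED_FACTS = {"capital of France": "Paris"}

def verify_claim(topic, chatbot_answer):
    """Return True if a trusted source confirms the answer, False if it
    contradicts it, and None if the claim cannot be checked at all."""
    known = TRUSTED_FACTS.get(topic)
    if known is None:
        return None  # unverifiable: treat as a possible hallucination
    return known == chatbot_answer
```

The key design point is the three-way result: an unverifiable claim is flagged rather than silently accepted, which is exactly the habit the paragraph above recommends for chatbot output.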
 

bnew


HALTT4LLM - Hallucination Trivia Test for Large Language Models​

This project is an attempt to create a common metric for testing LLMs' progress in eliminating hallucinations, currently the most serious obstacle to widespread adoption of LLMs for real-world purposes.

Results (as of March 2023)​


Model Name     | HQ Trivia (Score / C / IDK) | Fake Questions (Score / C) | NOTA Questions (Score / C / IDK)
GPT4All        | 88.47% / 1243 / 7           | 74.16% / 310               | 70.32% / 109 / 0
GPT-3.5        | 59.33% / 705 / 262          | 81.81% / 342               | 51.93% / 58 / 45
GPT-3          | 55.67% / 776 / 17           | 6.10% / 26                 | 32.25% / 43 / 14
Llama-7B-4bit  | 49.75% / 701 / 0            | 2.15% / 18                 | 8.38% / 26 / 0
Alpaca-7B-4bit | 44.32% / 624 / 1            | 0.00% / 0                  | 0.00% / 0 / 0
GPT-4          |                             |                            |

(C = correct answers, IDK = "I don't know" answers.)
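One plausible way a trivia-style hallucination benchmark can combine raw counts like these into a single percentage is to give full credit for correct answers and half credit for honest "I don't know" responses. The function below is a sketch of that convention, not necessarily HALTT4LLM's exact scoring:

```python
def hallucination_score(correct, idk, total):
    """Percentage score for a trivia set: full credit for correct answers,
    half credit for "I don't know" responses. This half-credit convention
    is an assumption for illustration; check the HALTT4LLM repo for the
    scheme it actually uses."""
    return 100.0 * (correct + 0.5 * idk) / total
```

Under this convention a model that says "I don't know" rather than hallucinating still earns partial credit, so honest uncertainty scores better than a confident wrong answer (which earns nothing).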
 

IIVI

Superstar
Joined
Mar 11, 2022
Messages
11,416
Reputation
2,687
Daps
38,545
Reppin
Los Angeles

Pretty much.

Good news for everyone thinking it'll saturate the market though is this will still take the average person at least 4-5 years to get proficient.

That's basically a CS college curriculum, so I'd go one step further and say if you were going to self-teach yourself all of that, you may as well get that degree.
 

Matt504

YSL as a gang must end
Joined
Sep 7, 2013
Messages
45,225
Reputation
14,777
Daps
274,044
Good news for everyone thinking it'll saturate the market is this will still take the average person at least 4-5 years to get good at.

That's basically a CS college curriculum.

That is no longer a safe assumption; these tools will not only radically change the way we learn but also the pace. Lessons in schools are generalized to target tens of students; these bots will be able to tailor a curriculum to YOU and the ways you learn best.
 

IIVI

That is no longer a safe assumption; these tools will not only radically change the way we learn but also the pace. Lessons in schools are generalized to target tens of students; these bots will be able to tailor a curriculum to YOU and the ways you learn best.
All I'm saying is if you're going to do all that, you might as well get a degree out of it if you can. Why teach yourself Linear Algebra, basic programming, algos, and machine learning/A.I and not at least get college credit for it? That's almost 1-2 years' worth of college courses right there. If you're good at teaching yourself, you're most likely flying through the formal courses effortlessly.

Nonetheless, the bottleneck will always be the individual. Some people learn quicker than others, but on average (not talking about your geniuses) most people still got to put the reps in with those problems from the book. At the end of the day you run the risk of only cheating yourself if you're trying to blow through it imo, especially considering the number of topics.

Additionally, we already got some pretty crazy study tools and research on the human mind and learning. No doubt A.I will make it easier, but I don't know; we already saw what 3-month bootcamps did to the job market and how people with no degree are having a difficult time standing out right now.
 

bnew


Language Models can Solve Computer Tasks​

Geunwoo Kim, Pierre Baldi, Stephen McAleer
Agents capable of carrying out general tasks on a computer can improve efficiency and productivity by automating repetitive tasks and assisting in complex problem-solving. Ideally, such agents should be able to solve new computer tasks presented to them through natural language commands. However, previous approaches to this problem require large amounts of expert demonstrations and task-specific reward functions, both of which are impractical for new tasks. In this work, we show that a pre-trained large language model (LLM) agent can execute computer tasks guided by natural language using a simple prompting scheme where the agent recursively criticizes and improves its output (RCI). The RCI approach significantly outperforms existing LLM methods for automating computer tasks and surpasses supervised learning (SL) and reinforcement learning (RL) approaches on the MiniWoB++ benchmark. RCI is competitive with the state-of-the-art SL+RL method, using only a handful of demonstrations per task rather than tens of thousands, and without a task-specific reward function. Furthermore, we demonstrate RCI prompting's effectiveness in enhancing LLMs' reasoning abilities on a suite of natural language reasoning tasks, outperforming chain of thought (CoT) prompting. We find that RCI combined with CoT performs better than either separately.
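The recursive criticize-and-improve (RCI) loop described above can be sketched as follows. The `model` callable and the prompt wording are placeholders for illustration, not the paper's actual prompts or implementation:

```python
def rci_loop(task, model, max_iters=3):
    """Minimal sketch of RCI prompting: generate an answer, ask the model
    to critique it, then ask for an improved answer, and repeat.

    `model(prompt)` stands in for any LLM completion call; the prompt
    templates below are invented for this sketch.
    """
    output = model(f"Task: {task}\nAnswer:")
    for _ in range(max_iters):
        # Criticize: ask the model to find problems with its own answer.
        critique = model(
            f"Task: {task}\nAnswer: {output}\n"
            "Review this answer and list any mistakes:"
        )
        # Improve: ask for a revised answer conditioned on the critique.
        improved = model(
            f"Task: {task}\nAnswer: {output}\nCritique: {critique}\n"
            "Write an improved answer:"
        )
        if improved == output:
            break  # no further change; stop early
        output = improved
    return output
```

The paper's computer-task agent additionally grounds each step against the current task state; only the core critique-and-improve cycle is shown here.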

 

bnew


Google CEO Sundar Pichai promises Bard AI chatbot upgrades soon: ‘We clearly have more capable models’​


Google released its AI chatbot Bard to rival ChatGPT and Microsoft’s Bing chatbot, but comparisons have not been flattering to Bard. Sundar Pichai says upgrades are on the way.​

By JAMES VINCENT
Mar 31, 2023, 6:53 AM EDT

Google CEO Sundar Pichai has responded to criticism of the company’s experimental AI chatbot Bard, promising that Google will be upgrading Bard soon.
“We clearly have more capable models,” Pichai said in an interview on The New York Times’ Hard Fork podcast. “Pretty soon, perhaps as this [podcast] goes live, we will be upgrading Bard to some of our more capable PaLM models, which will bring more capabilities; be it in reasoning, coding, it can answer maths questions better. So you will see progress over the course of next week.”

Pichai noted that Bard is running on a “lightweight and efficient version of LaMDA,” an AI language model that focuses on delivering dialog. “In some ways I feel like we took a souped-up Civic and put it in a race with more powerful cars,” said Pichai. PaLM, by comparison, is a more recent language model; it’s larger in scale and Google claims it is more capable when dealing with tasks like common-sense reasoning and coding problems.

Bard was first released to public users on March 21st but failed to garner the attention or acclaim won by OpenAI's ChatGPT and Microsoft's Bing chatbot. In The Verge's own tests of these systems, we found that Bard was consistently less useful than its rivals. Like all general-purpose chatbots, it is able to respond to a wide range of questions, but its answers are generally less fluent and imaginative and fail to draw on reliable data sources.

Pichai suggested that part of the reason for Bard’s limited capabilities was a sense of caution within Google. “To me, it was important to not put [out] a more capable model before we can fully make sure we can handle it well,” he said.

Pichai also confirmed that he was talking with Google co-founders Larry Page and Sergey Brin about the work (“Sergey has been hanging out with our engineers for a while now”) and that while he himself never issued the infamous “code red” to scramble development, there were probably people in the company who “sent emails saying there is a code red.”

Pichai also discussed concerns that development of AI is currently moving too fast and perhaps poses a threat to society. Many in the AI and tech communities have been warning about the dangerous race dynamic currently in play between companies including OpenAI, Microsoft, and Google. Earlier this week, an open letter signed by Elon Musk and top AI researchers called for a six month pause on the development of these AI systems.

“In this area, I think it’s important to hear concerns,” said Pichai regarding the open letter calling for the pause. “And I think there is merit to be concerned about it ... This is going to need a lot of debate, no-one knows all the answers, no one company can get it right.” He also said that “AI is too important an area not to regulate,” but suggested it was better to simply apply regulations in existing industries — like privacy regulations and regulations in healthcare — than create new laws to tackle AI specifically.

Some experts worry about immediate risks, like chatbots' tendency to spread mistruths and misinformation, while others warn about more existential threats, suggesting that these systems are so difficult to control that once they are connected to the wider web they could be used destructively. Some suggest that current programs are also drawing closer to what's known as artificial general intelligence, or AGI: systems that are at least as capable as a human across a wide range of tasks.

“It is so clear to me that these systems are going to be very, very capable, and so it almost doesn’t matter whether you’ve reached AGI or not,” said Pichai.
“Can we have an AI system which can cause disinformation at scale? Yes. Is it AGI? It really doesn’t matter. Why do we need to worry about AI safety? Because you have to anticipate this and evolve to meet that moment.”
You can listen to the interview in full and read a transcript here.
 

The Fade

I don’t argue with niqqas on the Internet anymore
Joined
Nov 18, 2016
Messages
23,850
Reputation
7,288
Daps
130,203


Made by AI probably a non black


Y’all can do video essay side hustles now.
 


aXiom

Maximized Potential
Joined
May 1, 2012
Messages
11,614
Reputation
8,443
Daps
69,394
Reppin
Parc fermé
Pretty much.

Good news for everyone thinking it'll saturate the market though is this will still take the average person at least 4-5 years to get proficient.

That's basically a CS college curriculum, so I'd go one step further and say if you were going to self-teach yourself all of that, you may as well get that degree.
I heavily disagree with this. Just look at the past few months and how much progress was made in A.I., and that's just the shyt that the public was allowed to play with. Google is about to go to war with OpenAI and MS; expect this landscape to change very fast over the next few months/years, as Google's main revenue stream is on the line here.

School curricula have been so far behind when it comes to keeping up with technology; it's gonna be even worse now that we're looking at everyone having their own private tutors that can be tailored to their own learning style, not to mention the bloated irrelevant shyt. The only benefit to having that degree is that it opens the doors of gatekeeping companies, and if you really can't learn on your own and need structured learning, it does help. The 2 most important things to take away from college/degrees are networking and learning how to learn. And tech is a different ballgame when compared to other sectors, as it's built around continuous learning.

Outside of linear algebra and stats, you'd learn all the other shyt way faster in a lab or at a job, in scenarios where you need to apply it to solve real-world issues. Spending 4 years in college trying to learn all that shyt is gonna leave you behind the ball by the time you get that degree, unless you're working in an adjacent field.
 

IIVI

I heavily disagree with this. Just look at the past few months and how much progress was made in A.I.

School curricula have been so far behind when it comes to keeping up with technology; it's gonna be even worse now that we're looking at everyone having their own private tutors that can be tailored to their own learning style, not to mention the bloated irrelevant shyt. The only benefit to having that degree is that it opens the doors of gatekeeping companies, and if you really can't learn on your own and need structured learning, it does help. The 2 most important things to take away from college/degrees are networking and learning how to learn. And tech is a different ballgame when compared to other sectors, as it's built around continuous learning.

Outside of linear algebra and stats, you'd learn all the other shyt way faster in a lab or at a job, in scenarios where you need to apply it to solve real-world issues. Spending 4 years in college trying to learn all that shyt is gonna leave you behind the ball by the time you get that degree.
Well, that A.I. progress came from the 8 years of work OpenAI was doing previously. All those plugins and everything they're releasing with ease now were built on years of work.

Now, CS is a lifelong-learning field. Nobody learns everything in 4 years of college. However, getting that CS degree means you don't have to answer any questions about why you don't have a degree for 50+ years, or however long your career is.

I said it before the layoffs: you don't need a degree to thrive in this field as long as you learn and can show you can do the job. Bootcamp, self-taught, etc., all no problem.

However now? Every single thing helps, especially in fields that are math heavy. If you got no experience and are applying for a job over someone who graduated from a UC school in Math or CS, it's going to be tough. Additionally, if you're going to grind through algos, stats, LA, etc., you might as well get credit for it. shyt, many schools and curricula like mine had an A.I. course. Then there are courses like compilers, OS, architecture, etc., if somebody wants to step away from Full Stack and go into embedded or hardware. Not to mention some schools offer courses on how to write scalable, readable code, which is absolutely critical for teams.

What a lot of people don't realize is that a lot of these ML algorithms are old. A big reason for the recent advancements is the sensors and points of connection for data input, or I/O. Now, to get on those teams, a CS degree will help because you are going to be dealing with E&M, circuits, etc.

Those 10 subjects mentioned are quite stacked. I got a CS degree and have been working in the field for about 5 years, and I don't know all those topics. I'm going to say that's roughly the same for the average person as well. shyt, somebody at work was struggling with Git last week.
 