Y'all heard about ChatGPT yet? AI instantly generates question answers, entire essays etc.

newarkhiphop

Moderator
Staff member
Supporter
Joined
Apr 30, 2012
Messages
37,453
Reputation
9,892
Daps
123,170
FYI you can create tables in it too, just give it column names and row names.

it will also insert info in here for you

"create a table with 3 columns named hair color, height and weight. This table also has five rows. In the rows insert the name of the 5 hottest pawgs in the world"
 

krackdagawd

Inspire.
Joined
May 1, 2012
Messages
45,644
Reputation
8,181
Daps
139,432
Reppin
Another Gold Medal
FYI you can create tables in it too, just give it column names and row names.

it will also insert info in here for you

"create a table with 3 columns named hair color, height and weight. This table also has five rows. In the rows insert the name of the 5 hottest pawgs in the world"

Ayo look at what nikkas using this for :laff:
 

TheGreatMTB

All Star
Joined
Apr 19, 2017
Messages
1,418
Reputation
483
Daps
8,394
The only occupation that'll be safe in a decade is being a fine ass woman/golddigger....

and the vtubers are coming for that
 

Professor Emeritus

Veteran
Poster of the Year
Supporter
Joined
Jan 5, 2015
Messages
51,330
Reputation
19,656
Daps
203,839
Reppin
the ether
Read through the thread and looked at some examples of what it does, and I think a few of the drawbacks of the technology aren't being talked about.

First off, while the technology is definitely impressive in terms of giving coherent answers, it seems like folk ain't noticing how shytty those answers are. They're bland, general, have very little content and a lot of filler. For example, the "what is the impact of Steph Curry?" answer was at about a 6th-grader's level of basketball analysis. The "summarize the Meek Mill thread" was general as hell with very little specifics or insight. And as many people have pointed out, when you ask it to solve math or physics problems, a lot of the answers are straight wrong.

Some people are saying, "It will improve", but they're missing an inherent flaw in the technology. The app is built off of the sum of internet dialogue, and summations are always bland as fukk. You read a really good Wikipedia article, and there's a ton of insight because even though it's a non-controversial consensus, it's still written largely by experts, and people relying on experts. If wikipedia articles were based on message boards and blog posts instead, they wouldn't be half as useful.

The issue is especially bad in places where the status quo is simply wrong. Like if you ask, "Was the bombing of Hiroshima necessary?" or "Is modern capitalism a net good?", then it's just going to regurgitate the status quo propaganda, unless the designers explicitly program in their own view (as has happened with race questions). So you are either relying on the conservative historical view on the subject as has been pushed, or you are relying on the benevolence of the ChatGPT creators who have no expertise in said issues.

If people in general begin relying on this as a place to get information or to generate content, it will basically cement the status quo even more strongly than before.
 

Matt504

YSL as a gang must end
Joined
Sep 7, 2013
Messages
45,218
Reputation
14,767
Daps
274,002
The issue is especially bad in places where the status quo is simply wrong. Like if you ask, "Was the bombing of Hiroshima necessary?" or "Is modern capitalism a net good?", then it's just going to regurgitate the status quo propaganda, unless the designers explicitly program in their own view (as has happened with race questions). So you are either relying on the conservative historical view on the subject as has been pushed, or you are relying on the benevolence of the ChatGPT creators who have no expertise in said issues.

Not inserting my opinion here but rather ChatGPT's answers to the prompts you used as examples.

Is modern capitalism a net good?
Modern capitalism, like any system, has its advantages and disadvantages. It has been responsible for significant economic growth and increased prosperity in many countries, but it also has its drawbacks, such as income inequality and a tendency towards monopolization. Ultimately, whether modern capitalism is a net good or not is a matter of perspective and will depend on one's values and beliefs.

Was the bombing of Hiroshima necessary?
The decision to drop atomic bombs on the Japanese cities of Hiroshima and Nagasaki in August 1945 remains a controversial and heavily debated topic. Those who argue that the bombings were necessary point to the high number of casualties that would have occurred if the United States had invaded Japan, and the belief that the bombings brought an end to the war and saved lives. Critics argue that Japan was already defeated and that the bombings were unnecessary acts of aggression that resulted in the deaths of over 200,000 civilians, most of them women and children. The bombings also caused long-term human suffering and environmental damage.
 

MikelArteta

Moderator
Staff member
Supporter
Joined
Apr 30, 2012
Messages
249,351
Reputation
30,853
Daps
762,572
Reppin
Top 4
Read through the thread and looked at some examples of what it does, and I think a few of the drawbacks of the technology aren't being talked about.

I use it more to expand

Example: I write 100 words and tell ChatGPT to refine it and add another 200 words expanding on what I wrote, at a professional level.
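That refine-and-expand instruction can be templated the same way as any other prompt. A minimal Python sketch, where `expand_prompt` is a hypothetical helper (plain string templating, no actual API call):

```python
def expand_prompt(draft, extra_words=200):
    """Wrap a short draft in an instruction asking the model to refine it
    and expand it by roughly `extra_words` words at a professional level."""
    return (
        f"Refine the following text and add another {extra_words} words "
        f"expanding on what I wrote, at a professional level:\n\n{draft}"
    )

prompt = expand_prompt("Our Q3 numbers improved across every region.")
print(prompt)
```

The word counts are only targets; in practice the model treats them as rough guidance rather than a hard limit.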
 

L@CaT

For Hire Gunslinger!
Joined
May 11, 2012
Messages
15,589
Reputation
-745
Daps
27,044
Not inserting my opinion here but rather ChatGPT's answers to the prompts you used as examples.

ChatGPT is as great as the user can make it be. If you're not getting the answers you're looking for, you're probably not asking it the right questions.
 

Matt504

YSL as a gang must end
Joined
Sep 7, 2013
Messages
45,218
Reputation
14,767
Daps
274,002
ChatGPT is as great as the user can make it be. If you're not getting the answers you're looking for, you're probably not asking it the right questions.

This is my position on it. I've been using it for just over a week for work and I'd 100% pay for a subscription if there's even a 95% uptime.

:wow:
 

Professor Emeritus

Veteran
Poster of the Year
Supporter
Joined
Jan 5, 2015
Messages
51,330
Reputation
19,656
Daps
203,839
Reppin
the ether
Not inserting my opinion here but rather ChatGPT's answers to the prompts you used as examples.


So it's the equivalent of the Steph Curry answer - something a 6th grader could have said after their teacher gave a wishy-washy answer trying not to say anything controversial. :unimpressed:




ChatGPT is as great as the user can make it be. If you're not getting the answers you're looking for, you're probably not asking it the right questions.

I'm willing to be convinced, but I just read through a 35-page thread without coming across a single answer or essay that gave the impression that an expert wrote it. Both the Coli examples and the quoted twitter examples were bland as fukk, a bunch of fluff built around 1-2 obvious main points. It seems to be able to produce adequate filler copy, but I'm not seeing insight.
 

Professor Emeritus

Veteran
Poster of the Year
Supporter
Joined
Jan 5, 2015
Messages
51,330
Reputation
19,656
Daps
203,839
Reppin
the ether
I use it more to expand

Example I write 100 words and tell chat gpt to refine it and add another 200 words expanding on what I wrote at a professional level

I mean, that does seem like its best use so far - it produces filler.

I can see it being used a lot by non-writers who need to produce words but don't want to waste their time writing. It can basically boost almost anyone up to "competent" stage, or can take a competent writer and give them what they would have produced anyway but in a lot less time. However, it doesn't seem to be producing any good writing.

Does that mean people who use ChatGPT to produce competent writing are now never going to put in the work to become legitimately good writers?

That leads into the second potentially negative side effect I worry about. Right now, it's being used as a time-saving mechanism by competent professionals who have already trained themselves to a certain point. I assume you're using it to produce the sort of things you've already had to write numerous times in the past - so you know how to write them, you know what you're looking for, and after you apply ChatGPT to them, you can evaluate the result and tweak it if it's off a bit.

That's great if all you ever need is the sort of boilerplate competency to fill out a report. But it feels like once you start using ChatGPT, that's all you'll top out at. The process of using ChatGPT means both that you'll put substantially less thought into the work and you give yourself an automatic ceiling for how good it is. Maybe that's all you'll ever want - but in the real world without ChatGPT, certain people would have kept advancing, kept producing higher and higher quality work as a result of the hard work they'd put in every day. Seems like these shortcuts would have an effect of cutting off that mental work early so the actual progress in ability never manifests itself.

And even worse, what happens in the future when people raise themselves up on this shyt before they actually gained the skills? What if you started using it back when you were in middle school, to complete answers for you at the time when you were supposed to be building the neural connections and gaining the mental skills to be able to do work like this on your own. Would you ever even have gotten good enough at producing those reports to be able to evaluate whether ChatGPT had done it right? Or would you be forced to assume it was right, just because ChatGPT had done it? Perhaps you would still get feedback from time to time, telling you whether or not your ChatGPT-produced answer had hit the mark. But my experience in educational psychology suggests that we are far less able to evaluate and utilize feedback for shyt we didn't build ourselves.

There's a chance that this will curtail a lot of the development that makes us into strong thinkers. There's a reason human beings don't mature mentally until around 20-25 years old. We can't build wet neural networks with shortcuts. Perhaps for a few folk this will be a time-saving device that will allow them to spend more time in deeply engaging endeavors, but I'm guessing for a lot more young people it's going to lead to less time spent in intellectually demanding activity, and that's not good in the long run.
 

Professor Emeritus

Veteran
Poster of the Year
Supporter
Joined
Jan 5, 2015
Messages
51,330
Reputation
19,656
Daps
203,839
Reppin
the ether
Just realized Russian troll farms and Nigerian scammers are going to fukking eat off of this shyt. Fake news sites too. They'll be able to just straight make shyt up at record pace, tell ChatGPT to tailor it for whatever audience.

Fake-ass motivational speakers, pyramid schemes, get-rich-quick schemers, self-help book writers too. This is pretty much exactly at their writing level, but they'll be able to produce content much more quickly and tailor it more effectively than before.

Pretty much anything where the style is more important than the truth or depth of the substance, and where volume matters, is going to be on easy mode now.
 