I read through the thread and looked at some examples of what it does, and I think a few drawbacks of the technology aren't being talked about.
First off, while the technology is definitely impressive in terms of giving coherent answers, it seems like folk ain't noticing how shytty those answers actually are. They're bland and general, with very little content and a lot of filler. For example, the "what is the impact of Steph Curry?" answer was about a 6th-grader's level of basketball analysis. The "summarize the Meek Mill thread" answer was general as hell, with very few specifics and no real insight. And as many people have pointed out, when you ask it to solve math or physics problems, a lot of the answers are straight-up wrong.
Some people are saying, "It will improve," but they're missing an inherent flaw in the technology. The app is built off the sum of internet dialogue, and summations of internet dialogue are always bland as fukk. You read a really good Wikipedia article and there's a ton of insight, because even though it's a non-controversial consensus, it's still written largely by experts, and by people relying on experts. If Wikipedia articles were based on message boards and blog posts instead, they wouldn't be half as useful.
The issue is especially bad in areas where the status quo is simply wrong. If you ask, "Was the bombing of Hiroshima necessary?" or "Is modern capitalism a net good?", it's just going to regurgitate status-quo propaganda, unless the designers explicitly program in their own view (as has happened with race questions). So you're either relying on the conservative historical take that's been pushed on the subject, or relying on the benevolence of the ChatGPT creators, who have no expertise in those issues.
If people in general start relying on this as a place to get information or to generate content, it will basically cement the status quo even more strongly than before.