If this AI is as all-powerful as you seem to believe, why should anybody bother enrolling in college? Why not pull kids out of high school and let AI do all the work?
A lot of people should not have been going to college for decades now.
Fast Facts: Most common undergraduate fields of study (37), National Center for Education Statistics (nces.ed.gov)
Liberal Arts and Business Majors make up the bulk of USELESS college degrees.
But if everyone studied Health and STEM, the wages would go down.
I think liberal arts majors might benefit greatly from generative AI. I mean, we're talking about pairing very powerful generative technology with some creative-minded people. I'm excited to see what creative people like that can/will create with this stuff.
I'd make this distinction.
Liberal Arts != Artists.
Most folks with Liberal Arts degrees don't explicitly use those degrees for anything.
There's not a big market for people to analyze and critique Toni Morrison.
There might be one guy doing "Hardcore History" but there aren't 300,000.
IMO, Business degrees are far, far worse in terms of "hit the ground running" type skills. But that's a topic for a different post.
There has been considerable divergence of opinion on the reasoning abilities of Large Language Models (LLMs). While the initial optimism that reasoning might emerge automatically with scale has been tempered thanks to a slew of counterexamples, a widespread belief in their iterative self-critique capabilities persists. In this paper, we set out to systematically investigate the effectiveness of iterative prompting of LLMs in the context of Graph Coloring, a canonical NP-complete reasoning problem that is related to propositional satisfiability as well as practical problems like scheduling and allocation. We present a principled empirical study of the performance of GPT4 in solving graph coloring instances or verifying the correctness of candidate colorings. In iterative modes, we experiment with the model critiquing its own answers and with an external correct reasoner verifying proposed solutions. In both cases, we analyze whether the content of the criticisms actually affects bottom-line performance. The study seems to indicate that (i) LLMs are bad at solving graph coloring instances, (ii) they are no better at verifying a solution--and thus are not effective in iterative modes with LLMs critiquing LLM-generated solutions--and (iii) the correctness and content of the criticisms--whether by LLMs or external solvers--seems largely irrelevant to the performance of iterative prompting. We show that the observed increase in effectiveness is largely due to the correct solution being fortuitously present in the top-k completions of the prompt (and being recognized as such by an external verifier). Our results thus call into question claims about the self-critiquing capabilities of state-of-the-art LLMs.
Comments: 18 pages, 3 figures
Subjects: Artificial Intelligence (cs.AI)
Cite as: arXiv:2310.12397 [cs.AI] (or arXiv:2310.12397v1 [cs.AI] for this version)
DOI: https://doi.org/10.48550/arXiv.2310.12397
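To make the paper's "external verifier" setup concrete, here's a minimal sketch of what verification looks like for graph coloring: a checker that rejects any candidate with an uncolored vertex, an out-of-range color, or a monochromatic edge, plus a loop that returns the first of the top-k candidate colorings that passes. This is my own illustration, not the authors' actual harness; the edge-list/dict formats and the hard-coded candidate list are hypothetical stand-ins for whatever the LLM would emit.

```python
from typing import Dict, List, Optional, Tuple

Graph = List[Tuple[int, int]]   # undirected graph as an edge list
Coloring = Dict[int, int]       # vertex -> color index


def is_proper_coloring(edges: Graph, coloring: Coloring, num_colors: int) -> bool:
    """Sound check of a candidate coloring: every endpoint must be colored,
    colors must be in range, and no edge may be monochromatic."""
    for u, v in edges:
        cu, cv = coloring.get(u), coloring.get(v)
        if cu is None or cv is None:
            return False  # uncolored vertex
        if not (0 <= cu < num_colors and 0 <= cv < num_colors):
            return False  # color out of range
        if cu == cv:
            return False  # monochromatic edge
    return True


def pick_verified_candidate(edges: Graph, candidates: List[Coloring],
                            num_colors: int) -> Optional[Coloring]:
    """Return the first of the top-k candidate colorings that the external
    verifier accepts, or None if none of them is correct."""
    for candidate in candidates:
        if is_proper_coloring(edges, candidate, num_colors):
            return candidate
    return None


if __name__ == "__main__":
    # Toy instance: a 4-cycle, which is 2-colorable.
    edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
    # Hypothetical top-k completions, already parsed into dicts.
    candidates = [
        {0: 0, 1: 0, 2: 1, 3: 1},  # rejected: edge (0, 1) is monochromatic
        {0: 0, 1: 1, 2: 0, 3: 1},  # accepted: a proper 2-coloring
    ]
    print(pick_verified_candidate(edges, candidates, num_colors=2))
```

The point of the sketch is just that checking a coloring is trivial and sound, which is exactly why the reported gains show up once an external verifier, rather than the LLM itself, does the checking over the top-k candidates.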