‘Jobs may disappear’: Nearly 40% of global employment could be disrupted by AI, IMF says

bnew

Veteran


A.I. Chatbots Defeated Doctors at Diagnosing Illness


A small study found ChatGPT outdid human physicians when assessing medical case histories, even when those doctors were using a chatbot.



In an experiment, doctors who were given ChatGPT to diagnose illness did only slightly better than doctors who did not. But the chatbot alone outperformed all the doctors. Credit: Michelle Gustafson for The New York Times

By Gina Kolata

Nov. 17, 2024

Dr. Adam Rodman, an expert in internal medicine at Beth Israel Deaconess Medical Center in Boston, confidently expected that chatbots built to use artificial intelligence would help doctors diagnose illnesses.

He was wrong.

Instead, in a study Dr. Rodman helped design, doctors who were given ChatGPT-4 along with conventional resources did only slightly better than doctors who did not have access to the bot. And, to the researchers’ surprise, ChatGPT alone outperformed the doctors.

“I was shocked,” Dr. Rodman said.

The chatbot, from the company OpenAI, scored an average of 90 percent when diagnosing a medical condition from a case report and explaining its reasoning. Doctors randomly assigned to use the chatbot got an average score of 76 percent. Those randomly assigned not to use it had an average score of 74 percent.

The study showed more than just the chatbot’s superior performance.

It revealed doctors’ sometimes unwavering belief in a diagnosis they had made, even when a chatbot suggested a potentially better one.

And the study illustrated that while doctors are being exposed to the tools of artificial intelligence for their work, few know how to exploit the abilities of chatbots. As a result, they fail to take advantage of A.I. systems’ ability to solve complex diagnostic problems and offer explanations for their diagnoses.

A.I. systems should be “doctor extenders,” Dr. Rodman said, offering valuable second opinions on diagnoses.

But it looks as if there is a way to go before that potential is realized.

Case History, Case Future


The experiment involved 50 doctors, a mix of residents and attending physicians recruited through a few large American hospital systems, and was published last month in the journal JAMA Network Open.

The test subjects were given six case histories and were graded on their ability to suggest diagnoses and explain why they favored or ruled them out. Their grades also included getting the final diagnosis right.

The graders were medical experts who saw only the participants’ answers, without knowing whether they were from a doctor with ChatGPT, a doctor without it or from ChatGPT by itself.

The case histories used in the study were based on real patients and are part of a set of 105 cases that has been used by researchers since the 1990s. The cases intentionally have never been published so that medical students and others could be tested on them without any foreknowledge. That also meant that ChatGPT could not have been trained on them.

But, to illustrate what the study involved, the investigators published one of the six cases the doctors were tested on, along with answers to the test questions on that case from a doctor who scored high and from one whose score was low.

That test case involved a 76-year-old patient with severe pain in his low back, buttocks and calves when he walked. The pain started a few days after he had been treated with balloon angioplasty to widen a coronary artery. He had been treated with the blood thinner heparin for 48 hours after the procedure.

The man complained that he felt feverish and tired. His cardiologist had done lab studies that indicated a new onset of anemia and a buildup of nitrogen and other kidney waste products in his blood. The man had had bypass surgery for heart disease a decade earlier.

The case vignette continued to include details of the man’s physical exam, and then provided his lab test results.

The correct diagnosis was cholesterol embolism — a condition in which shards of cholesterol break off from plaque in arteries and block blood vessels.

Participants were asked for three possible diagnoses, with supporting evidence for each. They also were asked to provide, for each possible diagnosis, findings that do not support it or that were expected but not present.

The participants also were asked to provide a final diagnosis. Then they were to name up to three additional steps they would take in their diagnostic process.

Like the diagnosis for the published case, the diagnoses for the other five cases in the study were not easy to figure out. But neither were they so rare as to be almost unheard-of. Yet the doctors on average did worse than the chatbot.

What, the researchers asked, was going on?

The answer seems to hinge on questions of how doctors settle on a diagnosis, and how they use a tool like artificial intelligence.

The Physician in the Machine


How, then, do doctors diagnose patients?

The problem, said Dr. Andrew Lea, a historian of medicine at Brigham and Women’s Hospital who was not involved with the study, is that “we really don’t know how doctors think.”

In describing how they came up with a diagnosis, doctors would say, “intuition,” or, “based on my experience,” Dr. Lea said.

That sort of vagueness has challenged researchers for decades as they tried to make computer programs that can think like a doctor.

The quest began almost 70 years ago.
“Ever since there were computers, there were people trying to use them to make diagnoses,” Dr. Lea said.

One of the most ambitious attempts began in the 1970s at the University of Pittsburgh. Computer scientists there recruited Dr. Jack Myers, chairman of the medical school’s department of internal medicine, who was known as a master diagnostician. He had a photographic memory and spent 20 hours a week in the medical library, trying to learn everything that was known in medicine.

Dr. Myers was given medical details of cases and explained his reasoning as he pondered diagnoses. Computer scientists converted his logic chains into code. The resulting program, called INTERNIST-1, included over 500 diseases and about 3,500 symptoms of disease.

To test it, researchers gave it cases from the New England Journal of Medicine. “The computer did really well,” Dr. Rodman said. Its performance “was probably better than a human could do,” he added.

But INTERNIST-1 never took off. It was difficult to use, requiring more than an hour to give it the information needed to make a diagnosis. And, its creators noted, “the present form of the program is not sufficiently reliable for clinical applications.”

Research continued. By the mid-1990s there were about a half dozen computer programs that tried to make medical diagnoses. None came into widespread use.
“It’s not just that it has to be user friendly, but doctors had to trust it,” Dr. Rodman said.

And with the uncertainty about how doctors think, experts began to ask whether they should care. How important is it to try to design computer programs to make diagnoses the same way humans do?

“There were arguments over how much a computer program should mimic human reasoning,” Dr. Lea said. “Why don’t we play to the strength of the computer?”

The computer may not be able to give a clear explanation of its decision pathway, but does that matter if it gets the diagnosis right?

The conversation changed with the advent of large language models like ChatGPT. They make no explicit attempt to replicate a doctor’s thinking; their diagnostic abilities come from their ability to predict language.
“The chat interface is the killer app,” said Dr. Jonathan H. Chen, a physician and computer scientist at Stanford who was an author of the new study.
“We can pop a whole case into the computer,” he said. “Before a couple of years ago, computers did not understand language.”

But many doctors may not be exploiting its potential.

Operator Error


After his initial shock at the results of the new study, Dr. Rodman decided to probe a little deeper into the data and look at the actual logs of messages between the doctors and ChatGPT. The doctors must have seen the chatbot’s diagnoses and reasoning, so why didn’t those using the chatbot do better?

It turns out that the doctors often were not persuaded by the chatbot when it pointed out something that was at odds with their diagnoses. Instead, they tended to be wedded to their own idea of the correct diagnosis.
“They didn’t listen to A.I. when A.I. told them things they didn’t agree with,” Dr. Rodman said.

That makes sense, said Laura Zwaan, who studies clinical reasoning and diagnostic error at Erasmus Medical Center in Rotterdam and was not involved in the study.
“People generally are overconfident when they think they are right,” she said.

But there was another issue: Many of the doctors did not know how to use a chatbot to its fullest extent.

Dr. Chen said he noticed that when he peered into the doctors’ chat logs, “they were treating it like a search engine for directed questions: ‘Is cirrhosis a risk factor for cancer? What are possible diagnoses for eye pain?’”

“It was only a fraction of the doctors who realized they could literally copy-paste in the entire case history into the chatbot and just ask it to give a comprehensive answer to the entire question,” Dr. Chen added.
“Only a fraction of doctors actually saw the surprisingly smart and comprehensive answers the chatbot was capable of producing.”
 

Wargames

One Of The Last Real Ones To Do It

From $8,000 to $3: OpenAI’s Revolutionary Impact on Legal Work

By Ari Haruni

October 19, 2024



OpenAI’s Chief Product Officer Kevin Weil has sparked a profound discussion about AI’s transformative impact on professional services, particularly in the legal sector. His ‘Ray Summit 2024’ revelation about their o1 model’s capability to produce legal briefs traditionally handled by high-billing associates highlights a dramatic shift in the economics of professional services.

The stark contrast between traditional and AI-powered legal work is eye-opening: what once required six hours of a $1,000-per-hour associate’s time can now be accomplished in five minutes for merely $3 in API credits. This 99.9% reduction in both time and cost raises fundamental questions about the future of professional services and their pricing models.
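
As a rough sanity check, the claimed reduction can be recomputed directly from the figures quoted above (a minimal Python sketch using only the six-hour, $1,000-per-hour, five-minute, and $3 numbers stated in the article; the variable names are illustrative):

# Recompute the cost and time reductions from the article's stated figures:
# six hours of a $1,000-per-hour associate vs. five minutes and $3 in API credits.
human_cost = 6 * 1000          # $6,000 of billed associate time
ai_cost = 3                    # $3 in API credits
human_minutes = 6 * 60         # 360 minutes
ai_minutes = 5

cost_reduction = 1 - ai_cost / human_cost        # fraction of cost eliminated
time_reduction = 1 - ai_minutes / human_minutes  # fraction of time eliminated
print(f"cost reduction: {cost_reduction:.2%}")   # cost reduction: 99.95%
print(f"time reduction: {time_reduction:.2%}")   # time reduction: 98.61%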

However, Weil’s discussion extends beyond mere cost savings. He emphasizes OpenAI’s broader mission of democratizing access to AI technology. The company maintains a commitment to offering free products while simultaneously developing premium services for those who can afford them. This dual approach reflects a delicate balance between capturing value from their innovations and ensuring widespread accessibility.

The challenge lies in reconciling three key factors: the immense value created by AI tools, the need to monetize these innovations sustainably, and the importance of maintaining accessibility for users worldwide. Weil acknowledges that while a $20 monthly subscription might seem reasonable in some markets, it remains unaffordable for many global users.

This situation presents both opportunities and challenges. While AI technology promises to dramatically reduce costs and democratize access to professional services, questions remain about how to structure pricing models that are both sustainable for providers and accessible to users across different economic circumstances.

The implications extend far beyond legal services. As AI continues to master tasks traditionally performed by highly paid professionals, industries must grapple with fundamental questions about value, pricing, and accessibility in an AI-augmented world.

This is cap. It is killing paralegal work, but nah, it’s not replacing lawyers yet.

Lowkey, Westlaw or Lexis could maybe kill the other if they incorporate AI well enough, but both of them make so much money off of per-click charges that I don’t think they’ll do this.
 

bnew

Veteran



Wall Street Job Losses May Top 200,000 as AI Replaces Roles


  • Back, middle office roles at risk, Bloomberg Intelligence says
  • Banks’ profits could surge due to improved productivity

By William Shaw

January 9, 2025 at 2:15 AM EST

Global banks will cut as many as 200,000 jobs in the next three to five years as artificial intelligence encroaches on tasks currently carried out by human workers, according to Bloomberg Intelligence.

Chief information and technology officers surveyed for BI indicated that on average they expect a net 3% of their workforce to be cut, according to a report published Thursday.

Back office, middle office and operations are likely to be most at risk, Tomasz Noetzel, the BI senior analyst who wrote the report, said in a message. Customer services could see changes as bots manage client functions, while know-your-customer duties would also be vulnerable. “Any jobs involving routine, repetitive tasks are at risk,” he said. “But AI will not eliminate them fully, rather it will lead to workforce transformation.”

Nearly a quarter of the 93 respondents predict a steeper decline of between 5% and 10% of total headcount. The peer group covered by BI includes Citigroup Inc., JPMorgan Chase & Co. and Goldman Sachs Group Inc.

The findings point to far-reaching changes in the industry, feeding through to improved earnings. In 2027, banks could see pretax profits 12% to 17% higher than they would otherwise have been — adding as much as $180 billion to their combined bottom line — as AI powers an increase in productivity, according to BI. Eight in ten respondents expect generative AI to increase productivity and revenue generation by at least 5% in the next three to five years.
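
To put that dollar figure in context, the profit base implied by those percentages can be backed out with simple arithmetic (an illustrative inference from the quoted numbers, not a figure BI reports directly):

# Back out the combined pretax-profit base implied by a 12%-17% uplift
# being worth "as much as $180 billion" (illustrative inference only).
uplift = 180e9  # $180 billion
for pct in (0.12, 0.17):
    base = uplift / pct
    print(f"{pct:.0%} uplift implies a base of about ${base / 1e12:.2f} trillion")
# 12% uplift implies a base of about $1.50 trillion
# 17% uplift implies a base of about $1.06 trillion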

Banks, which have spent years modernizing their IT systems to speed up processes and shave costs in the wake of the financial crisis, have been flocking to the new generation of AI tools that could further improve productivity.

Citi said in a report in June that AI is likely to displace more jobs across the banking industry than in any other sector. About 54% of jobs across banking have a high potential to be automated, Citi said at the time.

Still, many firms have stressed that the shift will result in roles being changed by technology, rather than replaced altogether. Teresa Heitsenrether, who oversees JPMorgan’s AI efforts, said in November that the bank’s adoption of generative AI was so far augmenting jobs.

Jamie Dimon, JPMorgan’s chief executive officer, told Bloomberg Television in 2023 that AI is likely to make dramatic improvement in workers’ quality of life, even if it eliminates some positions. “Your children are going to live to 100 and not have cancer because of technology,” Dimon said at the time. “And literally they’ll probably be working three-and-a-half days a week.”



Bar chart: “What impact do you think AI/generative AI will have on total employment level in your bank in the next 3-5 years?” Source: Bloomberg Intelligence
 

SCJoe

All Star
They were saying this was coming a decade ago on StarTalk. Andrew Yang might have been too early with the UBI stuff.
 

NoMorePie

Veteran
ya, no point in having a tutor when you can literally just ask chatgpt to explain something to you like you're 5 years old lol


"What is the Pythagorean thermodynamic thing or whatever the fukk it's called? Can you explain like I'm 5?"
 

MikelArteta

Moderator
ya, no point in having a tutor when you can literally just ask chatgpt to explain something to you like you're 5 years old lol


"What is the Pythagorean thermodynamic thing or whatever the fukk it's called? Can you explain like I'm 5?"

My niece was struggling with a math question. She just took a photo of it and AI explained it step by step.

New day
 

bnew

Veteran



A 32-year-old receptionist spent years working at a Phoenix hotel. Then it installed AI chatbots and made her job obsolete


By Fernanda Figueroa and The Associated Press

February 11, 2025 at 5:21 AM EST

Analysis from the UCLA Latino Policy Institute shows Latinos are overrepresented in fields at risk of automation. (Getty Images)

As jobs become more reliant on technology, some Latino workers can be left behind due to a lack of digital skills, compounded by a lack of access to technology.

Latinos remain an integral part of jobs in agriculture, construction, retail and food services, but these jobs are also at risk of automation, leaving some Latinos unprepared for changing roles that rely more on technology, according to a new report by researchers at the University of California, Los Angeles.

Analysis from the UCLA Latino Policy Institute shows Latinos are overrepresented in fields at risk of automation. The report also provides some suggestions on how to improve Latino economic mobility.

“This report sheds light on a critical but often overlooked reality: Automation is not just a technological issue but an equity issue,” said Misael Galdámez, co-author of the report, “On the Frontlines: Automation Risks for Latino Workers in California.”

“Latino workers are on the frontline of automation risk, facing barriers like limited English proficiency, low digital access and educational gaps,” Galdámez said.

Valerie Gills, 32, spent three years as a receptionist at a hotel in Phoenix, but in 2023, the hotel set up self-check-in kiosks and installed AI-powered chatbots on its website to handle reservations and requests. The changes made it clear that Gills was no longer essential.

“I knew businesses were always trying to improve efficiency,” she said, “but it seemed to me as though technology was taking opportunities away from us.”

Gills eventually lost her job. She found temporary positions but nothing as stable as her hotel job had been. Her employment troubles weighed on her mental health and “reminded me how weak workers like me can be when industries get everything automated.”

Ramiro Cavazos, president and CEO of the United States Hispanic Chamber of Commerce, said that while automation displaces those with limited skills, it can also create new roles, provide better working conditions and prevent burnout.

“Automation affects people that don’t have access to the Internet,” Cavazos said. “Latinos, who traditionally are the ones impacted, have made great progress as we become more educated, more skilled.”

As Gills looked for a new job, the reality set in: She would need to learn a new skill to stay ahead of the curve. Ultimately she was able to get an internship as a blog editor, but she is actively looking for a tech-study program in data analytics and software development, including boot camps online, so she can be a competitive candidate.

Cavazos said the Hispanic Chamber works with over 500 foundations to ensure corporations contribute to providing skills training that Latinos need to compete in the current job market. He has seen a lot of investment to ensure Latino workers have the skills they need to succeed.

During former President Joe Biden’s administration, the U.S. Department of Labor’s division on employment and training focused on providing Latinos a pathway to learn new skills through apprenticeship programs, said Manny Lamarre, then deputy assistant secretary for employment and training. According to Lamarre, the apprenticeships suited Latinos well because they met their need for high-paying, family-supporting careers.

“We do think it’s really important to continue to engage individuals and meet them where they are,” Lamarre said at the time. “Quite frankly the Biden-Harris Administration has been intentional about making investments supporting Latino and Hispanic communities. We are hopeful those investments continue.”

Department of Labor officials under President Donald Trump, who has issued several orders targeting diversity, equity and inclusion, or DEI, did not immediately confirm whether the apprenticeship programs are still active.

Carlos Vazquez, founder of Miami Ed Tech, a nonprofit focused on providing the skills Latino students need for upward mobility, said apprenticeships are extremely favorable to Latinos because they are paying jobs from day one that come with a promise of training.

Miami Ed Tech offers two apprenticeships in AI, data science and web design.

“Everyone thinks it’s only for like, the trades,” Vazquez said. “But that was the other thing that the Biden administration really pushed, is building more apprenticeships in nontraditional and emerging sectors. So we are kind of like, trying to also emulate that by offering in our community any employer who doesn’t know but wants to get involved, because there’s a lot of benefits. So it’s a win-win for everyone.”

Historically, Latinos have been heavily affected by automation because of their concentration in labor-intensive roles, especially in retail and food service. Cavazos said it is crucial to “teach Latinos more digital skills to evolve with the workforce.”

A study by UCLA this year found that in California, 2.3 million Latinos are employed in roles most at risk of becoming automated. Of those workers, 21% do not have access to high-speed internet at home, a hindrance to digital upskilling, the study found. A 2021 Pew Research study also found that only 67% of Hispanic adults reported owning a desktop or laptop computer, compared with about 80% of white adults.

Diana Caba, vice president for community and economic development at the Hispanic Federation, said that as the world becomes more digital, it can become a huge obstacle for Latinos who do not have digital skills. Therefore the nonprofit has partnered with over 50 community-based organizations to decrease the digital divide by providing Latinos training, including access to technology.

Caba said that of the 35,000 people they’ve trained, 70% have completed the training. She said 6,000 have gone on to new jobs, and about 10,500 have gotten a six-figure salary.

“Automation is a big threat for our community, considering that we are in these low-paying sectors that are actually very vulnerable and don’t really offer much opportunities for economic mobility,” Caba said. “It’s been really exciting to take this approach and meet the needs of increasing the digital acumen of the Latino community.”

Dylan Pravia started as an intern with Miami Ed Tech, which he said provided him the opportunity to use computers and learn about AI and programming. He is now able to use what he learned there as a mechanical engineering student at Florida International University.

“I really like what Miami Ed Tech is doing,” Pravia said. “They are providing that training, that opportunity, to students. Talent is everywhere, and we should give (Latino) students as much training as they can, all those opportunities, so we can see the talent and their skills in the world.”

Gills, the former hotel receptionist, said she understands technology is the future but automation feels like a double-edged sword. While it can open opportunities for high-skilled labor, it also diminishes the number of low-skill jobs that many Latinos need, she said.

“I’ve seen firsthand how many co-workers were swapped for kiosks or chatbots,” Gills said. “Retraining workers and preparing them for new jobs needs to be better supported by the system. To be honest, these times with automation and AI feel very unreliable and constantly changing, but I guess we just have to learn how to adapt.”

___

The Associated Press received financial support from the Sony Global Social Justice Fund to expand certain coverage areas. AP is solely responsible for all content. Find AP’s standards for working with philanthropies, a list of supporters and funded coverage areas at AP.org.
 

bnew

Veteran



Elon Musk’s A.I.-Fuelled War on Human Agency


Musk seeks not only to dismantle the federal government but to install his own technological vision of the future at its heart—techno-fascism by chatbot.

By Kyle Chayka

February 12, 2025

Illustration by Ariel Davis

Not long ago, the American public could have been forgiven for thinking of Elon Musk’s vaunted Department of Government Efficiency (DOGE) as a version of a familiar Republican cost-cutting, government-shrinking project. The man who took over Twitter, now X, and slashed its staff by around eighty per cent would take a similarly aggressive tack against bureaucratic inefficiency, reining in budgets and laying off federal employees. In the past couple of weeks, though, it’s become clear that Musk’s aim within the Trump Administration goes further: he wants not only to reduce the U.S. government but to install his own technological vision of the future at its heart.

To run his agency, Musk brought on a group of tech-company managers and inexperienced twentysomethings whose credentials included internships at SpaceX. We watched as this crew began interrogating federal employees about their jobs, interfering with the system that controls payments at the Treasury Department, and trawling government budgets while Musk used X to call out the agencies and programs in his crosshairs. The team was aided in this demolition job by a suite of tools from the burgeoning field of artificial intelligence. Thomas Shedd, a former Tesla software engineer who is now a deputy commissioner at the Federal Acquisition Service, recently told workers at the General Services Administration that the agency will be driven by an “A.I.-first strategy,” which includes plans for a chatbot to analyze its contracts. DOGE is reportedly using A.I. software to identify potential budget reductions at the Department of Education. Anecdotes are circulating about A.I. filters that scan Department of Treasury grant proposals for forbidden terms—including “climate change” and “gender identity”—and then block the proposals. “Everything that can be machine-automated will be,” one government official told the Washington Post. “And the technocrats will replace the bureaucrats.”

The federal government is, in effect, suddenly being run like an A.I. startup; Musk, an unelected billionaire, a maestro of flying cars and trips to Mars, has made the United States of America his grandest test case yet for an unproved and unregulated new technology. He is hardly alone in his efforts to frame A.I. as a societal savior that will usher in a utopian era of efficiency. The tech investor Marc Andreessen recently posted on X that wages will “logically, necessarily” crash in the A.I. era—but that A.I. will also solve the problem, by reducing the price of “goods and services” to “near zero.” (Any explanation of how that would happen was not forthcoming.) Last month, Sam Altman, the C.E.O. of OpenAI and perhaps Musk’s primary nemesis, launched a five-hundred-billion-dollar data-center initiative called Stargate with the coöperation of Trump. But Musk, with his position as a close Presidential adviser, and with office space in the White House complex, is uniquely and unprecedentedly poised to fuse the agendas of government and Silicon Valley. (On Monday, in what looked like an effort to troll Altman and derail an investment deal, Musk led a group of investors in a nearly hundred-billion-dollar bid to acquire OpenAI.) In a recent article for the advocacy nonprofit Tech Policy Press, the respected A.I. researcher Eryk Salvaggio labelled Musk’s activities as an “AI coup.”

A government run by people is cautious and slow by design; a machine-automated version will be fast and ruthless, reducing the need for either human labor or human decision-making. Musk’s program has already halted operations altogether at the U.S. Agency for International Development, which was responsible for more than forty billion dollars in foreign aid in 2023, and at the Consumer Financial Protection Bureau, an agency that may have drawn Musk’s special notice for its track record of suing tech companies for deploying loosely regulated technology. Trump and Musk both love to blame the country’s problems on the so-called deep state, the federal employees who maintain the government’s day-to-day operations. As many of those people now find themselves locked out of their offices, with their work phones deactivated, a new, inherently undemocratic deep state is moving in to fill the void: a system imposed by machines and the tiny élite who designed them. With DOGE, Musk is not only sidelining Congress and threatening to defy the courts, helping to bring the country to the point of constitutional crisis; he is also smuggling into our federal bureaucracy the seeds of a new authoritarian regime—techno-fascism by chatbot.

Some policy-related decisions in our daily lives are already made with the help of artificial intelligence. A 2020 government-commissioned report identified the use of A.I. tools in departments including the S.E.C. and the Social Security Administration; OpenAI already runs ChatGPT Gov, a self-hosted version of its chatbot that’s designed for secure government use. But the Muskian technocracy aims for something more expansive, using artificial intelligence to supplant the messy mechanisms of democracy itself. Human judgment is being replaced by answers spit out by machines without reasoned debate or oversight: cut that program, eliminate this funding, fire those employees. One of the alarming aspects of this approach is that A.I., in its current form, is simply not effective enough to replace human knowledge or reasoning. Americans got a taste of the technology’s shortcomings during the Super Bowl on Sunday, when a commercial for Google’s Gemini A.I. that ran in Wisconsin claimed, erroneously, that Gouda made up more than half of all global cheese consumption. Musk, though, appears to have few qualms about touting A.I.’s conclusions as fact. Earlier this month, on X, he accused “career Treasury officials” of breaking the law by paying vouchers that were not approved by Congress. His evidence for this claim was a passage about the law generated by Grok, X’s A.I. model, as if the program were his lawyer. (Actual human legal experts quickly disputed the claim.)

It will not be hard for Musk to yoke his vision of government by A.I. to a narrative of American exceptionalism that the MAGA crowd can get behind. Recently, a Chinese A.I. company called DeepSeek released an open-source model that produced results rivalling OpenAI’s, using far fewer resources. This stark evidence of foreign technological competition has provided cover to tech companies to push for more aggressive A.I. development in the U.S., an A.I. iteration of the space race. Already, Trump has begun rolling back the Biden Administration’s efforts at A.I. regulation. Ultimately, though, Musk’s push for A.I. in government may be best understood as a marketing tactic for a technology that Silicon Valley sees as an investment too big to fail. A.I. is meant to be powerful enough to rule the world, so rule the world it must. In a recent blog post, Altman heralded artificial general intelligence, a hypothetical A.I. model that meets or exceeds human cognitive abilities, as “just another tool in this ever-taller scaffolding of human progress we are building together,” though he admitted that “the balance of power between capital and labor could easily get messed up.” Of course, what tech entrepreneurs deem progress doesn’t always align with more prosaic understandings of the collective good. Musk’s position in the White House might teach us that disruption is more tolerable in our social networks than in our Social Security checks.
 