bnew

Veteran
Joined
Nov 1, 2015
Messages
56,130
Reputation
8,239
Daps
157,835
You can sign up right now and the US army will teach you how to kill 1000 people. That's gotta be the dumbest argument I ever heard.
not really, an A.I assistant paired with a psychopath willing to ruthlessly carry out every instruction given is a dangerous combination. for years prosecutors have used people's incriminating google searches to help get a conviction. It's entirely conceivable A.I could help an individual commit the perfect crime.
 

bnew

Veteran
Joined
Nov 1, 2015
Messages
56,130
Reputation
8,239
Daps
157,835
<a href="Bill Gates says A.I. chatbots will teach kids to read within 18 months: You'll be 'stunned by how it helps'


Bill Gates says A.I. chatbots will teach kids to read within 18 months: You’ll be ‘stunned by how it helps’

Published Sat, Apr 22 2023 8:30 AM EDT
Tom Huddleston Jr.


Microsoft founder Bill Gates reacts during a visit with Britain’s Prime Minister Rishi Sunak of the Imperial College University, in London, Britain, February 15, 2023. Justin Tallis | Pool | Reuters

Soon, artificial intelligence could help teach your kids and improve their grades.

That’s according to billionaire Microsoft co-founder Bill Gates, who says AI chatbots are on track to help children learn to read and hone their writing skills in 18 months’ time.

“The AI’s will get to that ability, to be as good a tutor as any human ever could,” Gates said in a keynote talk on Tuesday at the ASU+GSV Summit in San Diego.

AI chatbots, like OpenAI’s ChatGPT and Google’s Bard, have developed rapidly over the past several months, and can now compete with human-level intelligence on certain standardized tests. That growth has sparked both excitement over the technology’s potential and debate over the possible negative consequences.

Count Gates in the camp of people who are impressed. Today’s chatbots have “incredible fluency at being able to read and write,” which will soon help them teach students to improve their own reading and writing in ways that technology never could before, he said.

“At first, we’ll be most stunned by how it helps with reading — being a reading research assistant — and giving you feedback on writing,” said Gates.

Historically, teaching writing skills has proven to be an incredibly difficult task for a computer, Gates noted. When teachers give feedback on essays, they look for traits like narrative structure and clarity of prose — a “high-cognitive exercise” that’s “tough” for developers to replicate in code, he said.

But AI chatbots’ ability to recognize and recreate human-like language changes that dynamic, proponents say.

Kevin Roose, a New York Times tech columnist, wrote last month that he’s already used programs like ChatGPT to improve his writing, using the AI’s ability to quickly search through style guides online.

Some academics say they’re impressed by chatbots’ ability to summarize and offer feedback on pieces of text, or even to write full essays themselves.

However, those same academics warn that the technology is not yet fully formed, and can inadvertently introduce significant errors or misinformation. And AI technology must improve at reading and recreating human language to better motivate students before it can become a viable tutor, Gates said.

“If you just took the next 18 months, the AIs will come in as a teacher’s aide and give feedback on writing,” said Gates. “And then they will amp up what we’re able to do in math.”

The idea that chatbots will excel at reading and writing before math may be somewhat surprising: Algebra and calculus are often used to develop AI technology.

But chatbots, which are trained on large datasets, often struggle with mathematical calculations, experts note. If a solved math equation already exists within the datasets the chatbot is trained on, it can provide you with the answer. But calculating its own solution is a different story.

Gates said he regularly asks Microsoft AI developers why chatbots can’t perform relatively simple calculations, or even multiply some numbers. The answer: AI needs improved reasoning abilities to handle the complexity of a math calculation.

It may take some time, but Gates is confident the technology will improve, likely within two years, he said. Then, it could help make private tutoring available to a wide swath of students who might otherwise be unable to afford it.

That’s not to say it’ll be free, though. ChatGPT and Bing both have limited free versions now, but the former rolled out a $20-per-month subscription plan called ChatGPT Plus in February.

Still, Gates said it’ll at least be more affordable and accessible than one-on-one tutoring with a human instructor.

“This should be a leveler,” he said. “Because having access to a tutor is too expensive for most students — especially having that tutor adapt and remember everything that you’ve done and look across your entire body of work.”
 

bnew

Veteran
Joined
Nov 1, 2015
Messages
56,130
Reputation
8,239
Daps
157,835

Artificial intelligence is infiltrating health care. We shouldn’t let it make all the decisions.


AI paternalism could put patient autonomy at risk—if we let it.

By Jessica Hamzelou
April 21, 2023


STEPHANIE ARNETT/MITTR | GETTY IMAGES

Would you trust medical advice generated by artificial intelligence? It’s a question I’ve been thinking over this week, in view of yet more headlines proclaiming that AI technologies can diagnose a range of diseases. The implication is often that they’re better, faster, and cheaper than medically trained professionals.

Many of these technologies have well-known problems. They’re trained on limited or biased data, and they often don’t work as well for women and people of color as they do for white men. Not only that, but some of the data these systems are trained on are downright wrong.

There's another problem. As these technologies begin to infiltrate health-care settings, researchers say we’re seeing a rise in what’s known as AI paternalism. Paternalism in medicine has been problematic since the dawn of the profession. But now, doctors may be inclined to trust AI at the expense of a patient’s own lived experiences, as well as their own clinical judgment.

AI is already being used in health care. Some hospitals use the technology to help triage patients. Some use it to aid diagnosis, or to develop treatment plans. But the true extent of AI adoption is unclear, says Sandra Wachter, a professor of technology and regulation at the University of Oxford in the UK.

“Sometimes we don’t actually know what kinds of systems are being used,” says Wachter. But we do know that their adoption is likely to increase as the technology improves and as health-care systems look for ways to reduce costs, she says.

Research suggests that doctors may already be putting a lot of faith in these technologies. In a study published a few years ago, oncologists were asked to compare their diagnoses of skin cancer with the conclusions of an AI system. Many of them accepted the AI’s results, even when those results contradicted their own clinical opinion.

There’s a very real risk that we’ll come to rely on these technologies to a greater extent than we should. And here’s where paternalism could come in.

“Paternalism is captured by the idiom ‘the doctor knows best,’” write Melissa McCradden and Roxanne Kirsch of the Hospital for Sick Children in Ontario, Canada, in a recent scientific journal paper. The idea is that medical training makes a doctor the best person to make a decision for the person being treated, regardless of that person’s feelings, beliefs, culture, and anything else that might influence the choices any of us make.

“Paternalism can be recapitulated when AI is positioned as the highest form of evidence, replacing the all-knowing doctor with the all-knowing AI,” McCradden and Kirsch continue. They say there is a “rising trend toward algorithmic paternalism.” This would be problematic for a whole host of reasons.

For a start, as mentioned above, AI isn’t infallible. These technologies are trained on historical data sets that come with their own flaws. “You’re not sending an algorithm to med school and teaching it how to learn about the human body and illnesses,” says Wachter.

As a result, “AI cannot understand, only predict,” write McCradden and Kirsch. An AI could be trained to learn which patterns in skin cell biopsies have been associated with a cancer diagnosis in the past, for example. But the doctors who made those past diagnoses and collected that data might have been more likely to miss cases in people of color.

And identifying past trends won’t necessarily tell doctors everything they need to know about how a patient’s treatment should continue. Today, doctors and patients should collaborate in treatment decisions. Advances in AI use shouldn’t diminish patient autonomy.

So how can we prevent that from happening? One potential solution involves designing new technologies that are trained on better data. An algorithm could be trained on information about the beliefs and wishes of various communities, as well as diverse biological data, for instance. Before we can do that, we need to actually go out and collect that data—an expensive endeavor that probably won’t appeal to those who are looking to use AI to cut costs, says Wachter.

Designers of these AI systems should carefully consider the needs of the people who will be assessed by them. And they need to bear in mind that technologies that work for some groups won’t necessarily work for others, whether that’s because of their biology or their beliefs. “Humans are not the same everywhere,” says Wachter.

The best course of action might be to use these new technologies in the same way we use well-established ones. X-rays and MRIs are used to help inform a diagnosis, alongside other health information. People should be able to choose whether they want a scan, and what they would like to do with their results. We can make use of AI without ceding our autonomy to it.
 

bnew

Veteran
Joined
Nov 1, 2015
Messages
56,130
Reputation
8,239
Daps
157,835

Google’s AI panic forces merger of rival divisions, DeepMind and Brain

Alphabet's two AI groups, which reportedly don't get along, are merging.

RON AMADEO

Google DeepMind will presumably be getting a rainbow logo, but for now here's the old DeepMind logo.


Google's 'Code Red' panic over the rise of ChatGPT and its failure to excite the world with its AI products is resulting in a big merger. Alphabet's two big AI teams, the independent Alphabet company DeepMind and the "Google Brain" AI division, are merging to form "Google DeepMind." Google and DeepMind have both released blog posts. Google CEO Sundar Pichai says merging the two units will "help us build more capable systems more safely and responsibly."

DeepMind was a UK AI research lab that Google acquired in 2014. Since then it has lived as an independent Alphabet company, separate from Google, with CEO Demis Hassabis. DeepMind most famously captured the world's attention around 2017 when it built AlphaGo, a computer taught to play the ancient Chinese game of Go. This was previously thought to be an impossible task for computers due to the incredible number (10^360) of moves required to play the game. Since then, the company has tackled protein folding, video games like StarCraft II, the WaveNet voice synthesis system, and matrix multiplication.

The Google Brain division was run by Jeff Dean. This group has kept a much lower profile by only doing research and making incremental improvements to existing Google products. The Brain team invented and open-sourced the "Transformer" neural network architecture that led to chatbots like ChatGPT. (The "GPT" in "ChatGPT" stands for "Generative Pre-trained Transformer".)

DeepMind's leader, Hassabis, will now be CEO of the new Google DeepMind, while Dean, the leader of Google's AI division, will get the title of "chief scientist" at Google DeepMind. That might sound like he works under the Google DeepMind CEO, but a CNBC report, which includes Google's internal memo on the change, says Dean will report to Pichai.

If that dual-leadership arrangement sounds politically fraught, it is! A March report from The Information detailed that, in the wake of the ChatGPT "code red," the Google Brain team and DeepMind were forced to "pause grudges" built up from "years of intense rivalry" and work together. The report says the two groups "have seldom collaborated or shared computer code with one another" but now would be jointly building "Gemini," a recently started project that represents a second swing at a ChatGPT competitor, after Google Bard. The Information described that single project as a "forced marriage," and now, a month later, the two groups are fully merging.

A big concern of Google's Code Red panic is the company's perceived lack of impact in the AI space. OpenAI's ChatGPT and its Bing partnership both put AI front and center as the primary product, and that gives the impression that Google's AI technology has been left in the dust. That's a big problem for the stock market, which these days seems to be Google's primary focus. DeepMind has been great at generating publicity with research projects like AlphaGo, which generated a ton of tech news headlines and a glitzy livestreamed "Humanity versus computers" Go series.

DeepMind doesn't sell any products, though, and only bills Google for internal work. All of Google's AI productization work at DeepMind and Google Brain has treated AI as a background helper that does things like making searches slightly better or bringing small improvements to translation. The Google Assistant, which has been deprioritized, has probably been the only exception. In the DeepMind blog post, Hassabis seems to indicate that this will change, saying the new Google DeepMind wants to "create the next generation of AI breakthroughs and products across Google and Alphabet, and to do this in a bold and responsible way."
 

bnew

Veteran
Joined
Nov 1, 2015
Messages
56,130
Reputation
8,239
Daps
157,835
Linux command line + <insert your fave GPT model> is a game-changer. I highly recommend everyone jump on board ASAP. There's something about talking to these Large Language Models via a command prompt that feels equal parts minimalistic and futuristic.

This dude reverse-engineered the websites so that you can connect to them very easily:

One of my fave tasks is just getting shorthand code/explanations on errors or even translating a normal request into a linux command:
[terminal screenshot]

(I do have some aliases in bash to ask questions more concisely btw)


I also want to mention Shell_GPT. It requires an OpenAI API key, but unlike the other options (AFAIK), it lets you create numerous different chats (read: contexts) for conversations as well as set the role/backstory that you want ChatGPT to play in the convo. You can technically do this via prompt engineering, but if you want the context to persist, roles are a godsend (and they save you tokens if you're on a pay-as-you-go plan).
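
To make the roles/contexts idea concrete, here's a minimal sketch (not Shell_GPT's actual code) of what a persistent role boils down to: a fixed system message plus the saved conversation turns get sent with every request. It assumes the official openai Python package (v1 client); the model name, role text, and example question are placeholders.

```python
# Minimal sketch of a persistent "role" plus chat context, assuming the openai
# Python package (v1 client). Model name, role text, and the sample question
# are placeholders, not anything Shell_GPT ships with.
import sys
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

ROLE = ("You are a terse Linux assistant. When asked for a command, "
        "reply with the command only, no explanation.")

history = []  # prior turns, so follow-up questions keep their context

def ask(question):
    # Every request carries the fixed role plus the conversation so far.
    messages = ([{"role": "system", "content": ROLE}]
                + history
                + [{"role": "user", "content": question}])
    reply = client.chat.completions.create(
        model="gpt-3.5-turbo",  # placeholder; use whatever model you have access to
        messages=messages,
    ).choices[0].message.content
    history.extend([{"role": "user", "content": question},
                    {"role": "assistant", "content": reply}])
    return reply

if __name__ == "__main__":
    print(ask(" ".join(sys.argv[1:]) or "find all files over 100MB in my home directory"))
```

Point a bash alias at that script and you get the same concise command-line feel described above; Shell_GPT handles persisting the role and chat history between invocations for you.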

My current workflow is to use one of the free versions at the top for general questions, and the pay-as-you-go Shell_GPT setup for personal projects that need more attention to detail or that benefit from giving the AI a more detailed context.
:ehh:
 

bnew

Veteran
Joined
Nov 1, 2015
Messages
56,130
Reputation
8,239
Daps
157,835

The Capacity for Moral Self-Correction in Large Language Models

snippet:

Abstract
We test the hypothesis that language models trained with reinforcement learning from human feedback (RLHF) have the capability to “morally self-correct”—to avoid producing harmful outputs—if instructed to do so. We find strong evidence in support of this hypothesis across three different experiments, each of which reveal different facets of moral self-correction. We find that the capability for moral self-correction emerges at 22B model parameters, and typically improves with increasing model size and RLHF training. We believe that at this level of scale, language models obtain two capabilities that they can use for moral self-correction: (1) they can follow instructions and (2) they can learn complex normative concepts of harm like stereotyping, bias, and discrimination. As such, they can follow instructions to avoid certain kinds of morally harmful outputs. We believe our results are cause for cautious optimism regarding the ability to train language models to abide by ethical principles.
 

bnew

Veteran
Joined
Nov 1, 2015
Messages
56,130
Reputation
8,239
Daps
157,835

Scaling Transformer to 1M tokens and beyond with RMT

Aydar Bulatov, Yuri Kuratov, Mikhail S. Burtsev
This technical report presents the application of a recurrent memory to extend the context length of BERT, one of the most effective Transformer-based models in natural language processing. By leveraging the Recurrent Memory Transformer architecture, we have successfully increased the model's effective context length to an unprecedented two million tokens, while maintaining high memory retrieval accuracy. Our method allows for the storage and processing of both local and global information and enables information flow between segments of the input sequence through the use of recurrence. Our experiments demonstrate the effectiveness of our approach, which holds significant potential to enhance long-term dependency handling in natural language understanding and generation tasks as well as enable large-scale context processing for memory-intensive applications.
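
For anyone wondering what "recurrent memory" looks like in practice, here's a rough toy sketch of the idea (my own illustration, not the authors' code): split the long input into segments, prepend a handful of learned memory embeddings to each segment, and carry the memory slots produced by one segment into the next so information can flow across the whole sequence. It assumes PyTorch; the vocabulary size, layer sizes, and the plain TransformerEncoder backbone are placeholder choices, not the BERT setup used in the paper.

```python
# Toy sketch of segment-level recurrence with memory tokens (assumes PyTorch).
# Sizes and the plain TransformerEncoder backbone are illustrative placeholders.
import torch
import torch.nn as nn

class RecurrentMemorySketch(nn.Module):
    def __init__(self, vocab_size=30522, d_model=256, n_mem=8, seg_len=128):
        super().__init__()
        self.seg_len = seg_len
        self.n_mem = n_mem
        self.embed = nn.Embedding(vocab_size, d_model)
        self.init_memory = nn.Parameter(0.02 * torch.randn(n_mem, d_model))
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)

    def forward(self, token_ids):
        # token_ids: (batch, total_len); total_len can far exceed seg_len
        batch = token_ids.size(0)
        memory = self.init_memory.unsqueeze(0).expand(batch, -1, -1)
        outputs = []
        for segment in token_ids.split(self.seg_len, dim=1):
            x = torch.cat([memory, self.embed(segment)], dim=1)  # [memory | tokens]
            h = self.encoder(x)
            memory = h[:, :self.n_mem]         # updated memory, passed to next segment
            outputs.append(h[:, self.n_mem:])  # representations of this segment's tokens
        return torch.cat(outputs, dim=1), memory

# A 1,024-token input is processed as eight 128-token segments, each attending
# only to itself plus the memory slots carried over from earlier segments.
model = RecurrentMemorySketch()
hidden, final_memory = model(torch.randint(0, 30522, (2, 1024)))
print(hidden.shape)  # torch.Size([2, 1024, 256])
```

As I understand the paper, the real model also trains with backpropagation through time across segments, but the segment-plus-memory loop is the core trick that lets the effective context grow far beyond what a single attention window could handle.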


edit:

 
Last edited:

jadillac

Veteran
Joined
Apr 30, 2012
Messages
54,567
Reputation
8,607
Daps
167,036
If I want to make an AI singing voice with MY voice (not Kanye or Drake etc) what's a good AI training model website/app?
 