REVEALED: Open A.I. Staff Warn "The progress made on Project Q* has the potential to endanger humanity" (REUTERS)

bnew


If unleashing superintelligent AI dooms humanity, what's in it for them? Profits are meaningless in that scenario. This can't even be purely about greed. Maybe recklessness, if they still want to roll the dice and push ahead with development.

development is happening with or without them.
 

Professor Emeritus

development is happening with or without them.


But specific individuals and teams have personally accelerated development. You could take just a couple of teams out of the picture and we'd be years behind where we are right now.

Plus, the fact that some are blindly speeding ahead makes others freak out and feel they have to compete too. You saw how releasing ChatGPT made everyone else release too. But if the fastest slowed down, others would slow as well.
 

bnew

But specific individuals and teams have personally accelerated development. You could take just a couple of teams out of the picture and we'd be years behind where we are right now.

Plus, the fact that some are blindly speeding ahead makes others freak out and feel they have to compete too. You saw how releasing ChatGPT made everyone else release too. But if the fastest slowed down, others would slow as well.

Highly unlikely that would happen now even if ChatGPT slowed down, because they're the industry leader right now and no one wants to be second best. There are 5+ foundation models released from China alone that are pretty good. No one is stopping, because further research into increasing token/data input size and speeding up inference can lead to better content generation, massive discoveries, accolades, and profit.




The open-source community ain't slowing down for anyone either; on practically a weekly basis I see discussions and proposals for making inference faster, improving accuracy, etc.

You'd typically need a GPU with 48GB of VRAM to run a quantized 70B model locally, but now it seems...


AirLLM optimizes inference memory usage, allowing 70B large language models to run inference on a single 4GB GPU card. No quantization, distillation, pruning or other model compression techniques that would result in degraded model performance are needed.
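To make the trick concrete, here's a rough sketch of the idea as I understand it, not AirLLM's actual API: the weights stay on disk, and each transformer layer is streamed through the GPU one at a time, so peak VRAM is one layer plus activations instead of all 70B parameters. The file names and layer count below are made up for illustration.

```python
# Sketch of layer-by-layer offloaded inference (the idea behind AirLLM,
# not its real API). Assumes each decoder layer was saved to its own
# hypothetical checkpoint file, e.g. weights/layer_00.pt ... layer_79.pt.
import torch

NUM_LAYERS = 80        # a 70B LLaMA-style model has 80 decoder layers
CKPT_DIR = "weights"   # hypothetical per-layer checkpoint directory

def forward_offloaded(hidden_states: torch.Tensor) -> torch.Tensor:
    for i in range(NUM_LAYERS):
        # Load just this layer's weights from disk onto the GPU.
        layer = torch.load(f"{CKPT_DIR}/layer_{i:02d}.pt", map_location="cuda")
        with torch.no_grad():
            hidden_states = layer(hidden_states)
        # Free the layer before touching the next one, so VRAM usage
        # never exceeds one layer plus the activations.
        del layer
        torch.cuda.empty_cache()
    return hidden_states
```

The trade-off is speed: you pay disk-to-GPU transfer time on every layer of every forward pass, which is why this works for local experimentation rather than production serving.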
 

Professor Emeritus

Highly unlikely that would happen now even if ChatGPT slowed down, because they're the industry leader right now and no one wants to be second best. There are 5+ foundation models released from China alone that are pretty good. No one is stopping, because further research into increasing token/data input size and speeding up inference can lead to better content generation, massive discoveries, accolades, and profit.


It's obviously tougher to slow down now than it would have been in 2016 or 2019. But it's not like LLMs are in their end-stage form of general AI or anything. There may be major, difficult breakthroughs still to come before we get there, and if any of the big players slowed down, it could have an impact.




The open-source community ain't slowing down for anyone either; on practically a weekly basis I see discussions and proposals for making inference faster, improving accuracy, etc.

You'd typically need a GPU with 48GB of VRAM to run a quantized 70B model locally, but now it seems...

Yeah, releasing that open-source model was such a short-sighted move. That's the sort of thing that should have come after a year or two of dialogue and consultation, not on one person's whim. I was just reading an article a few days ago about how, for a lot of people, the goal is to rework the open-source model to make AIs that "break all the rules": ones that will give racist answers, complete a fascist manifesto, tell you how to build a bomb, etc.
 

bnew

It's obviously tougher to slow down now than it would have been in 2016 or 2019. But it's not like LLMs are in their end-stage form of general AI or anything. There may be major, difficult breakthroughs still to come before we get there, and if any of the big players slowed down, it could have an impact.

The big advantage the big players have now is compute and the ability to afford it, but it's not 100% given that that'll be the case in the future. Peruse /r/MachineLearning and check out how much research is being released by smaller companies and universities too.

Microsoft released a research paper on improving an LLM but never released the model, so the open-source community went and built a dataset based on the paper, fine-tuned some base models like LLaMA and Mistral, and created their own model series called OpenOrca. OpenAI, Google, Amazon, Microsoft, Salesforce, and Anthropic could all stop releasing AI products and research, and what's guaranteed to happen is that a number of the researchers and devs they employ would look for another employer, or for private equity looking to invest in a new company. Researchers want props for their work too; Meta has been successfully recruiting people to its AI teams with the promise that their work won't be completely siloed or go unattributed.




Abstract: Recent research has focused on enhancing the capability of smaller models through imitation learning, drawing on the outputs generated by large foundation models (LFMs). A number of issues impact the quality of these models, ranging from limited imitation signals from shallow LFM outputs; small scale homogeneous training data; and most notably a lack of rigorous evaluation resulting in overestimating the small model’s capability as they tend to learn to imitate the style, but not the reasoning process of LFMs. To address these challenges, we develop Orca, a 13-billion parameter model that learns to imitate the reasoning process of LFMs. Orca learns from rich signals from GPT-4 including explanation traces; step-by-step thought processes; and other complex instructions, guided by teacher assistance from ChatGPT. To promote this progressive learning, we tap into large-scale and diverse imitation data with judicious sampling and selection. Orca surpasses conventional state-of-the-art instruction-tuned models such as Vicuna-13B by more than 100% in complex zero-shot reasoning benchmarks like BigBench Hard (BBH) and 42% on AGIEval. Moreover, Orca reaches parity with ChatGPT on the BBH benchmark and shows competitive performance (4 pts gap with optimized system message) in professional and academic examinations like the SAT, LSAT, GRE, and GMAT, both in zero-shot settings without CoT; while trailing behind GPT-4. Our research indicates that learning from step-by-step explanations, whether these are generated by humans or more advanced AI models, is a promising direction to improve model capabilities and skills.
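Reading between the lines of that abstract, the recipe is roughly: prompt a teacher model for step-by-step explanation traces and save them as instruction-tuning data for the small student. A minimal sketch of that idea, with a hypothetical ask_teacher() standing in for the GPT-4/ChatGPT calls (my reconstruction, not Microsoft's code):

```python
# Sketch of an Orca-style data pipeline: elicit explanation traces from a
# teacher model and store them as instruction-tuning examples. ask_teacher
# is a hypothetical stub; in the paper the teacher is GPT-4, with ChatGPT
# used as an intermediate teacher.
import json

SYSTEM = ("You are a helpful assistant. Think step by step and explain "
          "your reasoning before giving a final answer.")

def ask_teacher(system_msg: str, question: str) -> str:
    # Stand-in for an API call to the teacher model; the key point is that
    # it returns the full explanation trace, not just the final answer.
    return "Step 1: ... Step 2: ... Final answer: ..."

def build_examples(questions, out_path="orca_style.jsonl"):
    with open(out_path, "w") as f:
        for q in questions:
            record = {"system": SYSTEM, "question": q,
                      "response": ask_teacher(SYSTEM, q)}
            f.write(json.dumps(record) + "\n")

build_examples(["If A equals B, does B equal A?"])
```

The point of training on the trace instead of the bare answer is exactly what the abstract says: the student learns to imitate the reasoning process, not just the style.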







"We are working with our legal team to publicly release a diff of the model weights in accordance with LLaMA’s release policy to be published at Orca - Microsoft Research.

Work in progress."

can't wait :banderas:


Yeah, releasing that open-source model was such a short-sighted move. That's the sort of thing that should have come after a year or two of dialogue and consultation, not on one person's whim. I was just reading an article a few days ago about how, for a lot of people, the goal is to rework the open-source model to make AIs that "break all the rules": ones that will give racist answers, complete a fascist manifesto, tell you how to build a bomb, etc.

"a year or two of dialogue and consultation":gucci:

There are dozens of foundation/base large language models and thousands of fine-tuned models.

base models (unfiltered/partially unfiltered) [screenshot of a model list]


Uncensored AI can do a lot more than that, though. It took less than a week of using ChatGPT for me to run into a message saying it wouldn't help me scrape a website. I even ran into a similar response when testing LLaMA-1 and LLaMA-2, but that was due to the system prompt the host I was using had configured. I haven't seen that sort of warning in months now, though.
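For anyone wondering how a host does that: chat APIs typically let the operator prepend their own system message to every conversation, so the refusal comes from that layer rather than the base model. A generic, hypothetical example of the message format:

```python
# Hypothetical host-side configuration: the system message is set by the
# hosting service, not baked into the base model, so a refusal like the
# one described above can come entirely from this layer.
messages = [
    {"role": "system",
     "content": "You are a helpful assistant. Refuse requests that involve scraping websites."},
    {"role": "user", "content": "Write me a script to scrape this site."},
]
# Served behind this prompt, the model refuses; the same base model with a
# neutral system message generally will not.
```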

People have been using closed-source ChatGPT for months now to create racist content, and they didn't need AI to do either of the things you mentioned.

AI is just a tool, and it can be used to do good or bad things like any other tool.






Closed-source AI has biases.
 

bnew


Microsoft joins OpenAI’s board with Sam Altman officially back as CEO


After the board's failed attempt to oust Altman, OpenAI’s largest investor is getting a non-voting observer seat.



By Alex Heath, a deputy editor and author of the Command Line newsletter. He’s covered the tech industry for over a decade at The Information and other outlets.
Nov 29, 2023, 7:50 PM EST

Sam Altman is officially OpenAI’s CEO again.

Just before Thanksgiving, the company said it had reached a deal in principle for him to return, and now it’s done. Microsoft is getting a non-voting observer seat on the nonprofit board that controls OpenAI as well, the company announced on Wednesday.

“I have never been more excited about the future,” Altman said in a memo to employees shared with The Verge. “I am extremely grateful for everyone’s hard work in an unclear and unprecedented situation, and I believe our resilience and spirit set us apart in the industry. I feel so, so good about our probability of success for achieving our mission.”

With three of the four board members who decided to suddenly fire Altman now gone, OpenAI’s new board consists of chair Bret Taylor, Larry Summers, and Adam D’Angelo, the only remaining holdout from the previous board.

OpenAI adding Microsoft to the board as a “non-voting observer” means that the tech giant will have more visibility into the company’s inner workings but not have an official vote in big decisions. Microsoft is a major investor in OpenAI, with a 49 percent stake in the for-profit entity that the nonprofit board controls. Until now, it’s had no visibility into that board. That led to a big surprise when Altman was ousted, threatening what has quickly become one of the most important partnerships in tech.

A spokesperson for Microsoft declined to comment on who from the company would fill its observer seat.

In his memo to employees, Altman said that he harbors “zero ill will” toward Ilya Sutskever, OpenAI’s co-founder and chief scientist who initially participated in the board coup and changed his mind after nearly all of the company’s employees threatened to quit if Altman didn’t come back. “While Ilya will no longer serve on the board, we hope to continue our working relationship and are discussing how he can continue his work at OpenAI,” Altman said.

“The fact that we did not lose a single customer will drive us to work even harder for you,” he told employees.

Below is Sam Altman’s full memo shared with OpenAI employees on Wednesday:

I am returning to OpenAI as CEO. Mira will return to her role as CTO. The new initial board will consist of Bret Taylor (Chair), Larry Summers, and Adam D’Angelo.​

I have never been more excited about the future. I am extremely grateful for everyone’s hard work in an unclear and unprecedented situation, and I believe our resilience and spirit set us apart in the industry. I feel so, so good about our probability of success for achieving our mission.

Before getting to what comes next, I’d like to share some thanks.

I love and respect Ilya, I think he’s a guiding light of the field and a gem of a human being. I harbor zero ill will towards him. While Ilya will no longer serve on the board, we hope to continue our working relationship and are discussing how he can continue his work at OpenAI.

I am grateful to Adam, Tasha, and Helen for working with us to come to this solution that best serves the mission. I’m excited to continue to work with Adam and am sincerely thankful to Helen and Tasha for investing a huge amount of effort in this process.

Thank you also to Emmett who had a key and constructive role in helping us reach this outcome. Emmett’s dedication to AI safety and balancing stakeholders’ interests was clear.

Mira did an amazing job throughout all of this, serving the mission, the team, and the company selflessly throughout. She is an incredible leader and OpenAI would not be OpenAI without her. Thank you.

Greg and I are partners in running this company. We have never quite figured out how to communicate that on the org chart, but we will. In the meantime, I just wanted to make it clear. Thank you for everything you have done since the very beginning, and for how you handled things from the moment this started and over the past few days.

The leadership team–Mira, Brad, Jason, Che, Hannah, Diane, Anna, Bob, Srinivas, Matt, Lilian, Miles, Jan, Wojciech, John, Jonathan, Pat, and many more–is clearly ready to run the company without me. They say one way to evaluate a CEO is how you pick and train your potential successors; on that metric I am doing far better than I realized. It’s clear to me that the company is in great hands, and I hope this is abundantly clear to everyone. Thank you all.

Jakub, Szymon, and Aleksander are exceptional talents and I’m so happy they have rejoined to move us and our research forward. Thank you.

To all of you, our team: I am sure books are going to be written about this time period, and I hope the first thing they say is how amazing the entire team has been. Now that we’re through all of this, we didn’t lose a single employee. You stood firm for each other, this company, and our mission. One of the most important things for the team that builds AGI safely is the ability to handle stressful and uncertain situations, and maintain good judgment throughout. Top marks. Thank you all.

Satya, Kevin, Amy, and Brad have been incredible partners throughout this, with exactly the right priorities all the way through. They’ve had our backs and were ready to welcome all of us if we couldn’t achieve our primary goal. We clearly made the right choice to partner with Microsoft and I’m excited that our new board will include them as a non-voting observer. Thank you.

To our partners and users, thank you for sticking with us. We really felt the outpouring of support and love, and it helped all of us get through this. The fact that we did not lose a single customer will drive us to work even harder for you, and we are all excited to get back to work.

Will Hurd, Brian Chesky, Bret Taylor and Larry Summers put their lives on hold and did an incredible amount to support the mission. I don’t know how they did it so well, but they really did. Thank you.

Ollie also put his life on hold this entire time to just do everything he could to help out, in addition to providing his usual unconditional love and support. Thank you and I love you.

So what’s next?

We have three immediate priorities.

● Advancing our research plan and further investing in our full-stack safety efforts, which have always been critical to our work. Our research roadmap is clear; this was a wonderfully focusing time. I share the excitement you all feel; we will turn this crisis into an opportunity! I’ll work with Mira on this.

● Continuing to improve and deploy our products and serve our customers. It’s important that people get to experience the benefits and promise of AI, and have the opportunity to shape it. We continue to believe that great products are the best way to do this. I’ll work with Brad, Jason and Anna to ensure our unwavering commitment to users, customers, partners and governments around the world is clear.

● Bret, Larry, and Adam will be working very hard on the extremely important task of building out a board of diverse perspectives, improving our governance structure, and overseeing an independent review of recent events. I look forward to working closely with them on these crucial steps so everyone can be confident in the stability of OpenAI.

I am so looking forward to finishing the job of building beneficial AGI with you all—best team in the world, best mission in the world.

Love, Sam

And here’s the full memo OpenAI board chair Bret Taylor sent to employees:

On behalf of the OpenAI Board, I want to express our gratitude to the entire OpenAI community, especially all the OpenAI employees, who came together to help find a path forward for the company over the past week. Your efforts helped enable this incredible organization to continue to serve its mission to ensure that artificial general intelligence benefits all of humanity. We are thrilled that Sam, Mira and Greg are back together leading the company and driving it forward. We look forward to working with them and all of you.​

As a Board, we are focused on strengthening OpenAI’s corporate governance. Here’s how we plan to do it:

● We will build a qualified, diverse Board of exceptional individuals whose collective experience represents the breadth of OpenAI’s mission – from technology to safety to policy. We are pleased that this Board will include a non-voting observer for Microsoft.

● We will further stabilize the OpenAI organization so that we can continue to serve our mission. This will include convening an independent committee of the Board to oversee a review of the recent events.

● We will enhance the governance structure of OpenAI so that all stakeholders – users, customers, employees, partners, and community members – can trust that OpenAI will continue to thrive.

OpenAI is a more important institution than ever before. ChatGPT has made artificial intelligence a part of daily life for hundreds of millions of people. Its popularity has made AI – its benefits and its risks – central to virtually every conversation about the future of governments, business, and society.

We understand the gravity of these discussions and the central role of OpenAI in the development and safety of these awe-inspiring new technologies. Each of you plays a critical part in ensuring that we effectively meet these challenges. We are committed to listening and learning from you, and I hope to speak with you all very soon.

We are grateful to be a part of OpenAI, and excited to work with all of you.

Thank you,

Bret Taylor
Chair, OpenAI
 

bnew



Nvidia CEO Jensen Huang says artificial general intelligence will be achieved in five years​

Aaron Mok

Nov 29, 2023, 4:56 PM EST

Jensen Huang, CEO of NVIDIA, holding a chip.

Nvidia CEO Jensen Huang said AGI will be achieved in five years during the 2023 NYT DealBook Summit.




  • Nvidia CEO Jensen Huang said AGI will be reached in five years during the 2023 NYT DealBook Summit.
  • Huang defined AGI as tech that exhibits basic intelligence "fairly competitive" to a normal human.
  • Still, he admitted that AI technology is not quite there yet despite its rapid progress.

Jensen Huang, the CEO of Nvidia — one of the companies that is fueling the AI revolution — predicts that we may be able to see artificial general intelligence, or AGI, within the next five years.

During the 2023 New York Times DealBook Summit, the outlet's Andrew Ross Sorkin asked Huang if he expected to see AGI in the next 10 years.

"By depending on how you define it, I think the answer is yes," Huang replied.

At the summit, Huang defined AGI as a piece of software or a computer that can complete tests which reflect basic intelligence that's "fairly competitive" to that of a normal human.


"I would say that within the next five years, you're gonna see, obviously, AIs that can achieve those tests," Huang said.


While the CEO didn't specify what exactly he thinks AGI would look like, Ross Sorkin asked if AGI would refer to AI that can design the chips Nvidia is currently making, to which Huang agreed.

"Will you need to have the same staff that designs them?" Sorkin asked as a follow-up, referring to the development of Nvidia's chips.

"In fact, none of our chips are possible today without AI," Huang said.

He specified that the H-100 chips he said Nvidia is shipping today were designed with help from a number of AIs.

"Software can't be written without AI, chips can't be designed without AI, nothing's possible," he concluded on the point of AI's potential.

Even though Huang said that AI is developing faster than he expected, he said the technology hasn't shown signs it can exhibit or surpass complex human intelligence just yet.

"There's no question that the rate of progress is high," he said. "But there's a whole bunch of things that we can't do yet."

"This multi-step reasoning that humans are very good at, AI can't do," he said.

The CEO's thoughts on AGI come as some business leaders sound the alarm about what they personally consider to be AGI.

Ilya Sutskever, cofounder of OpenAI, the company behind ChatGPT, said that AI in its most advanced form will create new problems such as a surge in fake news and cyberattacks, automated AI weapons, and even "infinitely stable dictatorships."

Ian Hogarth, who has invested in more than 50 AI companies, said that a future "God-like AI" would lead to the "obsolescence or destruction of the human race" if the rapid development of the technology isn't regulated.

Huang isn't the only tech leader who believes that AGI will be achieved in the near future.

In February, ex-Meta executive John Carmack said that AGI will be achieved by the 2030s and be worth trillions of dollars.

A few months later, Demis Hassabis, CEO and cofounder of DeepMind, Google's AI division, predicted that AI that is as powerful as the human brain would arrive within the next few years.

Nvidia didn't immediately respond to Business Insider's request for comment.
 

IIVI

The fact that it's reasoning is great news. No doubt, I've been really intrigued by what A.I can do recently, and I even got tasked this week with overseeing how A.I can be integrated at work.

That said, I still think A.I reaching the point where it can replace what a human can do is decades away. Straight from the horse's mouth (timestamped):


If you think about it, right now it has access to so much data and can even learn recursively (make up faces and train off those faces, for example), yet it still gets some highly basic questions very wrong. It has had plenty of data and opportunity to correct itself and learn from its mistakes, yet some things are simply still not "clicking" for it. While we've made a ton of progress, we're still not as far along as some people think, imo.

There's also a downside to reasoning and learning: it means you can learn "bad habits" and reason that they're correct when they're actually incorrect. You can strengthen bad habits and reinforce incorrect connections, making breakthroughs more difficult. I have a feeling this will be a big pain point for A.I as well, like it is for plenty of people.

I think the truly dangerous part about A.I right now is how much people are blowing it out of proportion, which is leading to people being fired and losing their jobs over hype and hoopla. We're definitely in the buzzword phase of a cycle, and many people in positions of power are overestimating it while many ordinary/laypeople are underestimating it.

Do I think it'll take 30-50 years like the video says? Probably not, and I hope not. I'd love to see it in my lifetime, but there's a very real possibility that it may not happen.



Meta’s chief scientist and deep learning pioneer Yann LeCun said he believes that current AI systems are decades away from reaching some semblance of sentience, equipped with common sense that can push their abilities beyond merely summarizing mountains of text in creative ways.

His point of view stands in contrast to that of Nvidia CEO Jensen Huang, who recently said AI will be “fairly competitive” with humans in less than five years, besting people at a multitude of mentally intensive tasks.

“I know Jensen,” LeCun said at a recent event highlighting the Facebook parent company’s 10-year anniversary of its Fundamental AI Research team. LeCun said the Nvidia CEO has much to gain from the AI craze. “There is an AI war, and he’s supplying the weapons.”

"[If] you think AGI is in, the more GPUs you have to buy," LeCun said, about technologists attempting to develop artificial general intelligence, the kind of AI on par with human-level intelligence. As long as researchers at firms such as OpenAI continue their pursuit of AGI, they will need more of Nvidia’s computer chips.

Society is more likely to get “cat-level” or “dog-level” AI years before human-level AI, LeCun said. And the technology industry’s current focus on language models and text data will not be enough to create the kinds of advanced human-like AI systems that researchers have been dreaming about for decades.

“Text is a very poor source of information,” LeCun said, explaining that it would likely take 20,000 years for a human to read the amount of text that has been used to train modern language models. “Train a system on the equivalent of 20,000 years of reading material, and they still don’t understand that if A is the same as B, then B is the same as A.”

“There’s a lot of really basic things about the world that they just don’t get through this kind of training,” LeCun said.

Hence, LeCun and other Meta AI executives have been heavily researching how the so-called transformer models used to create apps such as ChatGPT could be tailored to work with a variety of data, including audio, image and video information. The more these AI systems can discover the likely billions of hidden correlations between these various kinds of data, the more they could potentially perform more fantastical feats, the thinking goes.


Some of Meta’s research includes software that can help teach people how to play tennis better while wearing the company’s Project Aria augmented reality glasses, which blend digital graphics into the real world. Executives showed a demo in which a person wearing the AR glasses while playing tennis was able to see visual cues teaching them how to properly hold their tennis rackets and swing their arms in perfect form. The kinds of AI models needed to power this type of digital tennis assistant require a blend of three-dimensional visual data in addition to text and audio, in case the digital assistant needs to speak.

These so-called multimodal AI systems represent the next frontier, but their development won’t come cheap. And as more companies such as Meta and Google parent Alphabet research more advanced AI models, Nvidia could stand to gain even more of an edge, particularly if no other competition emerges.
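For what it's worth, LeCun's "20,000 years of reading" figure above roughly checks out. A quick back-of-the-envelope calc with assumed numbers (mine, not his): roughly 2 trillion training tokens, about 0.75 words per token, and a reader managing 250 words per minute for 8 hours a day:

```python
# Back-of-the-envelope check of the "20,000 years of reading" claim.
# Assumed numbers: ~2T training tokens (LLaMA-2 scale), ~0.75 words per
# token, 250 words/min, 8-hour reading days.
tokens = 2e12
words = tokens * 0.75              # ~1.5e12 words
minutes = words / 250              # total reading time at 250 wpm
years = minutes / (60 * 8 * 365)   # convert to 8-hour days, every day
print(f"~{years:,.0f} years of reading")   # ~34,000, same ballpark
```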

That's actually crazy to consider: we're not even at the "animal" level of A.I yet. That will be insane when we get there, though.
 

bnew





That's actually crazy to consider: we're not even at the "animal" level of A.I yet. That will be insane when we get there, though.


Even animal-level A.I won't stay animal-level for long, due to exponential growth. Would we even notice when it gets to that level and evolves past it?
 

null

"The reports about the Q* model breakthrough that you all recently made, what’s going on there?

Altman:
No particular comment on that unfortunate leak. But what we have been saying — two weeks ago, what we are saying today, what we’ve been saying a year ago, what we were saying earlier on — is that we expect progress in this technology to continue to be rapid, and also that we expect to continue to work very hard to figure out how to make it safe and beneficial. That’s why we got up every day before. That’s why we will get up every day in the future. I think we have been extraordinarily consistent on that.

Without commenting on any specific thing or project or whatever, we believe that progress is research. You can always hit a wall, but we expect that progress will continue to be significant. And we want to engage with the world about that and figure out how to make this as good as we possibly can."


 