bnew

Veteran
Joined
Nov 1, 2015
Messages
56,111
Reputation
8,239
Daps
157,792


Vandelay

Life is absurd. Lean into it.
Joined
Apr 14, 2013
Messages
23,512
Reputation
5,819
Daps
81,986
Reppin
Phi Chi Connection
@bnew Do you think they should go on a moratorium until they can legislate appropriately for large language models and other AI systems?
 

bnew

No.

Technology will always outpace legislation. :manny:

I have downloaded multiple large language models, some of them via torrents. What legislation could possibly stop people from using or sharing them?
Code is free speech, and even if they could stop companies from continuing development, the tools are open source and nobody can stop that.
 

Vandelay

It will always outpace, but I don't exactly think free speech is the same thing. I'm not sure what your political leanings are, but wouldn't it be akin to laissez-faire capitalism in that it may cause more harm than good? The government is woefully behind on regulating technology, but isn't it concerning that other countries are cautiously regulating it because they don't exactly know what this is yet?
 

bnew

I doubt it'll cause more harm than good. All technological advances come with their pitfalls: people protested against the use of electricity and tried warning the public about its dangers, and the dangers were real. Electrocution and electrical fires have killed many people and still do, but we wouldn't give it up to go back to life without electricity.

Large language models aren't filled with top-secret information; it's just condensed information people can access with fewer steps.
 

bnew


Learn Code FAST! - Easy Metaphors

Productivity · Software development · Academic · Learning

If you are struggling to understand a particular piece of code, algorithm, data structure, LeetCode problem, or anything else related to computer science, MetaGPT is here to help! It will explain the concept you are struggling with using easy-to-visualize metaphors and real-world scenarios. Learning complicated coding concepts is much more intuitive now! Just paste your code, coding problem, or concept below and it will explain it to you.

 

bnew


WILL KNIGHT · BUSINESS · APR 17, 2023 7:00 AM

OpenAI’s CEO Says the Age of Giant AI Models Is Already Over


Sam Altman says the research strategy that birthed ChatGPT is played out and future strides in artificial intelligence will require new ideas.

Sam Altman. PHOTOGRAPH: JASON REDMOND/GETTY IMAGES

THE STUNNING CAPABILITIES of ChatGPT, the chatbot from startup OpenAI, have triggered a surge of new interest and investment in artificial intelligence. But late last week, OpenAI’s CEO warned that the research strategy that birthed the bot is played out. It's unclear exactly where future advances will come from.

OpenAI has delivered a series of impressive advances in AI that works with language in recent years by taking existing machine-learning algorithms and scaling them up to previously unimagined size. GPT-4, the latest of those projects, was likely trained using trillions of words of text and many thousands of powerful computer chips. The process cost over $100 million.

But the company’s CEO, Sam Altman, says further progress will not come from making models bigger. “I think we're at the end of the era where it's going to be these, like, giant, giant models,” he told an audience at an event held at MIT late last week. “We'll make them better in other ways.”


Altman’s declaration suggests an unexpected twist in the race to develop and deploy new AI algorithms. Since OpenAI launched ChatGPT in November, Microsoft has used the underlying technology to add a chatbot to its Bing search engine, and Google has launched a rival chatbot called Bard. Many people have rushed to experiment with using the new breed of chatbot to help with work or personal tasks.

Meanwhile, numerous well-funded startups, including Anthropic, AI21, Cohere, and Character.AI, are throwing enormous resources into building ever larger algorithms in an effort to catch up with OpenAI’s technology. The initial version of ChatGPT was based on a slightly upgraded version of GPT-3, but users can now also access a version powered by the more capable GPT-4.



Altman’s statement suggests that GPT-4 could be the last major advance to emerge from OpenAI’s strategy of making the models bigger and feeding them more data. He did not say what kind of research strategies or techniques might take its place. In the paper describing GPT-4, OpenAI says its estimates suggest diminishing returns on scaling up model size. Altman said there are also physical limits to how many data centers the company can build and how quickly it can build them.

Nick Frosst, a cofounder at Cohere who previously worked on AI at Google, says Altman’s feeling that going bigger will not work indefinitely rings true. He, too, believes that progress on transformers, the type of machine learning model at the heart of GPT-4 and its rivals, lies beyond scaling. “There are lots of ways of making transformers way, way better and more useful, and lots of them don’t involve adding parameters to the model,” he says. Frosst says that new AI model designs, or architectures, and further tuning based on human feedback are promising directions that many researchers are already exploring.

Each version of OpenAI’s influential family of language algorithms consists of an artificial neural network, software loosely inspired by the way neurons work together, which is trained to predict the words that should follow a given string of text.
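The next-word-prediction objective described above can be illustrated with a deliberately crude stand-in: a bigram model that just counts which word follows which. Real language models use neural networks over subword tokens rather than word counts, and the training string here is invented for illustration.

```python
from collections import Counter, defaultdict

def train_bigram(text):
    """Count how often each word follows another -- a toy stand-in
    for the 'predict the next word' training objective."""
    words = text.split()
    counts = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    return counts

def predict_next(counts, word):
    """Return the most frequent continuation seen during training."""
    if word not in counts:
        return None
    return counts[word].most_common(1)[0][0]

model = train_bigram("the cat sat on the mat and the cat slept")
print(predict_next(model, "the"))  # 'cat' (seen twice, vs 'mat' once)
```

A neural network replaces the count table with learned weights, which is what lets it generalize to word sequences it never saw during training.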


The first of these language models, GPT-2, was announced in 2019. In its largest form, it had 1.5 billion parameters, a measure of the number of adjustable connections between its crude artificial neurons.

At the time, that was extremely large compared to previous systems, thanks in part to OpenAI researchers finding that scaling up made the model more coherent. And the company made GPT-2’s successor, GPT-3, announced in 2020, still bigger, with a whopping 175 billion parameters. That system’s broad abilities to generate poems, emails, and other text helped convince other companies and research institutions to push their own AI models to similar and even greater size.
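A common back-of-the-envelope formula for decoder-only transformers is roughly 12 × layers × hidden_size², counting the attention and MLP weight matrices and ignoring embeddings and biases. Plugging in the published GPT-2 XL and GPT-3 hyperparameters roughly reproduces the 1.5-billion and 175-billion figures above; this is an approximation, not how those counts were officially computed.

```python
def approx_params(n_layers, d_model):
    # Rule of thumb for decoder-only transformers:
    # ~12 * n_layers * d_model^2 weights in attention + MLP blocks,
    # ignoring embedding tables and biases.
    return 12 * n_layers * d_model ** 2

# Published GPT-2 XL hyperparameters: 48 layers, hidden size 1600
print(f"GPT-2 XL ~{approx_params(48, 1600) / 1e9:.1f}B")   # ~1.5B
# Published GPT-3 hyperparameters: 96 layers, hidden size 12288
print(f"GPT-3    ~{approx_params(96, 12288) / 1e9:.0f}B")  # ~174B
```

The quadratic term in hidden size is why "going bigger" gets expensive so quickly: doubling the width alone quadruples the weight count of every layer.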

After ChatGPT debuted in November, meme makers and tech pundits speculated that GPT-4, when it arrived, would be a model of vertigo-inducing size and complexity. Yet when OpenAI finally announced the new artificial intelligence model, the company didn’t disclose how big it is—perhaps because size is no longer all that matters. At the MIT event, Altman was asked if training GPT-4 cost $100 million; he replied, “It’s more than that.”

Although OpenAI is keeping GPT-4’s size and inner workings secret, it is likely that some of its intelligence already comes from looking beyond just scale. One possibility is that it used a method called reinforcement learning with human feedback, which was used to enhance ChatGPT. It involves having humans judge the quality of the model’s answers to steer it toward providing responses more likely to be judged as high quality.
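The preference-steering idea can be sketched with a toy example. This is not OpenAI's actual pipeline — real RLHF fits a neural reward model to the human judgments and then optimizes the language model against it with reinforcement learning — and the response labels and preference pairs below are invented for illustration.

```python
from collections import Counter

def reward_from_preferences(pairs):
    """Toy 'reward model': score each response by how often human
    raters preferred it in pairwise comparisons (wins minus losses)."""
    score = Counter()
    for winner, loser in pairs:
        score[winner] += 1
        score[loser] -= 1
    return score

def best_of(candidates, score):
    """Best-of-n selection: return the candidate the reward prefers."""
    return max(candidates, key=lambda c: score[c])

# Hypothetical human judgments over three canned answers
prefs = [("helpful", "rude"), ("helpful", "vague"), ("vague", "rude")]
score = reward_from_preferences(prefs)
print(best_of(["rude", "vague", "helpful"], score))  # 'helpful'
```

The key point survives the simplification: human comparisons, not more training text, are what push the model toward answers people rate highly.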

The remarkable capabilities of GPT-4 have stunned some experts and sparked debate over the potential for AI to transform the economy but also spread disinformation and eliminate jobs. Some AI experts, tech entrepreneurs including Elon Musk, and scientists recently wrote an open letter calling for a six-month pause on the development of anything more powerful than GPT-4.

At MIT last week, Altman confirmed that his company is not currently developing GPT-5. “An earlier version of the letter claimed OpenAI is training GPT-5 right now,” he said. “We are not, and won't for some time.”
 

Macallik86

Superstar
Supporter
Joined
Dec 4, 2016
Messages
6,510
Reputation
1,372
Daps
21,233
Google's Bard chatbot now has the capability to help you with programming tasks. The tech giant said that coding has been one of its users' top requests, and now it has given Bard the ability to generate, debug and explain code. Bard can now write in 20 programming languages, including C++, Java, JavaScript and Python. It now also features integration with Google's other products and can export code to Colab, the company's cloud-based notebook environment for Python, as well as help users write functions for Sheets.

Aside from being able to generate code, Bard can now provide explanations for snippets of code. It could be especially useful if you've only just started learning programming, since it will show you why a particular block has the output that it has. And yes, Bard can now also help you debug code that isn't quite working like you want it to.


Google admits that at this point, Bard might spit out code that "doesn’t produce the expected output" or code that's incomplete. The tech giant advises double-checking and testing its responses first. In fact, Bard can debug its own output if you find that it's returning an error: just type in "this code didn’t work, please fix it." Finally, if you're not a programming beginner but still need a bit of help, you can ask the chatbot to optimize your code by making it faster or more efficient.

The tech giant opened up access to its AI chatbot in March, though it still has a ways to go to reach the popularity that ChatGPT is enjoying today. To note, ChatGPT has the ability to write and improve existing code in several programming languages, as well, so Google giving Bard the capability is perhaps inevitable. Google has also been constantly improving Bard in other ways and previously rolled out upgrades powered by a new language model called PaLM. A few days ago, the company created an "experiment updates" page where you can see all the changes Bard has gone through as it adds even more features and performs bug fixes.


Late to the party relative to ChatGPT, but it appears to have a more intuitive breakdown of code. I might pivot to Bard for code-related issues.
 