bnew

Veteran
Joined
Nov 1, 2015
Messages
51,004
Reputation
7,865
Daps
147,240



The default LLM interface is text-in/text-out. Multi-modal LLMs (sometimes called LMMs) take text + images in and produce text out; this includes GPT-4V as well as LLaVA, Fuyu, and other open-source models.

We’ve created a base `MultiModalLLM` class to unify the interface between GPT-4V and open-source models:
✅ Supports completion endpoint with text + image docs
💡 Will soon support chat endpoint with messages

We’ve created a docs hub for multi-modal models 🏡. We’ve also created brand-new guides for both GPT-4V and LLaVA that make use of this class.

Hub: docs.llamaindex.ai/en/latest…
GPT-4V: docs.llamaindex.ai/en/latest…
LLaVA + Fuyu: docs.llamaindex.ai/en/latest…
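A rough sketch of what a unified interface like this looks like in practice. Note the class and method names below are illustrative stand-ins, not LlamaIndex's exact API (check the linked docs for the real signatures); the toy backend just echoes its inputs where a real one would call GPT-4V or LLaVA:

```python
from abc import ABC, abstractmethod
from dataclasses import dataclass
from typing import List


@dataclass
class ImageDocument:
    """Minimal stand-in for an image document (here, just a path)."""
    image_path: str


class MultiModalLLM(ABC):
    """Shared interface: text + images in, text out."""

    @abstractmethod
    def complete(self, prompt: str, image_documents: List[ImageDocument]) -> str:
        ...


class EchoMultiModal(MultiModalLLM):
    """Toy backend for illustration; a real one would wrap GPT-4V, LLaVA, Fuyu, etc."""

    def complete(self, prompt: str, image_documents: List[ImageDocument]) -> str:
        names = ", ".join(doc.image_path for doc in image_documents)
        return f"{prompt} [saw images: {names}]"


# Caller code only depends on the base class, so backends are swappable.
llm: MultiModalLLM = EchoMultiModal()
print(llm.complete("Describe these:", [ImageDocument("cat.png"), ImageDocument("dog.png")]))
```

The point of the pattern is the last two lines: application code is written against `MultiModalLLM`, so switching from a hosted model to an open-source one is a one-line change.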
 

Slimkid07

Superstar
Supporter
Joined
Aug 13, 2013
Messages
5,957
Reputation
1,774
Daps
25,662
Does anyone else have issues opening this thread?
It takes multiple tries and reloads for me.
 

bnew

Veteran
Joined
Nov 1, 2015
Messages
51,004
Reputation
7,865
Daps
147,240
Sorry.
I'm using Chrome version 118.0.5993.71 and Brave version 1.59.124 (Chromium).

How many posts per thread page are your account settings configured for? Mine is set to 15.

Look at the CPU and/or memory usage in the browser task manager (Shift+Esc).

From time to time I have to hit "End process" on the "GPU process" entry in the browser task manager, but that's because I'm used to having 200+ tabs open. I do it often so my browser doesn't crash unexpectedly when it starts using 2GB+ of VRAM.
 

bnew

Veteran
Joined
Nov 1, 2015
Messages
51,004
Reputation
7,865
Daps
147,240


Samsung launches generative AI model made for its devices, beating Apple to the punch

PUBLISHED WED, NOV 8 2023, 7:23 AM EST
Arjun Kharpal (@ArjunKharpal)

KEY POINTS
  • Samsung Electronics on Wednesday introduced a new generative artificial intelligence model called Samsung Gauss.
  • The company said the technology is designed for AI applications on devices.
  • If this tech comes to its smartphones, Samsung would be amongst the first handset makers in the world to introduce generative AI to its devices, even ahead of Apple.
The Samsung Group logo displayed on its stand during the Mobile World Congress 2023 on March 2, 2023, in Barcelona, Spain. (Photo: Joan Cros | NurPhoto | Getty Images)

Samsung Electronics on Wednesday introduced a new generative artificial intelligence model, hinting that the software will be coming to its devices soon.

Generative AI has been popularized by Microsoft-backed OpenAI and ChatGPT, which was launched last year and has sparked a race amongst the world’s technology giants to get ahead. Generative AI is underpinned by so-called large language models, which are trained on large amounts of data.

Samsung said its technology — called Samsung Gauss — is designed for AI applications on devices.


Samsung said its AI model is named after Carl Friedrich Gauss, “the legendary mathematician who established normal distribution theory, the backbone of machine learning and AI.”

The AI system has a number of features: the first, Samsung Gauss Language, is a generative language model which can help compose emails and translate content, as well as “enhance the consumer experience by enabling smarter device control when integrated into products.”

The second feature allows in-house software developers to write code quickly, while the third, Samsung Gauss Image, can generate or edit images.

“Samsung Gauss is currently used on employee productivity but will be expanded to a variety of Samsung product applications to provide new user experience in the near future,” Samsung said in a press release on Wednesday.

Samsung could be amongst the first handset makers in the world to introduce generative AI to its devices. Daniel Araujo, Samsung’s vice president of the mobile business, said on the company’s earnings call last month that generative AI technology will likely reach customers next year.
Apple, Samsung’s biggest rival in the smartphone space, is yet to announce a generative AI product of its own. Company CEO Tim Cook said last month during the Apple earnings call that the tech giant has “work going on” in generative AI, but declined to give more details.
 

bnew

Veteran
Joined
Nov 1, 2015
Messages
51,004
Reputation
7,865
Daps
147,240

Hugging Face has a two-person team developing ChatGPT-like AI models

Kyle Wiggers (@kyle_l_wiggers) / 1:15 PM EST, November 8, 2023

[Image: a happy face stuck in the sand. Image Credits: Hugging Face]

AI startup Hugging Face offers a wide range of data science hosting and development tools, including a GitHub-like portal for AI code repositories, models and datasets, as well as web dashboards to demo AI-powered applications.

But some of Hugging Face’s most impressive — and capable — tools these days come from a two-person team that was formed only this past January.

H4, as it’s called — “H4” being short for “helpful, honest, harmless and huggy” — aims to develop tools and “recipes” to enable the AI community to build AI-powered chatbots along the lines of ChatGPT. ChatGPT’s release was the catalyst for H4’s formation, in fact, according to Lewis Tunstall, a machine learning engineer at Hugging Face and one of H4’s two members.

“When ChatGPT was released by OpenAI in late 2022, we started brainstorming on what it might take to replicate its capabilities with open source libraries and models,” Tunstall told TechCrunch in an email interview. “H4’s primary research focus is around alignment, which broadly involves teaching LLMs how to behave according to feedback from humans (or even other AIs).”
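Zephyr, for example, was aligned with Direct Preference Optimization (DPO), where the training signal boils down to a single loss over pairs of chosen/rejected responses. A minimal sketch on scalar log-probabilities (the numbers are toy values, not from any real model):

```python
import math


def dpo_loss(policy_chosen: float, policy_rejected: float,
             ref_chosen: float, ref_rejected: float,
             beta: float = 0.1) -> float:
    """DPO loss for one preference pair.

    Inputs are summed log-probabilities of the chosen/rejected responses
    under the policy being trained and under a frozen reference model.
    """
    # How much the policy prefers "chosen" over "rejected", relative to the reference.
    margin = (policy_chosen - policy_rejected) - (ref_chosen - ref_rejected)
    # -log(sigmoid(beta * margin)): small when the preference margin is large.
    return -math.log(1.0 / (1.0 + math.exp(-beta * margin)))


# A policy that has learned to prefer the chosen answer incurs a lower loss.
aligned = dpo_loss(policy_chosen=-5.0, policy_rejected=-20.0,
                   ref_chosen=-10.0, ref_rejected=-10.0)
unaligned = dpo_loss(policy_chosen=-10.0, policy_rejected=-10.0,
                     ref_chosen=-10.0, ref_rejected=-10.0)
print(aligned < unaligned)  # True
```

Minimizing this loss over many human-labeled (or AI-labeled) preference pairs is what "teaching LLMs how to behave according to feedback" means concretely in this line of work.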

H4 is behind a growing number of open source large language models, including Zephyr-7B-α, a fine-tuned, chat-centric version of the eponymous Mistral 7B model recently released by French AI startup Mistral. H4 also forked Falcon-40B, a model from the Technology Innovation Institute in Abu Dhabi — modifying the model to respond more helpfully to requests in natural language.

To train its models, H4 — like other research teams at Hugging Face — relies on a dedicated cluster of more than 1,000 Nvidia A100 GPUs. Tunstall and H4’s other member, Ed Beeching, are based remotely in Europe, but receive support from several internal Hugging Face teams, among them the model testing and evaluation team.

“The small size of H4 is a deliberate choice, as it allows us to be more nimble and adapt to an ever-changing research landscape,” Beeching told TechCrunch via email. “We also have several external collaborations with groups such as LMSYS and LlamaIndex, who we collaborate with on joint releases.”


Lately, H4 has been investigating different alignment techniques and building tools to test how well techniques proposed by the community and industry really work. The team this month released a handbook containing all the source code and datasets they used to build Zephyr, and H4 plans to update the handbook with code from its future AI models as they’re released.


I asked whether H4 had any pressure from Hugging Face higher-ups to commercialize their work. The company, after all, has raised hundreds of millions of dollars from a pedigreed cohort of investors that includes Salesforce, IBM, AMD, Google, Amazon, Intel and Nvidia. Hugging Face’s last funding round valued it at $4.5 billion — reportedly more than 100 times the company’s annualized revenue.

Tunstall said that H4 doesn’t directly monetize its tools. But he acknowledged that the tools do feed into Hugging Face’s Expert Acceleration Program, Hugging Face’s enterprise-focused offering that provides guidance from Hugging Face teams to build custom AI solutions.

Asked if he sees H4 in competition with other open source AI initiatives, like EleutherAI and LAION, Beeching said that it isn’t H4’s objective. Rather, he said, the intention is to “empower” the open AI community by releasing the training code and datasets associated with H4’s chat models.

“Our work would not be possible without the many contributions from the community,” Beeching said.
 

bnew

Veteran
Joined
Nov 1, 2015
Messages
51,004
Reputation
7,865
Daps
147,240

Amazon Aims to One Up ChatGPT With 'Olympus' AI

According to internal sources, Amazon is working on a new AI dubbed 'Olympus.' Little is known about it beyond its headline goal: doubling ChatGPT's 1 trillion parameters.

By Josh Hendrickson

November 8, 2023

The Amazon logo on a building (Sundry Photography/Shutterstock.com)

OpenAI's ChatGPT has already prompted rivals to launch their own chatbots, from Bard to Grok, and now Amazon is reportedly getting in on the AI action.

Citing inside sources, Reuters reports that Amazon is investing millions to train a new large language model (LLM), dubbed Olympus, with the intent of producing an AI superior to ChatGPT and Bard. The target is to create an AI with 2 trillion parameters, roughly double ChatGPT's current 1 trillion parameters. The Information was first to report the news.

In LLMs, parameters are internal variables that help AI make decisions and predictions. These parameters work together to encode linguistic information and interpret complex language data. Adjusting parameters during the training process can enhance the AI's ability to process language and grasp new concepts.

Theoretically, with more parameters comes "greater understanding" and the ability to better connect various nuanced concepts to each other. So by training an LLM with double the parameters, Amazon could create an AI far more powerful than ChatGPT. But it isn't quite that simple: Creating an AI with more parameters is an intense process, could lead to slower results, and requires a lot more energy. Some of the downsides may outweigh the benefits, and it's hard to say how much more powerful "Olympus" really would be than ChatGPT.
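To make "parameters" concrete: for a plain fully connected network, the count is just weights plus biases per layer. A toy back-of-the-envelope sketch (the layer sizes here are made up and have nothing to do with Olympus):

```python
def mlp_param_count(layer_sizes):
    """Total parameters of a fully connected net: weights (n_in * n_out) plus biases (n_out)."""
    total = 0
    for n_in, n_out in zip(layer_sizes, layer_sizes[1:]):
        total += n_in * n_out + n_out
    return total


# A tiny 784 -> 256 -> 10 classifier:
# (784*256 + 256) + (256*10 + 10) = 200,960 + 2,570 = 203,530 parameters.
print(mlp_param_count([784, 256, 10]))  # 203530
```

Transformer LLMs count their attention and feed-forward weight matrices the same way; "2 trillion parameters" simply means the total number of these trainable values, which is why training cost and energy use grow so sharply with scale.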

This isn't Amazon's first time investing in AI. Previous rumors suggested the company delayed another LLM, Titan, as it reportedly didn't perform well compared to ChatGPT. The company is also working on rolling out other AI tools to help with shop listings, including the ability to generate backgrounds for product images and product descriptions.

If Amazon can perfect a useful LLM with a large breadth of knowledge, it could go a long way to improving one of its other AI products: Alexa.


Amazon already teased "a smarter and more conversational Alexa, powered by generative AI" at its fall event. This next-gen Alexa is "based on a new large language model (LLM) that’s been custom-built and specifically optimized for voice interactions, and the things we know our customers love—getting real-time information, efficient smart home control, and maximizing their home entertainment."

The company cautioned that it's "at the start of a journey," but a free preview will roll out to Alexa customers in the US soon, it said in September.

 

Slimkid07

Superstar
Supporter
Joined
Aug 13, 2013
Messages
5,957
Reputation
1,774
Daps
25,662
How many posts per thread page are your account settings configured for? Mine is set to 15.

Look at the CPU and/or memory usage in the browser task manager (Shift+Esc).

From time to time I have to hit "End process" on the "GPU process" entry in the browser task manager, but that's because I'm used to having 200+ tabs open. I do it often so my browser doesn't crash unexpectedly when it starts using 2GB+ of VRAM.
Thanks, I'll try that.
My account settings are set to 50 posts per page. With the exception of the occasional thread with hundreds of tweets in it (Napolean), this is the first time I've had issues opening a thread.
 