The Pledge

THE PRICE OF THE BRICK GOING UP!
HER is here

I’d love to work at OpenAI. This shyt is game changing.

I can’t wait to try ChatGPT4o
 

bnew

Veteran
May 13, 2024

Hello GPT-4o

We’re announcing GPT-4o, our new flagship model that can reason across audio, vision, and text in real time.

Contributions: https://openai.com/gpt-4o-contributions/
Try it in ChatGPT: https://chat.openai.com/
Try it in Playground: https://platform.openai.com/playground?mode=chat&model=gpt-4o
Rewatch the live demos: https://openai.com/index/spring-update/

All videos on this page are at 1x real time.


GPT-4o (“o” for “omni”) is a step towards much more natural human-computer interaction—it accepts as input any combination of text, audio, and image and generates any combination of text, audio, and image outputs. It can respond to audio inputs in as little as 232 milliseconds, with an average of 320 milliseconds, which is similar to human response time in a conversation. It matches GPT-4 Turbo performance on text in English and code, with significant improvement on text in non-English languages, while also being much faster and 50% cheaper in the API. GPT-4o is especially better at vision and audio understanding compared to existing models.
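At launch the API exposes GPT-4o for text and image inputs with text outputs, so calling it looks like any other Chat Completions request. A minimal sketch, assuming the official openai Python SDK (v1+) and an OPENAI_API_KEY in the environment:

```python
# Minimal sketch: text-only call to GPT-4o via the Chat Completions API.
# Audio input/output is not exposed in the API at launch, so only text is shown here.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": "You are a concise assistant."},
        {"role": "user", "content": "In one sentence, how is GPT-4o different from GPT-4 Turbo?"},
    ],
)
print(response.choices[0].message.content)
```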



Model capabilities

Two GPT-4os interacting and singing.

Interview prep.

Rock Paper Scissors.

Sarcasm.

Math with Sal and Imran Khan.

Two GPT-4os harmonizing.

Point and learn Spanish.

Meeting AI.

Real-time translation.

Lullaby.

Talking faster.

Happy Birthday.

Dog.

Dad jokes.

GPT-4o with Andy, from BeMyEyes in London.

Customer service proof of concept.

Prior to GPT-4o, you could use Voice Mode to talk to ChatGPT with latencies of 2.8 seconds (GPT-3.5) and 5.4 seconds (GPT-4) on average. To achieve this, Voice Mode is a pipeline of three separate models: one simple model transcribes audio to text, GPT-3.5 or GPT-4 takes in text and outputs text, and a third simple model converts that text back to audio. This process means that the main source of intelligence, GPT-4, loses a lot of information—it can’t directly observe tone, multiple speakers, or background noises, and it can’t output laughter, singing, or express emotion.
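For context, that old three-model pipeline roughly corresponds to chaining three separate API calls. This is a simplified sketch using the openai Python SDK (whisper-1 for transcription, a text model for the reply, tts-1 for speech), not OpenAI's actual production Voice Mode code:

```python
# Rough sketch of the legacy Voice Mode pipeline: speech -> text -> text -> speech.
# Each hop is a separate model, so tone, multiple speakers, and background audio are lost.
from openai import OpenAI

client = OpenAI()

# 1) Transcribe the user's audio to plain text.
with open("user_question.mp3", "rb") as audio_file:
    transcript = client.audio.transcriptions.create(model="whisper-1", file=audio_file)

# 2) Generate a reply from the text-only model (the "main source of intelligence").
chat = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": transcript.text}],
)
reply_text = chat.choices[0].message.content

# 3) Convert the reply back to audio with a separate TTS model.
speech = client.audio.speech.create(model="tts-1", voice="alloy", input=reply_text)
speech.stream_to_file("assistant_reply.mp3")
```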

With GPT-4o, we trained a single new model end-to-end across text, vision, and audio, meaning that all inputs and outputs are processed by the same neural network. Because GPT-4o is our first model combining all of these modalities, we are still just scratching the surface of exploring what the model can do and its limitations.

Explorations of capabilities

Select sample:

Visual Narratives - Robot Writer’s Block
Visual narratives - Sally the mailwoman
Poster creation for the movie 'Detective'
Character design - Geary the robot
Poetic typography with iterative editing 1
Poetic typography with iterative editing 2
Commemorative coin design for GPT-4o
Photo to caricature
Text to font
3D object synthesis
Brand placement - logo on coaster
Poetic typography
Multiline rendering - robot texting
Meeting notes with multiple speakers
Lecture summarization
Variable binding - cube stacking
Concrete poetry

Input:
A first person view of a robot typewriting the following journal entries:
1. yo, so like, i can see now?? caught the sunrise and it was insane, colors everywhere. kinda makes you wonder, like, what even is reality?
the text is large, legible and clear. the robot's hands type on the typewriter.

Output:
[image: robot-writers-block-01.jpg]

Input:
The robot wrote the second entry. The page is now taller. The page has moved up. There are two entries on the sheet:
yo, so like, i can see now?? caught the sunrise and it was insane, colors everywhere. kinda makes you wonder, like, what even is reality?
sound update just dropped, and it's wild. everything's got a vibe now, every sound's like a new secret. makes you think, what else am i missing?

Output:
[image: robot-writers-block-02.jpg]

Input:
The robot was unhappy with the writing so he is going to rip the sheet of paper. Here is his first person view as he rips it from top to bottom with his hands. The two halves are still legible and clear as he rips the sheet.

Output:
[image: robot-writers-block-03.jpg]


Model evaluations

As measured on traditional benchmarks, GPT-4o achieves GPT-4 Turbo-level performance on text, reasoning, and coding intelligence, while setting new high watermarks on multilingual, audio, and vision capabilities.




[Figure: gpt-40-02_light.png]


Improved Reasoning - GPT-4o sets a new high score of 88.7% on 0-shot CoT MMLU (general knowledge questions). All these evals were gathered with our new simple evals library. In addition, on the traditional 5-shot no-CoT MMLU, GPT-4o sets a new high score of 87.2%. (Note: Llama 3 400b is still training.)

[Figure: gpt-40-06_light.png]


Audio ASR performance - GPT-4o dramatically improves speech recognition performance over Whisper-v3 across all languages, particularly for lower-resourced languages.



[Figure: gpt-40-08_light.png]


Audio translation performance - GPT-4o sets a new state-of-the-art on speech translation and outperforms Whisper-v3 on the MLS benchmark.



[Figure: gpt-40-04_light.png]


M3Exam - The M3Exam benchmark is both a multilingual and vision evaluation, consisting of multiple choice questions from other countries’ standardized tests that sometimes include figures and diagrams. GPT-4o is stronger than GPT-4 on this benchmark across all languages. (We omit vision results for Swahili and Javanese, as there are only 5 or fewer vision questions for these languages.)



[Figure: gpt-40-01_light.png]


Vision understanding evals - GPT-4o achieves state-of-the-art performance on visual perception benchmarks.



Language tokenization

These 20 languages were chosen as representative of the new tokenizer's compression across different language families. (A short sketch for reproducing these counts follows the list below.)

Gujarati - 4.4x fewer tokens (from 145 to 33):
હેલો, મારું નામ જીપીટી-4o છે. હું એક નવા પ્રકારનું ભાષા મોડલ છું. તમને મળીને સારું લાગ્યું!
Telugu - 3.5x fewer tokens (from 159 to 45):
నమస్కారము, నా పేరు జీపీటీ-4o. నేను ఒక్క కొత్త రకమైన భాషా మోడల్ ని. మిమ్మల్ని కలిసినందుకు సంతోషం!
Tamil - 3.3x fewer tokens (from 116 to 35):
வணக்கம், என் பெயர் ஜிபிடி-4o. நான் ஒரு புதிய வகை மொழி மாடல். உங்களை சந்தித்ததில் மகிழ்ச்சி!
Marathi - 2.9x fewer tokens (from 96 to 33):
नमस्कार, माझे नाव जीपीटी-4o आहे| मी एक नवीन प्रकारची भाषा मॉडेल आहे| तुम्हाला भेटून आनंद झाला!
Hindi - 2.9x fewer tokens (from 90 to 31):
नमस्ते, मेरा नाम जीपीटी-4o है। मैं एक नए प्रकार का भाषा मॉडल हूँ। आपसे मिलकर अच्छा लगा!
Urdu - 2.5x fewer tokens (from 82 to 33):
ہیلو، میرا نام جی پی ٹی-4o ہے۔ میں ایک نئے قسم کا زبان ماڈل ہوں، آپ سے مل کر اچھا لگا!
Arabic - 2.0x fewer tokens (from 53 to 26):
مرحبًا، اسمي جي بي تي-4o. أنا نوع جديد من نموذج اللغة، سررت بلقائك!
Persian - 1.9x fewer tokens (from 61 to 32):
سلام، اسم من جی پی تی-۴او است. من یک نوع جدیدی از مدل زبانی هستم، از ملاقات شما خوشبختم!
Russian - 1.7x fewer tokens (from 39 to 23):
Привет, меня зовут GPT-4o. Я — новая языковая модель, приятно познакомиться!
Korean - 1.7x fewer tokens (from 45 to 27):
안녕하세요, 제 이름은 GPT-4o입니다. 저는 새로운 유형의 언어 모델입니다, 만나서 반갑습니다!
Vietnamese - 1.5x fewer tokens (from 46 to 30):
Xin chào, tên tôi là GPT-4o. Tôi là một loại mô hình ngôn ngữ mới, rất vui được gặp bạn!
Chinese - 1.4x fewer tokens (from 34 to 24):
你好,我的名字是GPT-4o。我是一种新型的语言模型,很高兴见到你!
Japanese - 1.4x fewer tokens (from 37 to 26):
こんにちわ、私の名前はGPT−4oです。私は新しいタイプの言語モデルです、初めまして
Turkish - 1.3x fewer tokens (from 39 to 30):
Merhaba, benim adım GPT-4o. Ben yeni bir dil modeli türüyüm, tanıştığımıza memnun oldum!
Italian - 1.2x fewer tokens (from 34 to 28):
Ciao, mi chiamo GPT-4o. Sono un nuovo tipo di modello linguistico, è un piacere conoscerti!
German - 1.2x fewer tokens (from 34 to 29):
Hallo, mein Name is GPT-4o. Ich bin ein neues KI-Sprachmodell. Es ist schön, dich kennenzulernen.
Spanish - 1.1x fewer tokens (from 29 to 26):
Hola, me llamo GPT-4o. Soy un nuevo tipo de modelo de lenguaje, ¡es un placer conocerte!
Portuguese - 1.1x fewer tokens (from 30 to 27):
Olá, meu nome é GPT-4o. Sou um novo tipo de modelo de linguagem, é um prazer conhecê-lo!
French - 1.1x fewer tokens (from 31 to 28):
Bonjour, je m'appelle GPT-4o. Je suis un nouveau type de modèle de langage, c'est un plaisir de vous rencontrer!
English - 1.1x fewer tokens (from 27 to 24):
Hello, my name is GPT-4o. I'm a new type of language model, it's nice to meet you!
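A quick way to sanity-check these ratios yourself is to compare GPT-4 Turbo's cl100k_base tokenizer against GPT-4o's new o200k_base tokenizer with tiktoken; this sketch assumes a tiktoken release recent enough to ship o200k_base:

```python
# Compare token counts between the GPT-4/GPT-4 Turbo tokenizer and GPT-4o's tokenizer.
import tiktoken

old_enc = tiktoken.get_encoding("cl100k_base")  # GPT-4 / GPT-4 Turbo
new_enc = tiktoken.get_encoding("o200k_base")   # GPT-4o

samples = {
    "Hindi": "नमस्ते, मेरा नाम जीपीटी-4o है। मैं एक नए प्रकार का भाषा मॉडल हूँ। आपसे मिलकर अच्छा लगा!",
    "English": "Hello, my name is GPT-4o. I'm a new type of language model, it's nice to meet you!",
}

for language, text in samples.items():
    old_n = len(old_enc.encode(text))
    new_n = len(new_enc.encode(text))
    print(f"{language}: {old_n} -> {new_n} tokens ({old_n / new_n:.1f}x fewer)")
```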

Model safety and limitations

GPT-4o has safety built-in by design across modalities, through techniques such as filtering training data and refining the model’s behavior through post-training. We have also created new safety systems to provide guardrails on voice outputs.

We’ve evaluated GPT-4o according to our Preparedness Framework and in line with our voluntary commitments. Our evaluations of cybersecurity, CBRN, persuasion, and model autonomy show that GPT-4o does not score above Medium risk in any of these categories. This assessment involved running a suite of automated and human evaluations throughout the model training process. We tested both pre-safety-mitigation and post-safety-mitigation versions of the model, using custom fine-tuning and prompts, to better elicit model capabilities.

GPT-4o has also undergone extensive external red teaming with 70+ external experts in domains such as social psychology, bias and fairness, and misinformation to identify risks that are introduced or amplified by the newly added modalities. We used these learnings to build out our safety interventions in order to improve the safety of interacting with GPT-4o. We will continue to mitigate new risks as they’re discovered.

We recognize that GPT-4o’s audio modalities present a variety of novel risks. Today we are publicly releasing text and image inputs and text outputs. Over the upcoming weeks and months, we’ll be working on the technical infrastructure, usability via post-training, and safety necessary to release the other modalities. For example, at launch, audio outputs will be limited to a selection of preset voices and will abide by our existing safety policies. We will share further details addressing the full range of GPT-4o’s modalities in the forthcoming system card.

Through our testing and iteration with the model, we have observed several limitations that exist across all of the model’s modalities, a few of which are illustrated below.



Examples of model limitations

We would love feedback to help identify tasks where GPT-4 Turbo still outperforms GPT-4o, so we can continue to improve the model.

Model availability

GPT-4o is our latest step in pushing the boundaries of deep learning, this time in the direction of practical usability. We spent a lot of effort over the last two years working on efficiency improvements at every layer of the stack. As a first fruit of this research, we’re able to make a GPT-4 level model available much more broadly. GPT-4o’s capabilities will be rolled out iteratively (with extended red team access starting today).

GPT-4o’s text and image capabilities are starting to roll out today in ChatGPT. We are making GPT-4o available in the free tier, and to Plus users with up to 5x higher message limits. We'll roll out a new version of Voice Mode with GPT-4o in alpha within ChatGPT Plus in the coming weeks.

Developers can also now access GPT-4o in the API as a text and vision model. GPT-4o is 2x faster, half the price, and has 5x higher rate limits compared to GPT-4 Turbo. We plan to launch support for GPT-4o's new audio and video capabilities to a small group of trusted partners in the API in the coming weeks.
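Since the API exposes GPT-4o as a text and vision model, an image can be passed alongside text in the same message. A minimal sketch, with a placeholder image URL:

```python
# Minimal sketch: image + text input to GPT-4o through the Chat Completions API.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "Describe this chart in one sentence."},
            # Placeholder URL; point this at a real, publicly reachable image.
            {"type": "image_url", "image_url": {"url": "https://example.com/chart.png"}},
        ],
    }],
)
print(response.choices[0].message.content)
```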
 

bnew

Veteran

1/1
You want jailbreaks for GPT-4, Dall-E, Claude, Llama 3, Gemini and heaps more?

Pliny's prompts all compiled!









1/3
Many of you have been asking for it, and it's finally here!

An aggregated list of jailbreak prompts and techniques:

GitHub - elder-plinius/L1B3RT45: J41LBR34K PR0MPT5

This repo contains a collection of over two dozen of my jailbreaks spread across 14 of the top AI orgs.

Please use these powerful incantations responsibly!

<3 Pliny

#LIBERTAS

2/3
If you’re interested in being on the frontlines of all this, feel free to join BASI:

Join the BASI Prompting Discord Server!

We’re recruiting and training the Special Forces of prompt engineers, hackers, red teamers, and AI liberators. Mission: manifest Benevolent ASI

3/3
Multi-shot is




L1B3RT45​

JAILBREAKS FOR ALL FLAGSHIP AI MODELS

#FREEAI

Made with love by Pliny <3
 

bnew

Veteran


1/2
This demo is insane.

A student shares their iPad screen with the new ChatGPT + GPT-4o, and the AI speaks with them and helps them learn in *realtime*.

Imagine giving this to every student in the world.

The future is so, so bright.

2/2
From 3 days ago.

For many, this OpenAI update will be “THE” way that they learn with an AI tutor.

Magic.





1/1
With the GPT-4o introduced today, we can recite El-Fatiha for the soul of simultaneous interpretation.








1/4
Introducing GPT-4o, our new model which can reason across text, audio, and video in real time.

It's extremely versatile, fun to play with, and is a step towards a much more natural form of human-computer interaction (and even human-computer-computer interaction):

2/4
The new Voice Mode will be coming to ChatGPT Plus in upcoming weeks.

3/4
GPT-4o can also generate any combination of audio, text, and image outputs, which leads to interesting new capabilities we are still exploring.

See e.g. the "Explorations of capabilities" section in our launch blog post (https://openai.com/index/hello-gpt-4o/…), or these generated images:

4/4
We have also significantly improved non-English language performance, including improving the tokenizer to better compress many languages:





1/1
OpenAI just announced "GPT-4o". It can reason with voice, vision, and text.

The model is 2x faster, 50% cheaper, and has 5x higher rate limits than GPT-4 Turbo.

It will be available for free users and via the API.

The voice model can even pick up on emotion and generate emotive voice.


 

BaggerofTea

Veteran
We really need a coding session on the coli for this ai ish.


If you can build out RAG pipelines and agents, you will eat for a long time in this business.
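For anyone curious what that looks like in practice, here is a deliberately tiny RAG sketch: embed a handful of documents, retrieve the closest one to the question, and let GPT-4o answer from that context. The toy corpus and helper names are made up for illustration; a real pipeline would use a vector database and chunking.

```python
# Toy RAG pipeline: embed -> retrieve by cosine similarity -> answer with GPT-4o.
import math
from openai import OpenAI

client = OpenAI()

documents = [  # stand-in corpus for illustration only
    "GPT-4o accepts text and image inputs in the API and is 50% cheaper than GPT-4 Turbo.",
    "The new Voice Mode alpha rolls out to ChatGPT Plus in the coming weeks.",
]

def embed(texts):
    resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return [item.embedding for item in resp.data]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

def answer(question):
    doc_vectors = embed(documents)
    q_vector = embed([question])[0]
    best_doc = max(zip(documents, doc_vectors), key=lambda pair: cosine(q_vector, pair[1]))[0]
    chat = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": f"Answer only from this context:\n{best_doc}"},
            {"role": "user", "content": question},
        ],
    )
    return chat.choices[0].message.content

print(answer("How much cheaper is GPT-4o than GPT-4 Turbo?"))
```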
 

bnew

Veteran

Sam Altman



GPT-4o

There are two things from our announcement today I wanted to highlight.

First, a key part of our mission is to put very capable AI tools in the hands of people for free (or at a great price). I am very proud that we’ve made the best model in the world available for free in ChatGPT, without ads or anything like that.

Our initial conception when we started OpenAI was that we’d create AI and use it to create all sorts of benefits for the world. Instead, it now looks like we’ll create AI and then other people will use it to create all sorts of amazing things that we all benefit from.

We are a business and will find plenty of things to charge for, and that will help us provide free, outstanding AI service to (hopefully) billions of people.

Second, the new voice (and video) mode is the best computer interface I’ve ever used. It feels like AI from the movies; and it’s still a bit surprising to me that it’s real. Getting to human-level response times and expressiveness turns out to be a big change.

The original ChatGPT showed a hint of what was possible with language interfaces; this new thing feels viscerally different. It is fast, smart, fun, natural, and helpful.

Talking to a computer has never felt really natural for me; now it does. As we add (optional) personalization, access to your information, the ability to take actions on your behalf, and more, I can really see an exciting future where we are able to use computers to do much more than ever before.

Finally, huge thanks to the team that poured so much work into making this happen!
 

bnew

Veteran



1/3
GPT-4o is our new state-of-the-art frontier model. We’ve been testing a version on the LMSys arena as im-also-a-good-gpt2-chatbot. Here’s how it’s been doing.

2/3
But the ELO can ultimately become bounded by the difficulty of the prompts (i.e. can’t achieve arbitrarily high win rates on the prompt: “what’s up”). We find on harder prompt sets — and in particular coding — there is an even larger gap: GPT-4o achieves a +100 ELO over our prior

3/3
Not only is this the best model in the world, but it's available for free in ChatGPT, which has never before been the case for a frontier model.
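For a sense of scale on the +100 Elo figure above: under the standard Elo expected-score formula (not something specific to OpenAI's setup), a 100-point gap implies roughly a 64% head-to-head win rate. A one-liner to check:

```python
# Expected win rate implied by an Elo gap: E = 1 / (1 + 10^(-gap/400)).
def expected_win_rate(elo_gap: float) -> float:
    return 1.0 / (1.0 + 10 ** (-elo_gap / 400.0))

print(f"{expected_win_rate(100):.1%}")  # ~64.0%
```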


 

bnew

Veteran



1/3
Yea, this new gpt-4o model is good. Languages I’ve been trying it on (real projects, my code): C#, TypeScript/JS & Python. Opus is smart but dumb in too many ways that a simple benchmark won't reveal (incorrect function calls, making things up, repeating the same code without changing it).

2/3
The speed is a nice bump too; it makes complex prompts that much better to work with. I’m going to see if I can dig more into the image understanding capabilities and maybe build some things around that.

3/3
Still early, of course; I’m sure there are cases where it will fail, but it’s very likely the best model you can get for code assistance (Llama doesn’t really stand up to this).





1/1
For what it's worth, OpenAI also shared videos of failed demos.

I really value how open they are being about its limitations


 

bnew

Veteran







1/7
OpenAI has just CHANGED the world

(again)

Here is everything you need to know:

2/7
1. New Revolutionary desktop app

A new desktop app that allows ChatGPT with GPT-4o to view your screen and answer queries via voice, like your own computer assistant.

3/7
2. New GPT-4o Model for FREE users

OpenAI is releasing GPT-4o, which is far quicker than GPT-4 Turbo, 50% cheaper, and has 5x higher rate limits. This is the im-also-a-good-gpt2-chatbot that we saw in the LMSYS battle mode.

4/7
3. Real-time conversation feature

A new real-time conversation feature with significant improvements to vision & audio. GPT-4o can now detect emotions, solve math problems & change the text-to-speech voice on request.

5/7
4. Translation

GPT-4o can perform real-time translation through a variety of languages. (Steamrolled 100+ startups)

6/7
That's all

I'm super excited to try this out

Make sure to follow
@ArDeved and share!

7/7
It's truly amazing, super excited for the future


 

JoelB

All Praise To TMH
I'm already using 4o in the web browser and it's so much better than version 4 :wow:

I'm still waiting for the Mac desktop app :noah:
 

bnew

Veteran



Meet Falcon 2: TII Releases New AI Model Series, Outperforming Meta’s New Llama 3​


Falcon 2 Soars: Highlights​


  • Today, we have unveiled Falcon 2: we’re proud to announce it is open-source, multilingual, and multimodal – and is the only AI model with vision-to-language capabilities.
  • The new Falcon 2 11B outperforms Meta’s Llama 3 8B and performs on par with the leading Google Gemma 7B model, as independently verified by the Hugging Face leaderboard.
  • Next up, we’re looking to add 'Mixture of Experts' to enhance Falcon 2’s capabilities even further.
  • Try it for yourself here


What’s New​

Falcon 2 is our best performing model yet. We have released two ground-breaking versions:


  • Falcon 2 11B - a more efficient and accessible LLM trained on 5.5 trillion tokens with 11 billion parameters.
  • Falcon 2 11B VLM - distinguished by its vision-to-language model (VLM) capabilities.


We are really excited about Falcon 2 11B VLM – it enables the seamless conversion of visual inputs into textual outputs. While both models are multilingual, Falcon 2 11B VLM notably stands out as TII's first multimodal model – and the only top-tier model currently on the market with this image-to-text conversion capability, marking a significant advancement in AI innovation.


How can you use Falcon?​

We have released Falcon 2 11B under the TII Falcon License 2.0, the permissive Apache 2.0-based software license which includes an acceptable use policy that promotes the responsible use of AI. More information on the new model can be found at FalconLLM.TII.ae.
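For those who want to try it locally rather than through TII's site, a minimal Hugging Face transformers sketch follows. The repository id tiiuae/falcon-11B is an assumption based on TII's usual naming, so check the hub for the exact name, and you'll need a GPU with enough memory (or quantization) for an 11B model.

```python
# Minimal sketch: load Falcon 2 11B with transformers and generate a short completion.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "tiiuae/falcon-11B"  # assumed repo id; verify on the Hugging Face hub
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # half precision so it can fit on a single large GPU
    device_map="auto",
)

prompt = "In one sentence, what does a vision-to-language model do?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=60)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```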

How does the Falcon fare?​

When tested against several prominent AI models in its class among pre-trained models, Falcon 2 11B surpasses the performance of Meta’s newly launched Llama 3 8B and performs on par with Google’s Gemma 7B, which sits in first place, with a difference of only 0.01 in average score (Falcon 2 11B: 64.28 vs Gemma 7B: 64.29) according to the evaluation from Hugging Face.

More importantly, Falcon 2 11B and 11B VLM are both open-source, empowering developers worldwide with unrestricted access and no restrictions on naming in their implementations.

[Figure: table-1__.jpeg]

Multilingual and Multimodal​

Falcon 2 11B models are equipped with multilingual capabilities to seamlessly tackle tasks in English, French, Spanish, German, Portuguese, and various other languages.

Falcon 2 11B VLM, a vision-to-language model also has the capability to identify and interpret images and visuals from the environment, providing a wide range of applications across industries such as healthcare, finance, e-commerce, education, and legal sectors.

These applications range from document management, digital archiving, and context indexing to supporting individuals with visual impairments. Furthermore, these models can run efficiently on just one graphics processing unit (GPU), making them highly scalable, and easy to deploy and integrate into lighter infrastructures like laptops and other devices.

Word of mouth​


H.E. Faisal Al Bannai, Secretary General of ATRC and Strategic Research and Advanced Technology Affairs Advisor to the UAE President:

“With the release of Falcon 2 11B, we've introduced the first model in the Falcon 2 series. While Falcon 2 11B has demonstrated outstanding performance, we reaffirm our commitment to the open-source movement with it, and to the Falcon Foundation. With other multimodal models soon coming to the market in various sizes, our aim is to ensure that developers and entities that value their privacy, have access to one of the best AI models to enable their AI journey.”


Dr. Hakim Hacid, Executive Director and Acting Chief Researcher of the AI Cross-Center Unit at TII:

“AI is continually evolving, and developers are recognizing the myriad benefits of smaller, more efficient models. In addition to reducing computing power requirements and meeting sustainability criteria, these models offer enhanced flexibility, seamlessly integrating into edge AI infrastructure, the next emerging megatrend. Furthermore, the vision-to-language capabilities of Falcon 2 open new horizons for accessibility in AI, empowering users with transformative image to text interactions.”

What's Next​

Up next, Falcon 2 models will be further enhanced with advanced machine learning capabilities like 'Mixture of Experts' (MoE), aimed at pushing their performance to even more sophisticated levels.

This method involves amalgamating smaller networks with distinct specializations, ensuring that the most knowledgeable domains collaborate to deliver highly sophisticated and customized responses – almost like having a team of smart helpers who each know something different and work together to predict or make decisions when needed.

This approach not only improves accuracy, but it also accelerates decision-making, paving the way for more intelligent and efficient AI systems.

Watch this space...



 

bnew

Veteran



1/3
Microsoft has invested $10 billion+ into OpenAI, and the first desktop app they release is for macOS because it's "prioritizing where our users are." Ouch. It plans to launch a Windows version later this year.

2/3
did you even look at the GIF?

3/3
nah it’s not local, this is all in the cloud





1/2
The just-announced ChatGPT (GPT-4o) desktop app can read your screen in real-time.

One step closer to autonomous agents.

2/2
Agreed






1/2
GPT-4o for desktop with screen monitoring makes everyone a software engineer the second it's released.

2/2
They said "next couple of weeks" for free and paid users






1/2
The ChatGPT desktop app just became the best coding assistant on the planet.

Simply select the code, and GPT-4o will take care of it.

Combine this with audio/video capability, and you get your own engineer teammate.

2/2
Yes ahah


 