Micky Mikey

Veteran
Supporter
Joined
Sep 27, 2013
Messages
15,999
Reputation
2,972
Daps
89,254


it's practically at toddler stage, maybe even infancy. have you seen the research papers put out to improve the underlying tech? there's still a lot of room to improve inference speed, context window size, and accuracy. multi-modality, input token size, reasoning, etc.



they dropping all these llms for free cause they know it's just the beginning.:banderas:

There seems to be a new research paper every day, but is it being applied? It would seem Google would have incorporated all of these new findings in their new model. Their multi-modality is nice, but it's just a small step above ChatGPT4. I'm being a bit pessimistic to temper my expectations. But there are reasons to believe that any new model will not be as revolutionary as GPT4 was to GPT3. I recall Sam Altman saying there are diminishing returns on scaling up model size. Even Bill Gates seems to be thinking along those same lines.



I'm not sure about Open Source models but I can't imagine them catching up to ChatGPT4 or Gemini, much less surpassing them.
And yes I saw that post on the Singularity Reddit. I'll believe it when I see it. As with Crypto there are a lot of Hypemen in this space who're constantly over-promising and under-delivering.
 

bnew

Veteran
Joined
Nov 1, 2015
Messages
57,369
Reputation
8,499
Daps
160,085
There seems to be a new research paper every day, but is it being applied? It would seem Google would have incorporated all of these new findings in their new model. Their multi-modality is nice, but it's just a small step above ChatGPT4. I'm being a bit pessimistic to temper my expectations. But there are reasons to believe that any new model will not be as revolutionary as GPT4 was to GPT3. I recall Sam Altman saying there are diminishing returns on scaling up model size. Even Bill Gates seems to be thinking along those same lines.



I'm not sure about Open Source models but I can't imagine them catching up to ChatGPT4 or Gemini, much less surpassing them.
And yes I saw that post on the Singularity Reddit. I'll believe it when I see it. As with Crypto there are a lot of Hypemen in this space who're constantly over-promising and under-delivering.


taken in totality, these models have not surpassed GPT-4, but it's already been proven that fine-tuned models can absolutely surpass GPT-4 in specific domains.
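for anyone curious what that kind of domain fine-tuning actually looks like, here's a minimal, hedged sketch using Hugging Face transformers + peft (LoRA). the base model, dataset file, and hyperparameters are placeholders for illustration, not anything from the posts above:

```python
# Minimal LoRA fine-tuning sketch (illustrative only; model/dataset names are placeholders).
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer, TrainingArguments,
                          Trainer, DataCollatorForLanguageModeling)
from peft import LoraConfig, get_peft_model

base = "mistralai/Mistral-7B-v0.1"          # any open base model
tok = AutoTokenizer.from_pretrained(base)
tok.pad_token = tok.eos_token

model = AutoModelForCausalLM.from_pretrained(base, device_map="auto")
model = get_peft_model(model, LoraConfig(r=16, lora_alpha=32,
                                         target_modules=["q_proj", "v_proj"],
                                         task_type="CAUSAL_LM"))

# Hypothetical in-domain corpus: a JSONL file with a "text" column.
data = load_dataset("json", data_files="domain_corpus.jsonl")["train"]
data = data.map(lambda ex: tok(ex["text"], truncation=True, max_length=1024),
                remove_columns=data.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="out", per_device_train_batch_size=1,
                           gradient_accumulation_steps=8, num_train_epochs=1,
                           learning_rate=2e-4),
    train_dataset=data,
    data_collator=DataCollatorForLanguageModeling(tok, mlm=False),
)
trainer.train()
model.save_pretrained("out/lora-adapter")   # small adapter, not a full copy of the base weights
```

the point is just that a narrow dataset plus a cheap adapter like this is how people beat GPT-4 on single domains without anywhere near its compute.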

 

bnew

Veteran
Joined
Nov 1, 2015
Messages
57,369
Reputation
8,499
Daps
160,085

Computer Science > Computer Vision and Pattern Recognition​

[Submitted on 9 Nov 2023 (v1), last revised 13 Nov 2023 (this version, v2)]

Mirasol3B: A Multimodal Autoregressive model for time-aligned and contextual modalities​

AJ Piergiovanni, Isaac Noble, Dahun Kim, Michael S. Ryoo, Victor Gomes, Anelia Angelova
One of the main challenges of multimodal learning is the need to combine heterogeneous modalities (e.g., video, audio, text). For example, video and audio are obtained at much higher rates than text and are roughly aligned in time. They are often not synchronized with text, which comes as a global context, e.g., a title, or a description. Furthermore, video and audio inputs are of much larger volumes, and grow as the video length increases, which naturally requires more compute dedicated to these modalities and makes modeling of long-range dependencies harder.
We here decouple the multimodal modeling, dividing it into separate, focused autoregressive models, processing the inputs according to the characteristics of the modalities. We propose a multimodal model, called Mirasol3B, consisting of an autoregressive component for the time-synchronized modalities (audio and video), and an autoregressive component for the context modalities which are not necessarily aligned in time but are still sequential. To address the long-sequences of the video-audio inputs, we propose to further partition the video and audio sequences in consecutive snippets and autoregressively process their representations. To that end, we propose a Combiner mechanism, which models the audio-video information jointly within a timeframe. The Combiner learns to extract audio and video features from raw spatio-temporal signals, and then learns to fuse these features producing compact but expressive representations per snippet.
Our approach achieves the state-of-the-art on well established multimodal benchmarks, outperforming much larger models. It effectively addresses the high computational demand of media inputs by both learning compact representations, controlling the sequence length of the audio-video feature representations, and modeling their dependencies in time.
Subjects:Computer Vision and Pattern Recognition (cs.CV)
Cite as:arXiv:2311.05698 [cs.CV]
(or arXiv:2311.05698v2 [cs.CV] for this version)
[2311.05698] Mirasol3B: A Multimodal Autoregressive model for time-aligned and contextual modalities

Submission history​

From: Aj Piergiovanni [view email]
[v1] Thu, 9 Nov 2023 19:15:12 UTC (949 KB)
[v2] Mon, 13 Nov 2023 14:53:10 UTC (949 KB)
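not the authors' code, but a rough PyTorch sketch of the snippet-partitioning + Combiner idea the abstract describes: chop the time-aligned audio/video tokens into consecutive snippets, fuse each snippet into a few compact latents, then feed the short latent sequence to an autoregressive model. all module sizes and names below are assumptions for illustration.

```python
import torch
import torch.nn as nn

class Combiner(nn.Module):
    """Fuse one snippet of audio+video tokens into m compact latent vectors."""
    def __init__(self, dim=512, m=8, heads=8, layers=2):
        super().__init__()
        self.latents = nn.Parameter(torch.randn(m, dim) * 0.02)   # learned query latents
        enc_layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, num_layers=layers)
        self.m = m

    def forward(self, snippet_tokens):                  # (B, T_snippet, dim) audio+video tokens
        b = snippet_tokens.size(0)
        queries = self.latents.unsqueeze(0).expand(b, -1, -1)
        x = torch.cat([queries, snippet_tokens], dim=1)  # prepend latents; they attend to the snippet
        x = self.encoder(x)
        return x[:, : self.m]                            # (B, m, dim) compact per-snippet representation

def partition_and_combine(av_tokens, combiner, snippet_len=64):
    """Split a long audio/video token stream into consecutive snippets and combine each one."""
    b, t, d = av_tokens.shape
    t = (t // snippet_len) * snippet_len                 # drop the ragged tail for simplicity
    snippets = av_tokens[:, :t].reshape(b, -1, snippet_len, d)
    per_snippet = [combiner(snippets[:, i]) for i in range(snippets.size(1))]
    return torch.cat(per_snippet, dim=1)                 # (B, num_snippets * m, dim) -> AR model

# Example: 640 fused audio/video tokens collapse to 10 snippets x 8 latents = 80 tokens.
combiner = Combiner()
av = torch.randn(2, 640, 512)
print(partition_and_combine(av, combiner).shape)          # torch.Size([2, 80, 512])
```

that compression of the long audio/video stream into a short latent sequence is the part that keeps the compute manageable as videos get longer.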




 

bnew

Veteran
Joined
Nov 1, 2015
Messages
57,369
Reputation
8,499
Daps
160,085

Meta’s new AI image generator was trained on 1.1 billion Instagram and Facebook photos

"Imagine with Meta AI" turns prompts into images, trained using public Facebook data.

BENJ EDWARDS - 12/6/2023, 4:52 PM

Three images generated by "Imagine with Meta AI" using the Emu AI model. (Meta | Benj Edwards)
On Wednesday, Meta released a free standalone AI image-generator website, "Imagine with Meta AI," based on its Emu image-synthesis model. Meta used 1.1 billion publicly visible Facebook and Instagram images to train the AI model, which can render a novel image from a written prompt. Previously, Meta's version of this technology—using the same data—was only available in messaging and social networking apps such as Instagram.




If you're on Facebook or Instagram, it's quite possible a picture of you (or one that you took) helped train Emu. In a way, the old saying, "If you're not paying for it, you are the product," has taken on a whole new meaning. That said, as of 2016 Instagram users were already uploading over 95 million photos a day, so the dataset Meta used to train its AI model was only a small subset of its overall photo library.

Since Meta says it only uses publicly available photos for training, setting your photos private on Instagram or Facebook should prevent their inclusion in the company's future AI model training (unless it changes that policy, of course).

Imagine with Meta AI

Example prompts rendered by Meta Emu on the "Imagine with Meta AI" website (images: Meta | Benj Edwards):

  • "a muscular barbarian with weapons beside a CRT television set, cinematic, 8K, studio lighting"
  • "a cat in a car holding a can of beer"
  • "a flaming cheeseburger"
  • "a photorealistic Mickey Mouse on the moon in a spacesuit"
  • "a handsome man"
  • "the ultimate gaming PC with 1,000 RGB lights"
  • "a man holding a sign that says 'Ars Technica'"
  • a complex prompt involving Christmas stockings and a cave
  • "photorealistic vintage computer collector nerd in a computer lab, bright psychedelic technicolor swirls"
  • "an embroidered Santa Claus"
  • "A teddy bear on a skateboard"
  • "a beautiful queen of the universe"
 

bnew

Veteran
Joined
Nov 1, 2015
Messages
57,369
Reputation
8,499
Daps
160,085










Relightable Gaussian Codec Avatars​

Published on Dec 6
·Featured in Daily Papers on Dec 6
Authors:
Shunsuke Saito, Gabriel Schwartz, Tomas Simon, Junxuan Li, Giljoo Nam

Abstract​

The fidelity of relighting is bounded by both geometry and appearance representations. For geometry, both mesh and volumetric approaches have difficulty modeling intricate structures like 3D hair geometry. For appearance, existing relighting models are limited in fidelity and often too slow to render in real-time with high-resolution continuous environments. In this work, we present Relightable Gaussian Codec Avatars, a method to build high-fidelity relightable head avatars that can be animated to generate novel expressions. Our geometry model based on 3D Gaussians can capture 3D-consistent sub-millimeter details such as hair strands and pores on dynamic face sequences. To support diverse materials of human heads such as the eyes, skin, and hair in a unified manner, we present a novel relightable appearance model based on learnable radiance transfer. Together with global illumination-aware spherical harmonics for the diffuse components, we achieve real-time relighting with spatially all-frequency reflections using spherical Gaussians. This appearance model can be efficiently relit under both point light and continuous illumination. We further improve the fidelity of eye reflections and enable explicit gaze control by introducing relightable explicit eye models. Our method outperforms existing approaches without compromising real-time performance. We also demonstrate real-time relighting of avatars on a tethered consumer VR headset, showcasing the efficiency and fidelity of our avatars.
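not from the paper's code, but a small numerical sketch of the kind of spherical-harmonics diffuse term the abstract mentions: evaluate the first 9 real SH basis functions at the surface normal and dot them with per-channel environment coefficients. the coefficient values below are made-up placeholders, and the real model adds learnable radiance transfer and spherical Gaussians on top of this.

```python
import numpy as np

def sh_basis_l2(n):
    """Real spherical-harmonics basis up to l=2 (9 terms) at a unit normal n = (x, y, z)."""
    x, y, z = n
    return np.array([
        0.282095,                    # l=0
        0.488603 * y,                # l=1, m=-1
        0.488603 * z,                # l=1, m=0
        0.488603 * x,                # l=1, m=1
        1.092548 * x * y,            # l=2, m=-2
        1.092548 * y * z,            # l=2, m=-1
        0.315392 * (3 * z * z - 1),  # l=2, m=0
        1.092548 * x * z,            # l=2, m=1
        0.546274 * (x * x - y * y),  # l=2, m=2
    ])

def diffuse_sh_shade(normal, env_coeffs):
    """Diffuse color as SH basis at the normal dotted with (9, 3) RGB lighting coefficients."""
    n = np.asarray(normal, dtype=float)
    n /= np.linalg.norm(n)
    return np.clip(sh_basis_l2(n) @ env_coeffs, 0.0, None)   # (3,) RGB

# Example with arbitrary placeholder coefficients: a normal facing "up".
rng = np.random.default_rng(0)
env = rng.normal(scale=0.3, size=(9, 3))
env[0] = 1.0                               # constant (ambient) term dominates
print(diffuse_sh_shade([0.0, 0.0, 1.0], env))
```

the cheapness of that per-pixel dot product is why low-order SH is a common choice for the diffuse part of real-time relighting.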

 
Last edited:

bnew

Veteran
Joined
Nov 1, 2015
Messages
57,369
Reputation
8,499
Daps
160,085


Google’s Demo For ChatGPT Rival Criticized By Some Employees

Google’s newly unveiled AI technology doesn’t work quite as well as some people assumed, but the company says more updates are coming in the new year.

Sundar Pichai during the Google I/O Developers Conference in Mountain View, California, in May. Photographer: David Paul Morris/Bloomberg

By Shirin Ghaffary and Davey Alba
December 7, 2023 at 5:04 PM EST

Google stunned viewers this week with a video demo for its new ChatGPT rival. In one case, however, the technology doesn’t work quite as well as people assumed. But first…

Three things to know:

• EU representatives remain divided on AI rules after nearly 24 hours of debate

• A popular provider of AI-powered drive-thru systems relies heavily on humans to review orders

• Air Space Intelligence, an AI startup for aerospace, was valued at about $300 million in new financing

Google’s duck problem

When Google unveiled Gemini, its long-awaited answer to ChatGPT, perhaps the most jaw-dropping use case involved a duck. In a pre-recorded video demo shared on Wednesday, a disembodied hand is shown drawing the animal. The AI system appears to analyze it in real-time as it’s drawn and responds with a human-sounding voice in conversation with the user.


Google CEO Sundar Pichai promoted the video on X, writing, “Best way to understand Gemini’s underlying amazing capabilities is to see them in action, take a look ⬇️.” Others on X said the demo was “mind-blowing” and “unreal.”

But the technology doesn’t quite work as well as people assumed, as many were quick to point out. Right now, Gemini doesn’t say its responses out loud, and you probably can’t expect its responses to be as polished as they appear in the video. Some Google employees are also calling out those discrepancies internally.

One Googler told Bloomberg that, in their view, the video paints an unrealistic picture of how easy it is to coax impressive results out of Gemini. Another staffer said they weren’t too surprised by the demo because they’re used to some level of marketing hype in how the company publicly positions its products. (Of course, all companies do.) "I think most employees who've played with any LLM technology know to take all of this with a grain of salt,” said the employee, referring to the acronym for large language models, which power AI chatbots. These people asked not to be identified for fear of professional repercussions.

“Our Hands on with Gemini demo video shows real outputs from Gemini. We created the demo by capturing footage in order to test Gemini’s capabilities on a wide range of challenges,” Google said in a statement. “Then we prompted Gemini using still image frames from the footage, and prompting via text.”

To its credit, Google did disclose that what’s shown in the video is not exactly how Gemini works in practice. “For the purposes of this demo, latency has been reduced and Gemini’s outputs have been shortened for brevity,” a description of the demo uploaded to YouTube reads. In other words, the video shows a shorter version of Gemini’s original responses and the AI system took longer to come up with them. Google told Bloomberg that individual words in Gemini’s responses were not changed and the voiceover captured excerpts from actual text prompting of Gemini.

Eli Collins, vice-president of product at Google DeepMind, told Bloomberg the duck-drawing demo was still a research-level capability and not in Google’s actual products, at least for now.

Gemini, released Wednesday, is the result of Google working throughout this year to catch up to OpenAI’s ChatGPT and regain its position as the undisputed leader in the AI industry. But the duck demo highlights the gap between the promise of Google’s AI technology and what users can experience right now.

Google said Gemini is its largest, most capable and flexible AI model to date, replacing PaLM 2, released in May. The company said Gemini exceeds leading AI models in 30 out of 32 benchmarks testing for reasoning, math, language and other metrics. It specifically beats GPT-4, one of OpenAI’s most recent AI models, in seven out of eight of those benchmarks, according to Google, although a few of those margins are slim. Gemini is also multimodal, which means it can understand video, images and code, setting it apart from GPT-4, which only accepts images and text as input.

“It’s a new era for us,” Collins said in an interview after the event. “We’re breaking ground from a research perspective. This is V1. It’s just the beginning.”

Google is releasing Gemini in a tiered rollout. Gemini Ultra, the most capable version and the one that the company says outperforms GPT-4 in most tests, won’t be released until early next year. Other features, like those demoed in the duck video, remain in development.

Internally, some Googlers have been discussing whether showing the video without a prominent disclosure could be misleading to the public. In a corporate company-wide forum, one Googler shared a meme implying the duck video was deceptively edited. Another meme showed a cartoon of Homer Simpson proudly standing upright in his underwear, with the caption: “Gemini demo prompts.” It was contrasted with a less flattering picture of Homer in the same position from behind, with his loose skin bunched up. The caption: “the real prompts.”

Another Googler said in a comment, “I guess the video creators valued the ‘storytelling’ aspect more.”



ChatGPT vs. Gemini: Hands on

For now, users can play around with the medium-tier version of Gemini in Google’s free chatbot, Bard. The company said this iteration outperformed the comparable version of OpenAI’s GPT model (GPT 3.5) in six out of eight industry benchmark tests.

In our own limited testing with the new-and-revamped Bard, we found it mostly to be on par or better than ChatGPT 3.5, and in some ways better than the old Bard. However, it’s still unreliable on some tasks.

Out of seven SAT math and reasoning questions we prompted Bard with, it correctly answered four, incorrectly answered two and said it didn’t know the answer for one. It also answered one out of three reading comprehension questions correctly. When we tested GPT 3.5, it yielded similar results, but it was able to answer one question that stumped Bard.

Bard, like all large-language models, still hallucinates or provides incorrect information at times. When we asked Bard, for example, to name what AI model it runs on, it incorrectly told me PaLM2, the previous version it used.

On some planning-oriented tasks, Bard’s capabilities did seem like a clear improvement over the previous iteration of the product and compared to ChatGPT. When asked to plan a girls’ trip to Miami, for example, Bard gave me a useful day-by-day breakdown separated into morning, afternoon and evening itineraries. For the first day, it started with a “delicious Cuban breakfast” at a local restaurant, a boat tour of Biscayne Bay and a night out in South Beach. When I gave the same prompt to ChatGPT 3.5, the answers were longer and less specific.

To test out Bard’s creativity, we asked it to write a poem about the recent boardroom chaos at OpenAI. It came up with some brooding lines, including: "OpenAI, in turmoil's grip/Saw visions shattered, alliances split." GPT 3.5’s poem didn’t quite capture the mood as well because it only has access to online information through early 2022. Those paying for ChatGPT 4, however, can get real-time information, and its poetry was more on topic: "Sam Altman, a name, in headlines cast/A leader in question, a future vast."

In our interview, DeepMind’s Collins said Bard is “one of the best free chatbots” in the world now with the Gemini upgrade. Based on our limited testing, he may be right.

Got a question about AI? Email me, Shirin Ghaffary, and I'll try to answer yours in a future edition of this newsletter.
 

Trav

Marathon Mentality 🏁
Supporter
Joined
May 26, 2012
Messages
30,289
Reputation
5,609
Daps
84,107
Reppin
TMC 8-24


But u still chargin' niccas tho :mjlol:

Both it and Bard been on some mf bullshyt for me as far as coding and stuff is concerned. Like not even asking it to complete an entire project but even just some minuscule ish like "what does this mean" where there is a concrete answer, but it's spitting out a different answer every 3 inquiries, or Bard going as far as to have multiple answer drafts for something that shouldn't even be a question.
 

bnew

Veteran
Joined
Nov 1, 2015
Messages
57,369
Reputation
8,499
Daps
160,085


But u still chargin' niccas tho :mjlol:

Both it and Bard been on some mf bullshyt for me as far as coding and stuff is concerned. Like not even asking it to complete an entire project but even just some minuscule ish like "what does this mean" where there is a concrete answer, but it's spitting out a different answer every 3 inquiries, or Bard going as far as to have multiple answer drafts for something that shouldn't even be a question.


nah, more AI chat model services should adopt Bard's 3-draft approach. one of the best features of the service.

bullshyt responses like what?
 

Trav

Marathon Mentality 🏁
Supporter
Joined
May 26, 2012
Messages
30,289
Reputation
5,609
Daps
84,107
Reppin
TMC 8-24
nah, more AI chat model services should adopt Bard's 3-draft approach. one of the best features of the service.

bullshyt responses like what?

I'm cool with it if you're asking it something like "create a bedtime story for my kid and here are some personal details you can use" and it gives you three drafts for that. Or you ask "how can you use a certain feature of a program to improve the animation of xyz" and it gives you three drafts for that.

But if I need a concrete answer for, say, an assessment or something and there is truly only one answer, then the logic behind its three drafts is fukked cuz it's programmed to come up with them no matter what. Like, I might hypothetically ask Bard if I'm using a certain operator or function within a particular piece of code to join to another table in a relational database, and in the context I give it there is absolutely only one answer, but it spits back 3 anyway, you're fukking me.
 

bnew

Veteran
Joined
Nov 1, 2015
Messages
57,369
Reputation
8,499
Daps
160,085













Mistral "Mixtral" 8x7B 32k model

magnet:?xt=urn:btih:5546272da9065eddeb6fcd7ffddeef5b75be79a7&dn=mixtral-8x7b-32kseqlen&tr=udp%3A%2F%2Fopentracker.i2p.rocks%3A6969%2Fannounce&tr=http%3A%2F%2Ftracker.openbittorrent.com%3A80%2Fannounce
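if you'd rather not pull the raw torrent, here's a minimal sketch of grabbing the weights from the Hugging Face Hub instead; the repo id assumed below is the official mistralai release, but verify it (and the file patterns) before running:

```python
# Sketch: fetch Mixtral weights from the Hugging Face Hub instead of the torrent.
# Assumes the official repo id "mistralai/Mixtral-8x7B-v0.1"; check it before running.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="mistralai/Mixtral-8x7B-v0.1",
    allow_patterns=["*.json", "*.safetensors", "tokenizer.model"],  # skip extras
)
print("weights downloaded to:", local_dir)
```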

 
Last edited:

bnew

Veteran
Joined
Nov 1, 2015
Messages
57,369
Reputation
8,499
Daps
160,085

Computer Science > Computation and Language​

[Submitted on 1 Dec 2023]

Instruction-tuning Aligns LLMs to the Human Brain​

Khai Loong Aw, Syrielle Montariol, Badr AlKhamissi, Martin Schrimpf, Antoine Bosselut
Instruction-tuning is a widely adopted method of finetuning that enables large language models (LLMs) to generate output that more closely resembles human responses to natural language queries, in many cases leading to human-level performance on diverse testbeds. However, it remains unclear whether instruction-tuning truly makes LLMs more similar to how humans process language. We investigate the effect of instruction-tuning on LLM-human similarity in two ways: (1) brain alignment, the similarity of LLM internal representations to neural activity in the human language system, and (2) behavioral alignment, the similarity of LLM and human behavior on a reading task. We assess 25 vanilla and instruction-tuned LLMs across three datasets involving humans reading naturalistic stories and sentences. We discover that instruction-tuning generally enhances brain alignment by an average of 6%, but does not have a similar effect on behavioral alignment. To identify the factors underlying LLM-brain alignment, we compute correlations between the brain alignment of LLMs and various model properties, such as model size, various problem-solving abilities, and performance on tasks requiring world knowledge spanning various domains. Notably, we find a strong positive correlation between brain alignment and model size (r = 0.95), as well as performance on tasks requiring world knowledge (r = 0.81). Our results demonstrate that instruction-tuning LLMs improves both world knowledge representations and brain alignment, suggesting that mechanisms that encode world knowledge in LLMs also improve representational alignment to the human brain.
Subjects:Computation and Language (cs.CL)
Cite as:arXiv:2312.00575 [cs.CL]
(or arXiv:2312.00575v1 [cs.CL] for this version)
https://doi.org/10.48550/arXiv.2312.00575

Submission history​

From: Khai Loong Aw [view email]
[v1] Fri, 1 Dec 2023 13:31:02 UTC (2,029 KB)
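the r = 0.95 above is presumably a Pearson correlation between per-model brain-alignment scores and model size; here's a tiny illustration of that computation with made-up numbers (nothing below is the paper's actual data, and whether they used raw or log size is an assumption):

```python
import numpy as np

# Hypothetical brain-alignment scores and parameter counts for a handful of models
# (placeholder values, not the paper's data).
params_billions = np.array([0.3, 1.3, 2.7, 6.7, 13.0, 30.0])
brain_alignment = np.array([0.21, 0.28, 0.31, 0.36, 0.40, 0.44])

# Pearson correlation, e.g. between log model size and alignment score.
r = np.corrcoef(np.log(params_billions), brain_alignment)[0, 1]
print(f"Pearson r = {r:.2f}")
```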


 

bnew

Veteran
Joined
Nov 1, 2015
Messages
57,369
Reputation
8,499
Daps
160,085



Model Card for una-cybertron-7b-v2-bf16 (UNA: Uniform Neural Alignment)​

We strike back, introducing Cybertron 7B v2, a 7B MistralAI-based model, the best in its series. Trained with SFT, DPO and UNA (Unified Neural Alignment) on multiple datasets. It scores EXACTLY #1 with a 69.67+ score on the HF LeaderBoard, and is the #8 top score across ALL SIZES.

  • v1 Scoring #1 on 2 December 2023 with 69.43 .. a few models were released .. but only 1 can survive: CYBERTRON!
  • v2 Scoring #1 on 5 December 2023 with 69.67
Model | Average | ARC (25-s) | HellaSwag (10-s) | MMLU (5-s) | TruthfulQA (MC) (0-s) | Winogrande (5-s) | GSM8K (5-s)
mistralai/Mistral-7B-v0.1 | 60.97 | 59.98 | 83.31 | 64.16 | 42.15 | 78.37 | 37.83
Intel/neural-chat-7b-v3-2 | 68.29 | 67.49 | 83.92 | 63.55 | 59.68 | 79.95 | 55.12
perlthoughts/Chupacabra-7B-v2 | 63.54 | 66.47 | 85.17 | 64.49 | 57.6 | 79.16 | 28.35
fblgit/una-cybertron-7b-v1-fp16 | 69.49 | 68.43 | 85.85 | 63.34 | 63.28 | 80.90 | 55.12
fblgit/una-cybertron-7b-v2-bf16 | 69.67 | 68.26 | 85.?4 | 63.23 | 64.63 | 81.37 | 55.04
The model excels in mathematics, logic and reasoning, and is overall very smart. It can reason deeply over the context and prompt, and gives the impression of not missing details.
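a minimal sketch of loading the card's model for a quick test, assuming the standard Hugging Face transformers text-generation flow (the prompt format here is a guess; check the card for its chat template):

```python
# Sketch only: load fblgit/una-cybertron-7b-v2-bf16 and generate a short reply.
# Assumes a recent transformers install and a GPU with enough memory for bf16 7B weights.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "fblgit/una-cybertron-7b-v2-bf16"
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16,
                                             device_map="auto")

prompt = "Explain, step by step, why 17 is prime."   # arbitrary reasoning-flavored prompt
inputs = tok(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=200, do_sample=False)
print(tok.decode(out[0], skip_special_tokens=True))
```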

Model Details​

Trained with UNA: Uniform Neural Alignment technique (paper going out soon).

  • What is NOT UNA? It's not a merged-layers model. It's not SLERP or SLURP or similar.
  • What is UNA? A formula and a technique to TAME models.
  • When will the code and paper be released? When there's time; contribute and it'll be faster.

Model Description​







Model Card for una-xaberius-34b-v1-beta (UNA: Uniform Neural Alignment)​

This is another King-Breed from Juanako.AI

Introducing THE MODEL: XABERIUS 34B v1-BETA, an experimental 34B LLaMa-Yi-34B based model, the best in its series. Trained with SFT, DPO and UNA (Unified Neural Alignment) on multiple datasets.

Timeline:

  • 05-Dec-2023 v1-beta released
  • 08-Dec-2023 Evaluation has been "RUNNING" for 2 days .. no results yet
Model | Average | ARC (25-s) | HellaSwag (10-s) | MMLU (5-s) | TruthfulQA (MC) (0-s) | Winogrande (5-s) | GSM8K (5-s)
fblgit/una-cybertron-7b-v1-fp16 | 69.49 | 68.43 | 85.85 | 63.34 | 63.28 | 80.90 | 55.12
fblgit/una-cybertron-7b-v2-bf16 | 69.67 | 68.26 | 85.?4 | 63.23 | 64.63 | 81.37 | 55.04
fblgit/una-xaberius-34b-v1beta | 74.21 | 70.39 | 86.72 | 79.13 | 61.55 | 80.26 | 67.24
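as a sanity check on the tables above, the leaderboard "Average" column appears to be the plain mean of the six task scores; a quick computation reproduces the headline numbers for the rows where every digit survived:

```python
# Quick check: the "Average" column looks like the plain mean of the six task scores.
rows = {
    "fblgit/una-cybertron-7b-v1-fp16": ([68.43, 85.85, 63.34, 63.28, 80.90, 55.12], 69.49),
    "fblgit/una-xaberius-34b-v1beta":  ([70.39, 86.72, 79.13, 61.55, 80.26, 67.24], 74.21),
}
for name, (scores, reported_avg) in rows.items():
    mean = sum(scores) / len(scores)
    # Computed means land within rounding distance (<0.01) of the reported averages.
    print(f"{name}: computed {mean:.3f} vs reported {reported_avg}")
```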

Evaluations​

  • Scores 74.21, outperforming former leader tigerbot-70b-chat and landing in the #1 position on the HuggingFace LeaderBoard: 08 December 2023.
  • Scores 79.13 on MMLU, setting a new record not just for 34B models but for all open-source LLMs :smile:

Model Details​

Trained with UNA: Uniform Neural Alignment technique (paper going out soon).

  • What is NOT UNA? It's not a merged-layers model. It's not SLERP or SLURP or similar.
  • What is UNA? A formula and a technique to TAME models.
  • When will the code and paper be released? When there's time; contribute and it'll be faster.

Model Description​

 