bnew

Microsoft sneakily updated Phi-3 Mini!
> Significantly increased code understanding in Python, C++, Rust, and TypeScript.
> Enhanced post-training for better-structured output.
> Improved multi-turn instruction following.
> Support for <|system|> tag.
> Improved reasoning and long-context understanding.

Both the 4K and 128K context model checkpoints were updated! GG MSFT!

Phi-3 just became even better :D
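For anyone wondering what the new <|system|> tag support looks like in practice, here's a minimal sketch of a prompt built with it, assuming the update follows Phi-3's published <|user|>/<|assistant|>/<|end|> chat convention (the exact template is an assumption, not something the tweet confirms):

```python
# Minimal sketch of a Phi-3 Mini prompt using the newly supported
# <|system|> tag. The <|user|>/<|assistant|>/<|end|> layout follows
# Phi-3's published chat convention; treat the exact template as an
# assumption and check the model card before relying on it.
prompt = (
    "<|system|>\n"
    "You are a helpful coding assistant.<|end|>\n"
    "<|user|>\n"
    "Write a TypeScript function that deduplicates an array.<|end|>\n"
    "<|assistant|>\n"
)
print(prompt)
```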


bnew



The moment we stopped understanding AI [AlexNet]


Shared July 1, 2024

Welch Labs video on AlexNet and how modern deep networks became hard to interpret. The original description lists references including the AlexNet paper (NeurIPS 2012), Distill's "Activation Atlas" (Carter et al., 2019) and "Feature Visualization" (Olah et al., 2017), and Anthropic's "Scaling Monosemanticity" (Templeton et al., 2024).
 

bnew




Mixture-of-Agents Enhances Large Language Model Capabilities


Junlin Wang, Jue Wang, Ben Athiwaratkun, Ce Zhang, James Zou

Recent advances in large language models (LLMs) demonstrate substantial capabilities in natural language understanding and generation tasks. With the growing number of LLMs, how to harness the collective expertise of multiple LLMs is an exciting open direction. Toward this goal, we propose a new approach that leverages the collective strengths of multiple LLMs through a Mixture-of-Agents (MoA) methodology. In our approach, we construct a layered MoA architecture wherein each layer comprises multiple LLM agents. Each agent takes all the outputs from agents in the previous layer as auxiliary information in generating its response. MoA models achieve state-of-the-art performance on AlpacaEval 2.0, MT-Bench and FLASK, surpassing GPT-4 Omni. For example, our MoA using only open-source LLMs leads AlpacaEval 2.0 by a substantial gap, achieving a score of 65.1% compared to 57.5% by GPT-4 Omni.


Subjects: Computation and Language (cs.CL)
Cite as: arXiv:2406.04692 [cs.CL]
(or arXiv:2406.04692v1 [cs.CL] for this version)
[2406.04692] Mixture-of-Agents Enhances Large Language Model Capabilities

Submission history

From: Jue Wang
[v1] Fri, 7 Jun 2024 07:04:10 UTC (861 KB)
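The layered setup in the abstract is easy to picture in code. Here's a minimal sketch of the Mixture-of-Agents flow, assuming each agent is just a prompt-to-answer callable; the aggregation prompt wording is my own placeholder, not the paper's:

```python
from typing import Callable, List

Agent = Callable[[str], str]  # any chat-completion call wrapped as prompt -> answer

def mixture_of_agents(question: str, layers: List[List[Agent]]) -> str:
    """Run a layered MoA pass: each layer's agents see the previous
    layer's answers as auxiliary context, per the paper's abstract."""
    prev_answers: List[str] = []
    for agents in layers:
        if prev_answers:
            context = "\n\n".join(
                f"Candidate {i + 1}: {a}" for i, a in enumerate(prev_answers)
            )
            prompt = (
                "Synthesize the candidate responses below into a single, "
                f"better response.\n\n{context}\n\nQuestion: {question}"
            )
        else:
            prompt = question  # first layer answers the raw question
        prev_answers = [agent(prompt) for agent in agents]
    return prev_answers[0]  # final layer is typically one aggregator agent
```

With this shape, the first layer might be several different open-source models and the last layer a single aggregator, which is roughly the configuration the abstract credits for the AlpacaEval 2.0 result.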


 

bnew

I've just uploaded llamafiles for Google's new Gemma2 language model. This is the 27B model that folks have been saying is better than 70B and 104B models like LLaMA3 and Command-R+. People are even saying it's better than the original GPT4! Now you can run it air-gapped on your own computer, using a single file.

Your ./gemma-2-27b-it.Q6_K.llamafile is 22GB and we put a lot of work into making sure it produces outputs consistent with Google's hosted version. It's great for coding, roleplaying, admiring the quality of Google engineering, and more.

The tradeoff is that the license is a bit cray. It's not open source. Like the Cohere license, Gemma's license is about as bad as they come. Its list of unacceptable uses is unacceptably broad. The terms of the license can be changed by Google at any time. They also refer to this as a Gemma Service (which they don't define) and say they reserve the right to "remotely" disable it if you violate the agreement.

It's totally a good look for the one company on Earth that has a backdoor into literally everything I own to remind me they might execute Order 66 with Chrome and hack into my computer to remotely disable the floating point array file I just downloaded.

When Gemma v1 was released back in February, we saw how certain Googlers responsible for RLHF training brought great shame and embarrassment to everyone in AI who cares about safety. Perhaps Google's legal team deserves a closer look too.

In any case, the pros clearly outweigh the cons. I think Gemma2 is worth trying! You can download the 27B llamafiles here: jartine/gemma-2-27b-it-llamafile · Hugging Face I've also posted the tinier 9B llamafiles here: jartine/gemma-2-9b-it-llamafile · Hugging Face Enjoy!
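If you'd rather script against the model than use a browser UI, a llamafile started in server mode exposes an OpenAI-compatible endpoint (by default on localhost:8080). Here's a minimal stdlib-only Python sketch; the port and model name are assumptions to adjust for your setup:

```python
# Minimal sketch: query a running llamafile from Python.
# Assumes the server was started first, e.g.
#   chmod +x gemma-2-27b-it.Q6_K.llamafile && ./gemma-2-27b-it.Q6_K.llamafile
# and that it serves an OpenAI-compatible API on localhost:8080
# (the llamafile default); adjust the URL if yours differs.
import json
import urllib.request

payload = {
    "model": "gemma-2-27b-it",  # placeholder; the local server ignores or echoes it
    "messages": [{"role": "user", "content": "Write a haiku about Google engineering."}],
}
req = urllib.request.Request(
    "http://localhost:8080/v1/chat/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.load(resp)["choices"][0]["message"]["content"])
```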


 

bnew


Altrove uses AI models and lab automation to create new materials

Romain Dillet
12:00 AM PDT • July 3, 2024


For the past couple of years, innovation has been accelerating in new materials development. And a new French startup called Altrove plans to play a role in this innovation cycle. The deep tech startup has already raised €3.7 million (around $4 million at current exchange rates).

If you’re interested in new materials development, you may have noticed that several teams have shared important breakthroughs with the research community when it comes to materials prediction.

“Historically, over the last 50 years, R&D to find new materials has advanced at a very slow pace,” Altrove co-founder and CEO Thibaud Martin told TechCrunch. There have been several bottlenecks. And an important one has been the starting point — how can you predict if materials made out of a handful of elements can theoretically exist?

When you assemble two different chemical elements, there are tens of thousands of possibilities. Add a third element and the number of combinations grows further still; with four elements, you get millions of possibilities.

Teams working for DeepMind, Microsoft, Meta or Orbital Materials have been developing artificial intelligence models to overcome calculation constraints and predict new materials that could potentially exist in a stable state. “More stable materials have been predicted in the last nine months than in the previous 49 years,” Martin said.

But solving this bottleneck is just one part of the equation. Knowing that new materials can exist isn’t enough when it comes to making new materials. You have to come up with the recipe.

“A recipe isn’t just about what you put together. It’s also about the proportions, at what temperature, in what order, for how long. So there are lots of factors, lots of variables involved in how you make new materials,” Martin said.

Altrove is focusing on inorganic materials, starting more specifically with rare earth elements. There's a market opportunity here because rare earth elements are hard to source, their pricing varies greatly, and they often come from China. Many companies are trying to rely less on China in their supply chains to avoid regulatory uncertainties.

Creating an automated iteration loop

The company doesn’t invent new materials from scratch but it selects interesting candidates out of all the new materials that have been predicted. Altrove then uses its own AI models to generate potential recipes for these materials.

Right now, the company tests these recipes one by one and produces a tiny sample of each material. Altrove has also developed a proprietary characterization technology that uses an X-ray diffractometer to check whether the output material performs as expected.

“It sounds trivial but it’s actually very complicated to check what you’ve made and understand why. In most cases, what you’ve made isn’t exactly what you were looking for in the first place,” Martin said.

This is where Altrove shines, as the company’s co-founder and CTO Joonatan Laulainen has a PhD in materials science and is an expert in characterization. The startup owns IP related to characterization.

Learning from the characterization step to improve your recipe is key when it comes to making new materials. That’s why Altrove wants to automate its lab so that it can test more recipes at once and speed up the feedback loop.

“We want to build the first high throughput methodology. In other words, pure prediction only takes you 30% of the way to having a material that can really be used industrially. The other 70% involves iterating in real life. That’s why it’s so important to have an automated lab because you increase the throughput and you can parallelize more experiments,” Martin said.
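To make the shape of that iteration loop concrete, here's a purely illustrative sketch; every function and parameter name is a hypothetical placeholder, not Altrove's actual system:

```python
def find_recipe(target, propose_recipes, synthesize, characterize,
                max_rounds=10, tolerance=0.05):
    """Hypothetical closed loop: propose candidate recipes, synthesize a
    tiny sample of each, characterize it by X-ray diffraction, and feed
    the mismatch back into the next round of proposals."""
    feedback = None
    for _ in range(max_rounds):
        for recipe in propose_recipes(target, feedback):
            sample = synthesize(recipe)        # tiny physical sample
            result = characterize(sample)      # e.g. a diffraction pattern
            if result.mismatch(target) < tolerance:
                return recipe                  # close enough to the target
            feedback = result                  # learn from the failed attempt
    return None
```

An automated, high-throughput lab effectively parallelizes the inner loop, which is the point Martin makes about iterating in real life.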

Altrove defines itself as a hardware-enabled AI company. It thinks it will sell licenses for its newly produced materials or make those materials itself with third-party partners. The company raised €3.7 million in a round led by Contrarian Ventures with Emblem also participating. Several business angels also invested in the startup, such as Thomas Clozel (Owkin CEO), Julien Chaumond (Hugging Face CTO) and Nikolaj Deichmann (3Shape founder).

The startup draws inspiration from biotech companies that have turned to AI to find new drugs and treatments — but this time for new materials. Altrove plans to build its automated lab by the end of the year and sell its first asset within 18 months.
 

bnew


Figma disables its AI design feature that appeared to be ripping off Apple's Weather app

Sarah Perez

7:48 AM PDT • July 2, 2024


Dylan Field, CEO at Figma on the TechCrunch Disrupt stage in San Francisco on October 20, 2022. Image Credit: Haje Kamps / TechCrunch

Figma CEO Dylan Field says the company will temporarily disable its “Make Design” AI feature that was said to be ripping off the designs of Apple’s own Weather app. The problem was first spotted by Andy Allen, the founder of NotBoring Software, which makes a suite of apps that includes a popular, skinnable Weather app and other utilities. He found by testing Figma’s tool that it would repeatedly reproduce Apple’s Weather app when used as a design aid.

Allen had taken to X, formerly Twitter, to accuse Figma of “heavily” training its tool on existing apps — an accusation Field now denies.



The Make Design feature is available within Figma’s software and will generate UI (user interface) layouts and components from text prompts. “Just describe what you need, and the feature will provide you with a first draft,” is how the company explained it when the feature launched.

The idea was that developers could use the feature to help get their ideas down quickly to begin exploring different design directions and then arrive at a solution faster, Figma said.

The feature was introduced at Figma’s Config conference last week, where the company explained that it was not trained on Figma content, community files or app designs, Field notes in his response on X.

“In other words, the accusations around data training in this tweet are false,” he said.

But in its haste to launch new AI features and remain competitive, Figma seems to have overlooked the quality assurance work that should accompany new additions.

Mirroring complaints in other industries, some designers immediately argued that Figma’s AI tools, like Make Design, would wipe out jobs by bringing digital design to the mass market, while others countered that AI would simply help to eliminate a lot of the repetitive work that went into design, allowing more interesting ideas to emerge.



Allen’s discovery that Figma essentially seemed to be copying other apps led to increased concern among the design community.

“Just a heads up to any designers using the new Make Designs feature that you may want to thoroughly check existing apps or modify the results heavily so that you don’t unknowingly land yourself in legal trouble,” Allen warned others on X.

Field responded by clarifying that Make Design uses off-the-shelf large language models, combined with “systems we commissioned to be used by these models.” He said the problem with this approach is that the variability is too low.



“Within hours of seeing [Allen’s] tweet, we identified the issue, which was related to the underlying design systems that were created,” Field wrote on X. “Ultimately it is my fault for not insisting on a better QA process for this work and pushing our team hard to hit a deadline for Config.”

Apple was not immediately available for comment. Figma pointed to Field’s tweets as its statement on the matter.

Field says Figma will temporarily disable the Make Design feature until the team is confident it can “stand behind its output.” The feature will be disabled as of Tuesday and will not be re-enabled until Figma has completed a full QA pass on the feature’s underlying design system.
 