bnew

Veteran
Joined
Nov 1, 2015
Messages
51,805
Reputation
7,926
Daps
148,742







1/11
This is fast. Chrome running Gemini locally on my laptop. 2 lines of code.

2/11
No library or anything, it's a native part of some future version of Chrome

3/11
Does it work offline?

4/11
This is Chrome 128 Canary. You need to sign up for "Built-In AI proposal preview" to enable it

5/11
Seems very light on memory and CPU

6/11
wait why are they putting this into Chrome lol

are they trying to push this as a web standard of sorts or are they just going to keep this for themselves?

7/11
It's a proposal for all browsers

8/11
Query expansion like this could be promising

9/11
This is a great point!

10/11
No API key required? That would be great, I can run tons of instances of Chrome on the server as the back end of my wrapper apps.

11/11
Free, fast and private for everyone


To post tweets in this format, more info here: https://www.thecoli.com/threads/tips-and-tricks-for-posting-the-coli-megathread.984734/post-52211196







Jun 25, 2024


Get Access to Gemini Nano Locally Using Chrome Canary​

You can access Gemini Nano locally using Chrome Canary. It lets you use cutting-edge AI in your browser.

Gemini Nano


Explore the power of Gemini Nano, now available in Chrome Canary. While the official release is coming soon, you can already run the model locally on your computer today.

What is Gemini Nano?

Gemini Nano is a streamlined version of the larger Gemini model, designed to run locally. It uses the same datasets as its predecessors and keeps the original models' multimodal capabilities, but in a smaller form. Google had promised this for Chrome 126, but it has now appeared in Chrome Canary, which hints that an official release is near.

Benefits of Using Nano Locally

Running Gemini Nano locally offers several benefits:

  • Privacy: Local processing means your data doesn't have to leave your device, adding an extra layer of security and privacy.
  • Speed and Responsiveness: Because nothing has to be sent to a server, interactions can be quicker, improving the user experience.
  • Accessibility: Developers can add large language model capabilities to applications that work without constant internet access.



What is Chrome Canary?

It's the most experimental version of the Google Chrome web browser, designed primarily for developers and tech enthusiasts who want to test the latest features and APIs before they are widely available. While it offers cutting-edge functionality, it is also more prone to crashes and instability due to its experimental nature.

  • Canary is updated daily with the latest changes, often with minimal or no testing from Google.
  • It typically runs a few versions ahead of the Stable channel.
  • Canary includes all features of normal Chrome, plus experimental functionality.
  • It can run alongside other Chrome versions and is available for Windows, macOS, and Android.



Launching Gemini Nano Locally with Chrome Canary

To get started with Gemini Nano locally using Chrome Canary, follow these steps:

  1. Download and set up Chrome Canary, ensuring the language is set to English (United States).
  2. In the address bar, enter chrome://flags
  3. Set:
    • 'Enables optimization guide on device' to 'Enabled BypassPerfRequirement'
    • 'Prompt API for Gemini Nano' to 'Enabled'



Chrome Flags Enabled

  4. Restart Chrome.
  5. Wait for Gemini Nano to download. To check the status, navigate to chrome://components and ensure that the Optimization Guide On Device Model shows version 2024.6.5.2205 or higher. If not, click 'Check for updates'.
  6. Congratulations! You're all set to explore Gemini Nano for chat applications. Although the model is much smaller and simpler than its cloud-hosted siblings, it's a major stride: web developers now have access to a local LLM for inference.
  7. You can chat with the Chrome AI model here: https://www.localhostai.xyz, or call it directly from the page, as in the sketch below.
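
For reference, the "2 lines of code" from the tweet above boil down to creating a session and prompting it from the page (or the DevTools console). Below is a minimal sketch assuming the early experimental surface (window.ai.canCreateTextSession / createTextSession / prompt) reported in Canary builds around this time; the proposal was still in flux, so the exact names may differ in your build.

```typescript
// Minimal sketch of the experimental Prompt API in Chrome Canary (mid-2024 builds).
// window.ai, canCreateTextSession, createTextSession and prompt belong to an early
// proposal and are assumed here; the surface may differ or change in later builds.
async function askGeminiNano(question: string): Promise<string> {
  const ai = (window as any).ai; // experimental, not in standard DOM typings
  if (!ai?.canCreateTextSession || (await ai.canCreateTextSession()) === "no") {
    throw new Error("Built-in model not ready; check chrome://flags and chrome://components");
  }
  const session = await ai.createTextSession(); // loads Gemini Nano on device
  return session.prompt(question);              // runs locally, no API key needed
}

// Example (e.g. from the DevTools console after compiling to JS):
// askGeminiNano("Summarize this page in one sentence.").then(console.log);
```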


Chat Gemini Nano Locally


Conclusion

Gemini Nano is now available on Chrome Canary, a big step forward for local AI. It processes data on your device, which increases privacy and speeds things up. This also makes advanced technology easier for more people to use. Gemini Nano gives developers and tech fans a new way to try out AI. This helps create a stronger and more efficient local tech community and shows what the future of independent digital projects might look like.
 

bnew

Veteran
Joined
Nov 1, 2015
Messages
51,805
Reputation
7,926
Daps
148,742



There’s also a paper: [2406.04692] Mixture-of-Agents Enhances Large Language Model Capabilities

Mixture-of-Agents Enhances Large Language Model Capabilities
Recent advances in large language models (LLMs) demonstrate substantial capabilities in natural language understanding and generation tasks. With the growing number of LLMs, how to harness the collective expertise of multiple LLMs is an exciting open direction. Toward this goal, we propose a new approach that leverages the collective strengths of multiple LLMs through a Mixture-of-Agents (MoA) methodology. In our approach, we construct a layered MoA architecture wherein each layer comprises multiple LLM agents. Each agent takes all the outputs from agents in the previous layer as auxiliary information in generating its response. MoA models achieve state-of-the-art performance on AlpacaEval 2.0, MT-Bench, and FLASK, surpassing GPT-4 Omni. For example, our MoA using only open-source LLMs is the leader of AlpacaEval 2.0 by a substantial gap, achieving a score of 65.1% compared to 57.5% by GPT-4 Omni.
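
To make the layered setup concrete, here is a rough sketch of the loop the abstract describes: each layer's agents answer with the previous layer's outputs as auxiliary context, and a final aggregator model produces the answer. The callModel helper and the prompt wording are placeholders for illustration, not the paper's actual implementation.

```typescript
// Sketch of a layered Mixture-of-Agents (MoA) pass, following the abstract's description.
// callModel(name, prompt) is a hypothetical helper wrapping whichever LLM API you use;
// the wording used to pass previous responses along is also an assumption.
declare function callModel(name: string, prompt: string): Promise<string>;

async function mixtureOfAgents(
  question: string,
  layers: string[][], // e.g. [["modelA", "modelB", "modelC"], ["modelA", "modelB", "modelC"]]
  aggregator: string, // model that writes the final answer
): Promise<string> {
  let previousOutputs: string[] = [];
  for (const agents of layers) {
    // Every agent in this layer sees the question plus all outputs from the previous layer.
    const context = previousOutputs
      .map((out, i) => `Response ${i + 1}:\n${out}`)
      .join("\n\n");
    previousOutputs = await Promise.all(
      agents.map((model) =>
        callModel(model, `${question}\n\nResponses from the previous layer:\n${context}`),
      ),
    );
  }
  // Final step: a single aggregator synthesizes the last layer's outputs into one answer.
  return callModel(
    aggregator,
    `Synthesize the best single answer to: ${question}\n\n${previousOutputs.join("\n\n")}`,
  );
}
```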



1/1
Mixture of Agents—a framework that leverages the collective strengths of multiple LLMs. Each layer contains multiple agents that refine responses using outputs from the preceding layer.
Together MoA achieves a score of 65.1% on AlpacaEval 2.0.
Together MoA — collective intelligence of open-source models pushing the frontier of LLM capabilities



 

bnew

Veteran
Joined
Nov 1, 2015
Messages
51,805
Reputation
7,926
Daps
148,742






1/11
Claude 3.5 Sonnet transformed a research paper into an interactive learning dashboard in just 30 seconds.

It goes beyond the capabilities of GPT-4o, Gemini Pro, Llama and other existing LLMs.

Education will never be the same again with AI.

2/11
Find all the awesome LLM app demos with RAG in the following GitHub repo.

P.S: Don't forget to star the repo to show your support 🌟
GitHub - Shubhamsaboo/awesome-llm-apps: Collection of awesome LLM apps with RAG using OpenAI, Anthropic, Gemini and opensource models.

3/11
Stay tuned to learn about more of my experiments with Claude Sonnet 3.5.

If you find this useful, RT to share it with your friends.

Don't forget to follow me @Saboo_Shubham_ for more such LLM tips and tutorials.

4/11
This is a great use case...the best way to understand a concept is through interactive learning

5/11
Couldn't agree more.

6/11
This is seriously good.

7/11
Thank you!

8/11
Would be great to export it to an LMS directly.

9/11
That's a great idea. It could be the beginning of a new kind of product.

10/11
awesome

11/11
Thanks!


 

bnew

Veteran
Joined
Nov 1, 2015
Messages
51,805
Reputation
7,926
Daps
148,742


1/1
WhatsApp beta for Android 2.24.14.7: what's new?

WhatsApp is working on a feature to choose the Meta AI Llama model, and it will be available in a future update!

WhatsApp beta for Android 2.24.14.7: what's new? | WABetaInfo










WhatsApp beta for Android 2.24.14.7: what’s new?





Jun 27, 2024

WhatsApp is rolling out a new update through the Google Play Beta Program, bringing the version up to 2.24.14.7.

What’s new in this update? WhatsApp is working on a feature to choose the Meta AI Llama model, and it will be available in a future update!


Common questions and answers:
  • Name of the feature? Meta Llama model
  • Status? Under development
  • Compatibility? WhatsApp is working on bringing a feature to choose the Meta AI Llama model to a future update of the app.
  • I've installed this update but I don't have this feature. Why? This feature is under development, so it's not ready for beta testers. However, it's possible to discover a preview in this article.
  • Previous news? WhatsApp beta for iOS 24.13.10.71: what's new? WhatsApp is rolling out a notification feature for media upload events!

META LLAMA MODEL​

In the article about the WhatsApp beta for Android 2.24.9.9 update, we announced that WhatsApp started limited testing of Meta AI on WhatsApp in different countries. This new phase of testing aimed to further explore how users interact with Meta AI through its integration into the search bar for select users and an alternative entry point within the top app bar for users in India, while preserving privacy of these interactions. It appears that WhatsApp is now committed to further improving Meta AI through advanced models, as discovered in the latest WhatsApp beta for Android 2.24.14.7 update from the Google Play Store.

Attached image

As you can see from the attached screenshot, WhatsApp is exploring a feature to allow users to choose which model to use for Meta Llama, scheduled for release in a future update of the app. This feature appears to be intended for AI conversations, as indicated in the description of the section. The default option is Llama 3-70B, but WhatsApp has plans to offer a preview of the latest Llama 3-405B model for handling more complex prompts in the future. It’s important to note that the Llama 3-405B model will be available for a limited number of prompts each week. Once users reach this limit, they will be able to continue their conversation using the default model.

In our opinion, users will gain more control over their AI interactions by choosing which model to use based on their specific needs. This flexibility will allow users to tailor their experience, opting for the default model, Llama 3-70B, for quicker and simpler prompts or selecting the advanced Llama 3-405B model for more complex queries. Users will be able to leverage the faster response times of the default model for everyday use while choosing the latest model’s advanced capabilities for more intricate tasks, ensuring they get the best possible assistance for any situation.

A feature to choose the Meta AI Llama model is under development and it will be available in a future update. We will keep you updated with an additional article as soon as we have more details to share about this feature.

 

bnew

Veteran
Joined
Nov 1, 2015
Messages
51,805
Reputation
7,926
Daps
148,742

Character.AI now allows users to talk with AI avatars over calls​

Ivan Mehta

7:30 AM PDT • June 27, 2024


Character.ai displayed on smartphones
Image Credits: Character.ai

a16z-backed Character.AI said today that it is now allowing users to talk to AI characters over calls. The feature currently supports multiple languages, including English, Spanish, Portuguese, Russian, Korean, Japanese and Chinese.

The startup tested the calling feature ahead of today’s public launch. During that time, it said that more than 3 million users had made over 20 million calls. The company also noted that calls with AI characters can be useful for practicing language skills, giving mock interviews, or adding them to the gameplay of role-playing games.

Users can initiate calls with a user-generated AI character directly with a tap of a button. Character.AI said that this feature is built to make the calling experience smooth with reduced latency.

You can switch between calling and texting seamlessly, as well as stop the AI from talking through a “Tap to interrupt” option.

Earlier in March, Character.AI launched Character Voices so users can hear the characters speak in one-on-one chats. The company said that, with the feature allowing users to create their own voices, there are now over 1 million voices in the app's library.

The company’s new calling feature comes at a time when users on Reddit are complaining about their characters’ changing personalities. In response to these complaints, the startup told 404 Media that the company didn’t make any major changes, but some users could have been part of a test and noticed behavioral differences in characters because of that.

Earlier this week, The Information reported that Google is working on its own customized chatbot platform to take on Meta and Character.AI.
 

bnew

Veteran
Joined
Nov 1, 2015
Messages
51,805
Reputation
7,926
Daps
148,742


1/1
Microsoft sneakily updated Phi-3 Mini!
> Significantly increased code understanding in Python, C++, Rust, and TypeScript.
> Enhanced post-training for better-structured output.
> Improved multi-turn instruction following.
> Support for <|system|> tag.
> Improved reasoning and long-context understanding.

Updates were done to both 4K and 128K context model checkpoints! GG MSFT for the updates!

Phi-3 just became even better :D
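
For context on the <|system|> tag mentioned in the update above, here is a hedged sketch of the chat layout it implies, based on the published Phi-3 chat format; the exact special tokens are an assumption and may not match the updated checkpoints exactly.

```typescript
// Assumed Phi-3 chat layout using the newly supported <|system|> tag.
// The special tokens (<|system|>, <|user|>, <|assistant|>, <|end|>) follow the
// published Phi-3 chat format and may differ in the updated checkpoints.
function buildPhi3Prompt(system: string, user: string): string {
  return [
    `<|system|>\n${system}<|end|>`,
    `<|user|>\n${user}<|end|>`,
    `<|assistant|>\n`, // generation continues from here
  ].join("\n");
}

// Example:
// console.log(buildPhi3Prompt("You are a concise assistant.", "Explain long-context attention."));
```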






 

bnew

Veteran
Joined
Nov 1, 2015
Messages
51,805
Reputation
7,926
Daps
148,742


The moment we stopped understanding AI [AlexNet]​


Shared July 1, 2024

Thanks to KiwiCo for sponsoring today's video! Go to www.kiwico.com/welchlabs and use code WELCHLABS for 50% off your first month of monthly lines and/or for 20% off your first Panda Crate.

Activation Atlas Posters!
www.welchlabs.com/resources/5gtnaauv6nb9lrhoz9cp60…
www.welchlabs.com/resources/activation-atlas-poste…
www.welchlabs.com/resources/large-activation-atlas…
www.welchlabs.com/resources/activation-atlas-poste…

Special thanks to the Patrons: Juan Benet, Ross Hanson, Yan Babitski, AJ Englehardt, Alvin Khaled, Eduardo Barraza, Hitoshi Yamauchi, Jaewon Jung, Mrgoodlight, Shinichi Hayashi, Sid Sarasvati, Dominic Beaumont, Shannon Prater, Ubiquity Ventures, Matias Forti

Welch Labs
Ad free videos and exclusive perks: www.patreon.com/welchlabs
Watch on TikTok: www.tiktok.com/@welchlabs
Learn More or Contact: www.welchlabs.com/
Instagram: www.instagram.com/welchlabs
X: twitter.com/welchlabs

References
AlexNet paper: proceedings.neurips.cc/paper_files/paper/2012/file…
Original Activation Atlas article (great interactive atlas, explore here): distill.pub/2019/activation-atlas/ (Carter, et al., "Activation Atlas", Distill, 2019.)
Feature Visualization article: distill.pub/2017/feature-visualization/ (Olah, et al., "Feature Visualization", Distill, 2017.)
Great LLM explainability work: transformer-circuits.pub/2024/scaling-monosemantic… (Templeton, et al., "Scaling Monosemanticity: Extracting Interpretable Features from Claude 3 Sonnet", Transformer Circuits Thread, 2024.)
"Deep Visualization Toolbox" by Jason Yosinski, a video that inspired many visuals: • Deep Visualization Toolbox
Great LLM/GPT intro paper: arxiv.org/pdf/2304.10557
3B1B's GPT videos are excellent, as always: • Attention in transformers, visually e... • But what is a GPT? Visual intro to t...
Andrej Karpathy's walkthrough is amazing: • Let's build GPT: from scratch, in cod...
Goodfellow's Deep Learning Book: www.deeplearningbook.org/
OpenAI's 10,000 V100 GPU cluster (1+ exaflop): news.microsoft.com/source/features/innovation/open…
GPT-3 size, etc.: Language Models are Few-Shot Learners, Brown et al., 2020.
Unique token count for ChatGPT: cookbook.openai.com/examples/how_to_count_tokens_w…
GPT-4 training size etc. (speculative): patmcguinness.substack.com/p/gpt-4-details-reveale… www.semianalysis.com/p/gpt-4-architecture-infrastr…
Historical neural network videos: • Convolutional Network Demo from 1989 • Perceptron Research from the 50's & 6...
 

bnew

Veteran
Joined
Nov 1, 2015
Messages
51,805
Reputation
7,926
Daps
148,742



Mixture-of-Agents Enhances Large Language Model Capabilities​


Junlin Wang, Jue Wang, Ben Athiwaratkun, Ce Zhang, James Zou

Recent advances in large language models (LLMs) demonstrate substantial capabilities in natural language understanding and generation tasks. With the growing number of LLMs, how to harness the collective expertise of multiple LLMs is an exciting open direction. Toward this goal, we propose a new approach that leverages the collective strengths of multiple LLMs through a Mixture-of-Agents (MoA) methodology. In our approach, we construct a layered MoA architecture wherein each layer comprises multiple LLM agents. Each agent takes all the outputs from agents in the previous layer as auxiliary information in generating its response. MoA models achieve state-of-the-art performance on AlpacaEval 2.0, MT-Bench, and FLASK, surpassing GPT-4 Omni. For example, our MoA using only open-source LLMs is the leader of AlpacaEval 2.0 by a substantial gap, achieving a score of 65.1% compared to 57.5% by GPT-4 Omni.


Subjects: Computation and Language (cs.CL)
Cite as: arXiv:2406.04692 [cs.CL]
(or arXiv:2406.04692v1 [cs.CL] for this version)
[2406.04692] Mixture-of-Agents Enhances Large Language Model Capabilities

Submission history​

From: Jue Wang [view email]
[v1] Fri, 7 Jun 2024 07:04:10 UTC (861 KB)


 

bnew

Veteran
Joined
Nov 1, 2015
Messages
51,805
Reputation
7,926
Daps
148,742