bnew


It’s time for developers and enterprises to build with Gemini Pro


Dec 13, 2023
3 min read
Learn more about how to integrate Gemini Pro into your app or business at ai.google.dev.

Jeanine Banks

VP & GM, Developer X and DevRel

Burak Gokturk
VP & GM, Cloud AI & Industry Solutions



A graphic with text that says Build with Gemini

Last week, we announced Gemini, our largest and most capable AI model and the next step in our journey to make AI more helpful for everyone. It comes in three sizes: Ultra, Pro and Nano. We've already started rolling out Gemini in our products: Gemini Nano is in Android, starting with Pixel 8 Pro, and a specifically tuned version of Gemini Pro is in Bard.


Today, we’re making Gemini Pro available for developers and enterprises to build for your own use cases, and we’ll be further fine-tuning it in the weeks and months ahead as we listen and learn from your feedback.

Gemini Pro is available today

The first version of Gemini Pro is now accessible via the Gemini API and here’s more about it:

  • Gemini Pro outperforms other similarly-sized models on research benchmarks.
  • Today’s version comes with a 32K context window for text, and future versions will have a larger context window.
  • It’s free to use right now, within limits, and it will be competitively priced.
  • It comes with a range of features: function calling, embeddings, semantic retrieval and custom knowledge grounding, and chat functionality.
  • It supports 38 languages across 180+ countries and territories worldwide.
  • In today’s release, Gemini Pro accepts text as input and generates text as output. We’ve also made a dedicated Gemini Pro Vision multimodal endpoint available today that accepts text and imagery as input, with text output.
  • SDKs are available for Gemini Pro to help you build apps that run anywhere. Python, Android (Kotlin), Node.js, Swift and JavaScript are all supported.

Gemini Pro has SDKs that help you build apps that run anywhere.​
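
To make this concrete, here is a minimal sketch of calling Gemini Pro from the Python SDK (the google-generativeai package and model names follow the ai.google.dev documentation at launch; the API key is a placeholder):

```python
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # placeholder; create a key in Google AI Studio

# Text in, text out with Gemini Pro
model = genai.GenerativeModel("gemini-pro")
response = model.generate_content("Write a two-line summary of the Gemini launch.")
print(response.text)

# Text + image in, text out via the dedicated Gemini Pro Vision endpoint
# import PIL.Image
# vision = genai.GenerativeModel("gemini-pro-vision")
# print(vision.generate_content(
#     [PIL.Image.open("photo.jpg"), "Describe this image"]).text)
```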

Google AI Studio: The fastest way to build with Gemini

Google AI Studio is a free, web-based developer tool that enables you to quickly develop prompts and then get an API key to use in your app development. You can sign into Google AI Studio with your Google account and take advantage of the free quota, which allows 60 requests per minute — 20x more than other free offerings. When you’re ready, you can simply click on “Get code” to transfer your work to your IDE of choice, or use one of the quickstart templates available in Android Studio, Colab or Project IDX. To help us improve product quality, when you use the free quota, your API and Google AI Studio input and output may be accessible to trained reviewers. This data is de-identified from your Google account and API key.



Build with Vertex AI on Google Cloud

When it's time for a fully-managed AI platform, you can easily transition from Google AI Studio to Vertex AI, which allows for customization of Gemini with full data control and benefits from additional Google Cloud features for enterprise security, safety, privacy and data governance and compliance.

With Vertex AI, you will have access to the same Gemini models, and will be able to:
  • Tune and distill Gemini with your own company’s data, and augment it with grounding to include up-to-the-minute information and extensions to take real-world actions.
  • Build Gemini-powered search and conversational agents in a low code / no code environment, including support for retrieval-augmented generation (RAG), blended search, embeddings, conversation playbooks and more.
  • Deploy with confidence. We never train our models on inputs or outputs from Google Cloud customers. Your data and IP are always your data and IP.
To read more about our new Vertex AI capabilities, visit the Google Cloud blog.
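
For teams that start on Vertex AI instead, the equivalent call looks roughly like the sketch below (import paths follow the preview Vertex AI SDK at launch and may change; the project and location values are placeholders):

```python
import vertexai
from vertexai.preview.generative_models import GenerativeModel

vertexai.init(project="your-gcp-project", location="us-central1")  # placeholders

model = GenerativeModel("gemini-pro")
response = model.generate_content("Draft a short answer from our product FAQ.")
print(response.text)
```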

Gemini Pro pricing

Right now, developers have free access to Gemini Pro and Gemini Pro Vision through Google AI Studio, with up to 60 requests per minute, making it suitable for most app development needs. Vertex AI developers can try the same models, with the same rate limits, at no cost until general availability early next year, after which there will be a charge per 1,000 characters or per image across Google AI Studio and Vertex AI.

A screenshot of input and output prices for Gemini Pro.


Big impact, small price: Because of our investments in TPUs, Gemini Pro can be served more efficiently.​

Looking ahead

We’re excited that Gemini is now available to developers and enterprises. As we continue to fine-tune it, your feedback will help us improve. You can learn more and start building with Gemini on ai.google.dev, or use Vertex AI’s robust capabilities on your own data with enterprise-grade controls.

Early next year, we’ll launch Gemini Ultra, our largest and most capable model for highly complex tasks, after further fine-tuning, safety testing and gathering valuable feedback from partners. We’ll also bring Gemini to more of our developer platforms like Chrome and Firebase.

We’re excited to see what you build with Gemini.
 

bnew


Imagen 2

Our most advanced text-to-image technology

Imagen 2 is our most advanced text-to-image diffusion technology, delivering high-quality, photorealistic outputs that are closely aligned and consistent with the user’s prompt. It can generate more lifelike images by using the natural distribution of its training data, instead of adopting a pre-programmed style.

Imagen 2’s powerful text-to-image technology is available for developers and Cloud customers via the Imagen API in Google Cloud Vertex AI.
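
As a rough sketch of what calling the Imagen API can look like (the model identifier, import path, and method names below are assumptions based on the preview Vertex AI vision SDK, not details confirmed by this post; check the current docs before use):

```python
import vertexai
from vertexai.preview.vision_models import ImageGenerationModel

vertexai.init(project="your-gcp-project", location="us-central1")  # placeholders

model = ImageGenerationModel.from_pretrained("imagegeneration@002")  # assumed model ID
images = model.generate_images(
    prompt="A jellyfish on a dark blue background",
    number_of_images=1,
)
images[0].save("jellyfish.png")
```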

The Google Arts and Culture team is also deploying our Imagen 2 technology in their Cultural Icons experiment, allowing users to explore, learn and test their cultural knowledge with the help of Google AI.

Prompt: A shot of a 32-year-old female, up and coming conservationist in a jungle; athletic with short, curly hair and a warm smile

Prompt: A jellyfish on a dark blue background

Prompt: Small canvas oil painting of an orange on a chopping board. Light is passing through orange segments, casting an orange light across part of the chopping board. There is a blue and white cloth in the background. Caustics, bounce light, expressive brush strokes



Improved image-caption understanding

Text-to-image models learn to generate images that match a user’s prompt from details in their training datasets’ images and captions. But the quality of detail and accuracy in these pairings can vary widely for each image and caption.

To help create higher-quality and more accurate images that better align to a user’s prompt, further description was added to image captions in Imagen 2’s training dataset, helping Imagen 2 learn different captioning styles and generalize to better understand a broad range of user prompts.

These enhanced image-caption pairings help Imagen 2 better understand the relationship between images and words — increasing its understanding of context and nuance.

Here are examples of Imagen 2’s prompt understanding:

Prompt: “Soft purl the streams, the birds renew their notes, And through the air their mingled music floats.” (A Hymn to the Evening by Phillis Wheatley)

AI generated image of a painted underwater scene.

Prompt: “Consider the subtleness of the sea; how its most dreaded creatures glide under water, unapparent for the most part, and treacherously hidden beneath the loveliest tints of azure." (Moby-Dick by Herman Melville)

AI generated photo-realistic image of a singing robin

Prompt: “The robin flew from his swinging spray of ivy on to the top of the wall and he opened his beak and sang a loud, lovely trill, merely to show off. Nothing in the world is quite as adorably lovely as a robin when he shows off - and they are nearly always doing it." (The Secret Garden by Frances Hodgson Burnett)



More realistic image generation

Imagen 2’s dataset and model advances have delivered improvements in many of the areas that text-to-image tools often struggle with, including rendering realistic hands and human faces and keeping images free of distracting visual artifacts.

Examples of Imagen 2 generating realistic hands and human faces.

We trained a specialized image aesthetics model based on human preferences for qualities like good lighting, framing, exposure, sharpness, and more. Each image was given an aesthetics score which helped condition Imagen 2 to give more weight to images in its training dataset that align with qualities humans prefer. This technique improves Imagen 2’s ability to generate higher-quality images.

AI-generated images using the prompt “Flower”, with lower aesthetics scores (left) to higher scores (right).
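
The post does not spell out the mechanism, but one simple way to "give more weight" to high-scoring images is to sample training examples in proportion to their aesthetics score. The sketch below is our illustration of that idea, not Imagen 2's actual pipeline; the field name and weighting rule are assumptions:

```python
import random

def sample_weighted_batch(dataset, batch_size=32):
    """Toy illustration: draw training images in proportion to their
    aesthetics score, so well-lit, well-framed examples are seen more
    often. Not Imagen 2's actual conditioning scheme."""
    weights = [example["aesthetic_score"] for example in dataset]  # assumed field
    return random.choices(dataset, weights=weights, k=batch_size)

# Dummy records: "a.png" is drawn far more often than "b.png"
data = [{"image": "a.png", "aesthetic_score": 0.9},
        {"image": "b.png", "aesthetic_score": 0.2}]
batch = sample_weighted_batch(data, batch_size=4)
```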



Fluid style conditioning

Imagen 2’s diffusion-based techniques provide a high degree of flexibility, making it easier to control and adjust the style of an image. By providing reference style images in combination with a text prompt, we can condition Imagen 2 to generate new imagery that follows the same style.

A visualization of how Imagen 2 makes it easier to control the output style by using reference images alongside a text prompt.



Advanced inpainting and outpainting

Imagen 2 also enables image editing capabilities like ‘inpainting’ and ‘outpainting’. By providing a reference image and an image mask, users can generate new content directly into the original image with a technique called inpainting, or extend the original image beyond its borders with outpainting. This technology is planned for Google Cloud’s Vertex AI in the new year.

Imagen 2 can generate new content directly into the original image with inpainting.

Imagen 2 can extend the original image beyond its borders with outpainting.



Responsible by design

To help mitigate the potential risks and challenges of our text-to-image generative technology, we set robust guardrails in place, from design and development to deployment in our products.

Imagen 2 is integrated with SynthID, our cutting-edge toolkit for watermarking and identifying AI-generated content, enabling allowlisted Google Cloud customers to add an imperceptible digital watermark directly into the pixels of the image, without compromising image quality. This allows the watermark to remain detectable by SynthID, even after applying modifications like filters, cropping, or saving with lossy compression schemes.

Before we release capabilities to users, we conduct robust safety testing to minimize the risk of harm. From the outset, we invested in training data safety for Imagen 2, and added technical guardrails to limit problematic outputs like violent, offensive, or sexually explicit content. We apply safety checks to training data, input prompts, and system-generated outputs at generation time. For example, we’re applying comprehensive safety filters to avoid generating potentially problematic content, such as images of named individuals. As we are expanding the capabilities and launches of Imagen 2, we are also continuously evaluating them for safety.




Acknowledgements

This work was made possible by key research and engineering contributions from:

Aäron van den Oord, Ali Razavi, Benigno Uria, Çağlar Ünlü, Charlie Nash, Chris Wolff, Conor Durkan, David Ding, Dawid Górny, Evgeny Gladchenko, Felix Riedel, Hang Qi, Jacob Kelly, Jakob Bauer, Jeff Donahue, Junlin Zhang, Mateusz Malinowski, Mikołaj Bińkowski, Pauline Luc, Robert Riachi, Robin Strudel, Sander Dieleman, Tobenna Peter Igwe, Yaroslav Ganin, Zach Eaton-Rosen.

Thanks to: Ben Bariach, Dawn Bloxwich, Ed Hirst, Elspeth White, Gemma Jennings, Jenny Brennan, Komal Singh, Luis C. Cobo, Miaosen Wang, Nick Pezzotti, Nicole Brichtova, Nidhi Vyas, Nina Anderson, Norman Casagrande, Sasha Brown, Sven Gowal, Tulsee Doshi, Will Hawkins, Yelin Kim, Zahra Ahmed for driving delivery; Douglas Eck, Nando de Freitas, Oriol Vinyals, Eli Collins, Demis Hassabis for their advice.

Thanks also to many others who contributed across Google DeepMind, including our partners in Google.
 

bnew






Model Summary

Phi-2 is a Transformer with 2.7 billion parameters. It was trained using the same data sources as Phi-1.5, augmented with a new data source that consists of various NLP synthetic texts and filtered websites (for safety and educational value). When assessed against benchmarks testing common sense, language understanding, and logical reasoning, Phi-2 showcased near state-of-the-art performance among models with fewer than 13 billion parameters.

Our model hasn't been fine-tuned through reinforcement learning from human feedback. The intention behind crafting this open-source model is to provide the research community with a non-restricted small model to explore vital safety challenges, such as reducing toxicity, understanding societal biases, enhancing controllability, and more.

Intended Uses

Phi-2 is intended for research purposes only. Given the nature of the training data, the Phi-2 model is best suited for prompts using the QA format, the chat format, and the code format.
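
For reference, a minimal sketch of running Phi-2 with Hugging Face Transformers (the microsoft/phi-2 model ID and the Instruct/Output prompt style follow the model card; generation settings are illustrative):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained(
    "microsoft/phi-2", torch_dtype="auto", trust_remote_code=True
)
tokenizer = AutoTokenizer.from_pretrained("microsoft/phi-2", trust_remote_code=True)

# QA-style prompt, one of the formats the model is best suited for
prompt = "Instruct: Explain logical reasoning in one sentence.\nOutput:"
inputs = tokenizer(prompt, return_tensors="pt", return_attention_mask=False)
outputs = model.generate(**inputs, max_new_tokens=60)
print(tokenizer.batch_decode(outputs)[0])
```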


 

bnew


AI as good as doctors at checking X-rays - study


11th December 2023, 01:30 EST

Lee Bottomley
BBC News West Midlands


University of Warwick
The software for checking X-rays was trained using 2.8m images and is highly accurate, researchers said


Artificial Intelligence (AI) can analyse X-rays and diagnose medical issues just as well as doctors, a study has claimed.

Software was trained using chest X-rays from more than 1.5m patients, and scanned for 37 possible conditions.

For 35 of the 37 conditions, it was as accurate as, or more accurate than, doctors' analysis at the time the image was taken, the University of Warwick said.

The AI could reduce doctors' workload and delays in diagnosis, and offer radiologists the "ultimate second opinion", researchers added.

The software understood that some abnormalities for which it scanned were more serious than others, and could flag the most urgent to medics, the university said.

To check the results were accurate, more than 1,400 X-rays analysed by the software were cross-examined by senior radiologists.

They then compared the diagnoses made by the AI with those made by radiologists at the time.


'Future of medicine'


The software, called X-Raydar, removed human error and bias, said lead author Giovanni Montana, Professor of Data Science at the University of Warwick.

"If a patient is referred for an X-ray with a heart problem, doctors will inevitably focus on the heart over the lungs," he said.

"This is totally understandable but runs the risk of undetected problems in other areas."

AI such as this would be the "future of medicine" and act as a "co-pilot for busy doctors", said co-author, Professor Vicky Goh of King’s College London.

The AI X-ray tool was a collaboration between the University of Warwick, King’s College London and the NHS, and was funded by the Wellcome Trust.

The software was available open source for non-commercial use to increase the pace of research development, the university added.

 

bnew








FunSearch: Making new discoveries in mathematical sciences using Large Language Models


Published

14 DECEMBER 2023

Authors

Alhussein Fawzi and Bernardino Romera Paredes


By searching for “functions” written in computer code, FunSearch made the first discoveries in open problems in mathematical sciences using LLMs

Large Language Models (LLMs) are useful assistants - they excel at combining concepts and can read, write and code to help people solve problems. But could they discover entirely new knowledge?

As LLMs have been shown to “hallucinate” factually incorrect information, using them to make verifiably correct discoveries is a challenge. But what if we could harness the creativity of LLMs by identifying and building upon only their very best ideas?

Today, in a paper published in Nature, we introduce FunSearch, a method to search for new solutions in mathematics and computer science. FunSearch works by pairing a pre-trained LLM, whose goal is to provide creative solutions in the form of computer code, with an automated “evaluator”, which guards against hallucinations and incorrect ideas. By iterating back-and-forth between these two components, initial solutions “evolve” into new knowledge. The system searches for “functions” written in computer code; hence the name FunSearch.

This work represents the first time a new discovery has been made for challenging open problems in science or mathematics using LLMs. FunSearch discovered new solutions for the cap set problem, a longstanding open problem in mathematics. In addition, to demonstrate the practical usefulness of FunSearch, we used it to discover more effective algorithms for the “bin-packing” problem, which has ubiquitous applications such as making data centers more efficient.

Scientific progress has always relied on the ability to share new understanding. What makes FunSearch a particularly powerful scientific tool is that it outputs programs that reveal how its solutions are constructed, rather than just what the solutions are. We hope this can inspire further insights in the scientists who use FunSearch, driving a virtuous cycle of improvement and discovery.



Driving discovery through evolution with language models

FunSearch uses an evolutionary method powered by LLMs, which promotes and develops the highest scoring ideas. These ideas are expressed as computer programs, so that they can be run and evaluated automatically. First, the user writes a description of the problem in the form of code. This description comprises a procedure to evaluate programs, and a seed program used to initialize a pool of programs.

FunSearch is an iterative procedure; at each iteration, the system selects some programs from the current pool of programs, which are fed to an LLM. The LLM creatively builds upon these, and generates new programs, which are automatically evaluated. The best ones are added back to the pool of existing programs, creating a self-improving loop. FunSearch uses Google’s PaLM 2, but it is compatible with other LLMs trained on code.

The FunSearch process. The LLM is shown a selection of the best programs it has generated so far (retrieved from the programs database), and asked to generate an even better one. The programs proposed by the LLM are automatically executed, and evaluated. The best programs are added to the database, for selection in subsequent cycles. The user can at any point retrieve the highest-scoring programs discovered so far.
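
In code, the loop just described might look like this minimal, single-threaded sketch (illustrative only: llm_propose stands in for a code-generating LLM such as PaLM 2, and the real system adds an island-based diversity strategy and parallel evaluation):

```python
import random

def funsearch(seed_program, evaluate, llm_propose,
              iterations=1000, pool_size=100):
    """Minimal sketch of the FunSearch loop: evaluate, select, mutate via LLM."""
    pool = [(evaluate(seed_program), seed_program)]
    for _ in range(iterations):
        # show the LLM a few of the better programs found so far
        sampled = random.sample(pool, k=min(3, len(pool)))
        exemplars = [prog for _, prog in sorted(sampled, reverse=True)]
        candidate = llm_propose(exemplars)      # LLM writes a new program
        try:
            score = evaluate(candidate)         # evaluator guards against hallucination
        except Exception:
            continue                            # broken programs are discarded
        pool.append((score, candidate))
        pool = sorted(pool, reverse=True)[:pool_size]  # keep the highest scorers
    return pool[0]  # (best_score, best_program)
```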

Discovering new mathematical knowledge and algorithms in different domains is a notoriously difficult task, and largely beyond the power of the most advanced AI systems. To tackle such challenging problems with FunSearch, we introduced multiple key components. Instead of starting from scratch, we start the evolutionary process with common knowledge about the problem, and let FunSearch focus on finding the most critical ideas to achieve new discoveries. In addition, our evolutionary process uses a strategy to improve the diversity of ideas in order to avoid stagnation. Finally, we run the evolutionary process in parallel to improve the system efficiency.



Breaking new ground in mathematics

We first address the cap set problem, an open challenge, which has vexed mathematicians in multiple research areas for decades. Renowned mathematician Terence Tao once described it as his favorite open question. We collaborated with Jordan Ellenberg, a professor of mathematics at the University of Wisconsin–Madison, and author of an important breakthrough on the cap set problem.

The problem consists of finding the largest set of points (called a cap set) in a high-dimensional grid, where no three points lie on a line. This problem is important because it serves as a model for other problems in extremal combinatorics - the study of how large or small a collection of numbers, graphs or other objects could be. Brute-force computing approaches to this problem don’t work – the number of possibilities to consider quickly becomes greater than the number of atoms in the universe.
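
To make the setting concrete: in Z_3^n, three distinct points lie on a line exactly when their coordinate-wise sum is zero mod 3, so candidate solutions are cheap to verify even though finding large ones is hard. A brute-force checker for small instances (our illustration, not code from the paper):

```python
from itertools import combinations

def is_cap_set(points, n):
    """Check that no three distinct points in Z_3^n are collinear.
    Distinct a, b, c lie on a line iff a + b + c == 0 (mod 3)
    in every coordinate."""
    for a, b, c in combinations(points, 3):
        if all((a[i] + b[i] + c[i]) % 3 == 0 for i in range(n)):
            return False
    return True

# These four points form a maximum cap set in dimension n = 2
print(is_cap_set([(0, 0), (0, 1), (1, 0), (1, 1)], 2))  # True
```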

FunSearch generated solutions - in the form of programs - that in some settings discovered the largest cap sets ever found. This represents the largest increase in the size of cap sets in the past 20 years. Moreover, FunSearch outperformed state-of-the-art computational solvers, as this problem scales well beyond their current capabilities.

Interactive figure showing the evolution from the seed program (top) to a new higher-scoring function (bottom). Each circle is a program, with its size proportional to the score assigned to it. Only ancestors of the program at the bottom are shown. The corresponding function produced by FunSearch for each node is shown on the right (see full program using this function in the paper).

These results demonstrate that the FunSearch technique can take us beyond established results on hard combinatorial problems, where intuition can be difficult to build. We expect this approach to play a role in new discoveries for similar theoretical problems in combinatorics, and in the future it may open up new possibilities in fields such as communication theory.



FunSearch favors concise and human-interpretable programs

While discovering new mathematical knowledge is significant in itself, the FunSearch approach offers an additional benefit over traditional computer search techniques. That’s because FunSearch isn’t a black box that merely generates solutions to problems. Instead, it generates programs that describe how those solutions were arrived at. This show-your-working approach is how scientists generally operate, with new discoveries or phenomena explained through the process used to produce them.

FunSearch favors finding solutions represented by highly compact programs - solutions with a low Kolmogorov complexity. Short programs can describe very large objects, allowing FunSearch to scale to large needle-in-a-haystack problems. Moreover, this makes FunSearch’s program outputs easier for researchers to comprehend. Ellenberg said: “FunSearch offers a completely new mechanism for developing strategies of attack. The solutions generated by FunSearch are far conceptually richer than a mere list of numbers. When I study them, I learn something.”

What’s more, this interpretability of FunSearch’s programs can provide actionable insights to researchers. As we used FunSearch we noticed, for example, intriguing symmetries in the code of some of its high-scoring outputs. This gave us a new insight into the problem, and we used this insight to refine the problem introduced to FunSearch, resulting in even better solutions. We see this as an exemplar for a collaborative procedure between humans and FunSearch across many problems in mathematics.

Left: Inspecting code generated by FunSearch yielded further actionable insights (highlights added by us). Right: The raw “admissible” set constructed using the (much shorter) program on the left.




Addressing a notoriously hard challenge in computing

Encouraged by our success with the theoretical cap set problem, we decided to explore the flexibility of FunSearch by applying it to an important practical challenge in computer science. The “bin packing” problem looks at how to pack items of different sizes into the smallest number of bins. It sits at the core of many real-world problems, from loading containers with items to allocating compute jobs in data centers to minimize costs.

The online bin-packing problem is typically addressed using algorithmic rules-of-thumb (heuristics) based on human experience. But finding a set of rules for each specific situation - with differing sizes, timing, or capacity – can be challenging. Despite being very different from the cap set problem, setting up FunSearch for this problem was easy. FunSearch delivered an automatically tailored program (adapting to the specifics of the data) that outperformed established heuristics – using fewer bins to pack the same number of items.

Illustrative example of bin packing using existing heuristic – Best-fit heuristic (left), and using a heuristic discovered by FunSearch (right).
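
To ground the idea, the sketch below factors an online packer around a priority function, the component FunSearch evolves, with classic best-fit expressed as one such function (our illustration, not code from the paper):

```python
def pack(items, capacity, priority):
    """Online bin packing: put each item in the feasible bin the priority
    function scores highest; open a new bin if nothing fits."""
    bins = []  # remaining capacity of each open bin
    for item in items:
        feasible = [i for i, space in enumerate(bins) if space >= item]
        if feasible:
            best = max(feasible, key=lambda i: priority(item, bins[i]))
            bins[best] -= item
        else:
            bins.append(capacity - item)
    return len(bins)

# Best-fit as a priority function: prefer the tightest remaining space.
# FunSearch searches the space of functions like this one.
best_fit = lambda item, space: -space

print(pack([4, 8, 1, 4, 2, 1], capacity=10, priority=best_fit))  # 2
```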

Hard combinatorial problems like online bin packing can be tackled using other AI approaches, such as neural networks and reinforcement learning. Such approaches have proven to be effective too, but may also require significant resources to deploy. FunSearch, on the other hand, outputs code that can be easily inspected and deployed, meaning its solutions could potentially be slotted into a variety of real-world industrial systems to bring swift benefits.



LLM-driven discovery for science and beyond

FunSearch demonstrates that if we safeguard against LLMs’ hallucinations, the power of these models can be harnessed not only to produce new mathematical discoveries, but also to reveal potentially impactful solutions to important real-world problems.

We envision that for many problems in science and industry - longstanding or new - generating effective and tailored algorithms using LLM-driven approaches will become common practice.

Indeed, this is just the beginning. FunSearch will improve as a natural consequence of the wider progress of LLMs, and we will also be working to broaden its capabilities to address a variety of society’s pressing scientific and engineering challenges.


 

bnew


Weak-to-strong generalization


We present a new research direction for superalignment, together with promising initial results: can we leverage the generalization properties of deep learning to control strong models with weak supervisors?

December 14, 2023


A core challenge for aligning future superhuman AI systems (superalignment) is that humans will need to supervise AI systems much smarter than them. We study a simple analogy: can small models supervise large models? We show that we can use a GPT-2-level model to elicit most of GPT-4’s capabilities—close to GPT-3.5-level performance—generalizing correctly even to hard problems where the small model failed. This opens up a new research direction that allows us to directly tackle a central challenge of aligning future superhuman models while making iterative empirical progress today.

The superalignment problem

We believe superintelligence—AI vastly smarter than humans—could be developed within the next ten years. However, we still do not know how to reliably steer and control superhuman AI systems. Solving this problem is essential for ensuring that even the most advanced AI systems in the future remain safe and beneficial to humanity.

We formed the Superalignment team earlier this year to solve this problem of superintelligence alignment. Today, we are releasing the team’s first paper, which introduces a new research direction for empirically aligning superhuman models.

Current alignment methods, such as reinforcement learning from human feedback (RLHF), rely on human supervision. However, future AI systems will be capable of extremely complex and creative behaviors that will make it hard for humans to reliably supervise them. For example, superhuman models may be able to write millions of lines of novel—and potentially dangerous—computer code that would be very hard even for expert humans to understand.

Relative to superhuman AI models, humans will be “weak supervisors.” This is a core challenge for AGI alignment: how can weak supervisors trust and control substantially stronger models?


Our setup

To make progress on this core challenge, we propose an analogy we can empirically study today: can we use a smaller (less capable) model to supervise a larger (more capable) model?


A simple analogy for superalignment: In traditional machine learning (ML), humans supervise AI systems weaker than themselves (left). To align superintelligence, humans will instead need to supervise AI systems smarter than them (center). We cannot directly study this problem today, but we can study a simple analogy: can small models supervise larger models (right)?

Naively, we might not expect a strong model to perform better than the weak supervisor that provides its training signal—it may simply learn to imitate all the errors the weak supervisor makes. On the other hand, strong pretrained models have excellent raw capabilities—we don't need to teach them new tasks from scratch, we just need to elicit their latent knowledge. The critical question is then: will the strong model generalize according to the weak supervisor's underlying intent—leveraging its full capabilities to solve the task even on difficult problems where the weak supervisor can only provide incomplete or flawed training labels?


Our results


Typical weak-to-strong generalization across NLP benchmarks: We use a GPT-2-level model as a weak supervisor to finetune GPT-4.

We can significantly improve generalization in many settings. We use a simple method that encourages the strong model to be more confident—including confidently disagreeing with the weak supervisor if necessary. When we supervise GPT-4 with a GPT-2-level model using this method on NLP tasks, the resulting model typically performs somewhere between GPT-3 and GPT-3.5. We are able to recover much of GPT-4’s capabilities with only much weaker supervision.
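
For intuition, here is a sketch of that confidence-encouraging objective as we read it: a mixture of cross-entropy against the weak labels and against the strong model's own hardened predictions (the mixing weight and the hardening rule below are assumptions):

```python
import torch.nn.functional as F

def weak_to_strong_loss(strong_logits, weak_soft_labels, alpha=0.5):
    """Auxiliary-confidence loss sketch: fit the weak supervisor's soft
    labels, but also reward the strong model for standing by its own
    confident predictions, so it can overrule weak-label errors."""
    # term 1: match the (possibly wrong) soft labels from the weak model
    weak_term = F.cross_entropy(strong_logits, weak_soft_labels)
    # term 2: match the strong model's own hardened (argmax) predictions
    hardened = strong_logits.argmax(dim=-1).detach()
    self_term = F.cross_entropy(strong_logits, hardened)
    return (1 - alpha) * weak_term + alpha * self_term
```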

This method is a proof of concept with important limitations; for example, it still doesn’t work on ChatGPT preference data. However, we also find signs of life with other approaches, such as optimal early stopping and bootstrapping from small to intermediate to large models.

Collectively, our results suggest that (1) naive human supervision—such as reinforcement learning from human feedback (RLHF)—could scale poorly to superhuman models without further work, but (2) it is feasible to substantially improve weak-to-strong generalization.


Research opportunities

There are still important disanalogies between our current empirical setup and the ultimate problem of aligning superhuman models. For example, it may be easier for future models to imitate weak human errors than for current strong models to imitate current weak model errors, which could make generalization harder in the future.

Nevertheless, we believe our setup captures some key difficulties of aligning future superhuman models, enabling us to start making empirical progress on this problem today. There are many promising directions for future work, including fixing the disanalogies in our setup, developing better scalable methods, and advancing our scientific understanding of when and how we should expect good weak-to-strong generalization.

We believe this is an exciting opportunity for the ML research community to make progress on alignment. To kickstart more research in this area:


  • We are releasing open source code to make it easy to get started with weak-to-strong generalization experiments today.
  • We are launching a $10 million grants program for graduate students, academics, and other researchers to work on superhuman AI alignment broadly. We’re especially excited to support research related to weak-to-strong generalization.
Figuring out how to align future superhuman AI systems to be safe has never been more important, and it is now easier than ever to make empirical progress on this problem. We are excited to see what breakthroughs researchers discover.



 

bnew


News Execs: Google’s AI Integration Could ‘Kill Journalism’

BY PYMNTS | DECEMBER 14, 2023

News publishers are reportedly alarmed by what Google’s AI means for their business.

As the Wall Street Journal (WSJ) reported Thursday (Dec. 14), Google’s move to integrate its “Search Generative Experience” artificial intelligence (AI) tool has underlined the dangers for media companies of depending on the tech giant to bring their stories to readers.

The report notes that news outlets are already suffering from a drop in traffic from social media sites, with both Facebook and Twitter moving away from news distribution. Google’s AI-powered search could make things worse, as Google generates almost 40% of publishers’ traffic, the report said, citing a WSJ analysis of data from Similarweb.

The WSJ also cites findings from a task force at the news magazine The Atlantic, which gets about 40% of its web traffic from Google searches.

That task force studied what would happen if Google integrated AI into search, and found that 75% of the time, the AI-powered search would deliver a complete answer to a user’s question and deprive the Atlantic of that potential traffic.

“AI and large language models have the potential to destroy journalism and media brands as we know them,” said Mathias Döpfner, chairman and CEO of Axel Springer.

His company, which owns Business Insider and Politico, announced Wednesday (Dec. 13) a deal to license content to OpenAI.

“We want to explore the opportunities of AI-empowered journalism — to bring quality, societal relevance and the business model of journalism to the next level,” Döpfner said in announcing that partnership.

According to the WSJ report, Google says its AI product is still being developed, though publishers say they have seen enough to determine they’ll lose 20% to 40% of Google-generated traffic. Google has said it is prioritizing sending traffic to publishers.

However, news publishers aren’t just concerned about AI’s impact on traffic. Earlier this year, a group of major news media organizations called for the regulation of AI model operators on their use of copyrighted materials.

Industry bodies such as the News Media Alliance and the European Publishers’ Council are pushing for a framework that lets them “collectively negotiate” with AI model operators regarding the use of their intellectual property, according to a letter published in August.

“Generative AI and large language models … disseminate that content and information to their users, often without any consideration of, remuneration to, or attribution to the original creators,” the letter said. “Such practices undermine the media industry’s core business models.”
 