bnew (Veteran)

Our approach to AI safety​

Ensuring that AI systems are built, deployed, and used safely is critical to our mission.

OpenAI is committed to keeping powerful AI safe and broadly beneficial. We know our AI tools provide many benefits to people today. Our users around the world have told us that ChatGPT helps to increase their productivity, enhance their creativity, and offer tailored learning experiences. We also recognize that, like any technology, these tools come with real risks—so we work to ensure safety is built into our system at all levels.

Building increasingly safe AI systems​

Prior to releasing any new system we conduct rigorous testing, engage external experts for feedback, work to improve the model's behavior with techniques like reinforcement learning with human feedback, and build broad safety and monitoring systems.
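
As a rough illustration of the reinforcement learning with human feedback mentioned above: the reward-modeling step is often described with a pairwise preference loss that pushes the score of a human-preferred response above the rejected one. The sketch below is generic and hypothetical, not OpenAI's training code; the scores are made up.

import torch
import torch.nn.functional as F

def reward_model_loss(score_chosen: torch.Tensor, score_rejected: torch.Tensor) -> torch.Tensor:
    # Bradley-Terry style pairwise loss: minimize -log sigmoid(chosen - rejected)
    # so the reward model ranks the human-preferred answer higher.
    return -F.logsigmoid(score_chosen - score_rejected).mean()

# Made-up reward scores for a small batch of preference comparisons.
chosen = torch.tensor([1.2, 0.4, 0.9])
rejected = torch.tensor([0.3, 0.5, -0.1])
print(reward_model_loss(chosen, rejected))  # scalar loss to minimize during reward-model training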

For example, after our latest model, GPT-4, finished training, we spent more than 6 months working across the organization to make it safer and more aligned prior to releasing it publicly.

We believe that powerful AI systems should be subject to rigorous safety evaluations. Regulation is needed to ensure that such practices are adopted, and we actively engage with governments on the best form such regulation could take.

Learning from real-world use to improve safeguards​

We work hard to prevent foreseeable risks before deployment; however, there is a limit to what we can learn in a lab. Despite extensive research and testing, we cannot predict all of the beneficial ways people will use our technology, nor all the ways people will abuse it. That’s why we believe that learning from real-world use is a critical component of creating and releasing increasingly safe AI systems over time.

We cautiously and gradually release new AI systems—with substantial safeguards in place—to a steadily broadening group of people and make continuous improvements based on the lessons we learn.

We make our most capable models available through our own services and through an API so developers can build this technology directly into their apps. This allows us to monitor for and take action on misuse, and continually build mitigations that respond to the real ways people misuse our systems—not just theories about what misuse might look like.
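
For developers, a minimal call against that API looked roughly like the sketch below at the time this was written, using the pre-1.0 openai Python package; the API key, model name, and prompt are placeholders.

import openai  # pip install openai (pre-1.0 interface shown)

openai.api_key = "YOUR_API_KEY"  # placeholder

response = openai.ChatCompletion.create(
    model="gpt-4",  # assumes your account has access to this model
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarize why staged rollouts help AI safety."},
    ],
)
print(response["choices"][0]["message"]["content"])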

Real-world use has also led us to develop increasingly nuanced policies against behavior that represents a genuine risk to people while still allowing for the many beneficial uses of our technology.

Crucially, we believe that society must have time to update and adjust to increasingly capable AI, and that everyone who is affected by this technology should have a significant say in how AI develops further. Iterative deployment has helped us bring various stakeholders into the conversation about the adoption of AI technology more effectively than if they hadn't had firsthand experience with these tools.

Protecting children​

One critical focus of our safety efforts is protecting children. We require that users be 18 or older, or 13 or older with parental approval, to use our AI tools, and we are looking into verification options.

We do not permit our technology to be used to generate hateful, harassing, violent, or adult content, among other categories. Our latest model, GPT-4, is 82% less likely to respond to requests for disallowed content than GPT-3.5, and we have established a robust system to monitor for abuse. GPT-4 is now available to ChatGPT Plus subscribers, and we hope to make it available to even more people over time.

We have made significant efforts to minimize the potential for our models to generate content that harms children. For example, when users try to upload Child Sexual Abuse Material to our image tools, we block it and report it to the National Center for Missing and Exploited Children.

In addition to our default safety guardrails, we work with developers like the non-profit Khan Academy—which has built an AI-powered assistant that functions as both a virtual tutor for students and a classroom assistant for teachers—on tailored safety mitigations for their use case. We are also working on features that will allow developers to set stricter standards for model outputs to better support developers and users who want such functionality.
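
The stricter output controls mentioned above aren't detailed here, but one guardrail developers can already layer in is the moderation endpoint. A minimal sketch with the pre-1.0 openai Python package; the is_allowed helper and its use are illustrative, not part of the API.

import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

def is_allowed(text: str) -> bool:
    # Ask the moderation endpoint whether the text violates the usage policies.
    result = openai.Moderation.create(input=text)
    return not result["results"][0]["flagged"]

if is_allowed("Explain photosynthesis to a ten-year-old."):
    print("Safe to pass along to the model.")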

Respecting privacy​

Our large language models are trained on a broad corpus of text that includes publicly available content, licensed content, and content generated by human reviewers. We don’t use data for selling our services, advertising, or building profiles of people—we use data to make our models more helpful for people. ChatGPT, for instance, improves by further training on the conversations people have with it.

While some of our training data includes personal information that is available on the public internet, we want our models to learn about the world, not private individuals. So we work to remove personal information from the training dataset where feasible, fine-tune models to reject requests for personal information of private individuals, and respond to requests from individuals to delete their personal information from our systems. These steps minimize the possibility that our models might generate responses that include the personal information of private individuals.

Improving factual accuracy​

Today’s large language models predict the next series of words based on patterns they have previously seen, including the text input the user provides. In some cases, the next most likely words may not be factually accurate.
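
A toy example of what next-word prediction means: given scores (logits) over a tiny vocabulary, the model converts them to probabilities and picks the most likely continuation, whether or not that continuation happens to be true. The vocabulary and numbers below are made up.

import numpy as np

# Hypothetical continuations of "The capital of France is ..." with made-up scores.
vocab = ["Paris", "Lyon", "purple", "1492"]
logits = np.array([4.0, 2.5, -1.0, 0.5])

probs = np.exp(logits - logits.max())
probs /= probs.sum()  # softmax: turn scores into probabilities

next_word = vocab[int(np.argmax(probs))]  # greedy decoding: take the most likely token
print(dict(zip(vocab, probs.round(3))), "->", next_word)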

Improving factual accuracy is a significant focus for OpenAI and many other AI developers, and we’re making progress. By leveraging user feedback on ChatGPT outputs that were flagged as incorrect as a main source of data, we have improved the factual accuracy of GPT-4, which is now 40% more likely to produce factual content than GPT-3.5.

When users sign up to use the tool, we strive to be as transparent as possible that ChatGPT may not always be accurate. However, we recognize that there is much more work to do to further reduce the likelihood of hallucinations and to educate the public on the current limitations of these AI tools.

Continued research and engagement​

We believe that a practical approach to solving AI safety concerns is to dedicate more time and resources to researching effective mitigations and alignment techniques and testing them against real-world abuse.
Importantly, we also believe that improving AI safety and capabilities should go hand in hand. Our best safety work to date has come from working with our most capable models because they are better at following users’ instructions and easier to steer or “guide.”

We will be increasingly cautious with the creation and deployment of more capable models, and will continue to enhance safety precautions as our AI systems evolve.

While we waited over 6 months to deploy GPT-4 in order to better understand its capabilities, benefits, and risks, it may sometimes be necessary to take longer than that to improve AI systems' safety. Therefore, policymakers and AI providers will need to ensure that AI development and deployment is governed effectively at a global scale, so no one cuts corners to get ahead. This is a daunting challenge requiring both technical and institutional innovation, but it’s one that we are eager to contribute to.

Addressing safety issues also requires extensive debate, experimentation, and engagement, including on the bounds of AI system behavior. We have and will continue to foster collaboration and open dialogue among stakeholders to create a safe AI ecosystem.
 

IIVI (Superstar)
Countries are really about to fall behind worse than they are right now.


In other news, from my experience I'm hoping this works because I've seen ChatGPT do this personally:
 


bnew (Veteran)

The newest version of ChatGPT passed the US medical licensing exam with flying colors — and diagnosed a 1 in 100,000 condition in seconds​


Hilary Brueck Apr 6, 2023, 4:03 PM


OpenAI CEO Sam Altman. OpenAI developed ChatGPT, and its most refined network yet, GPT-4. Jason Redmond / AFP via Getty Images

  • A doctor and Harvard computer scientist says GPT-4 has better clinical judgment than "many doctors."
  • The chatbot can diagnose rare conditions "just as I would," he said.
  • But GPT-4 can also make mistakes, and it hasn't taken the Hippocratic oath.

Dr. Isaac Kohane, who's both a computer scientist at Harvard and a physician, teamed up with two colleagues to test drive GPT-4, with one main goal: To see how the newest artificial intelligence model from OpenAI performed in a medical setting.

"I'm stunned to say: better than many doctors I've observed," he says in the forthcoming book, "The AI Revolution in Medicine," co-authored by independent journalist Carey Goldberg, and Microsoft vice president of research Peter Lee. (The authors say neither Microsoft nor OpenAI required any editorial oversight of the book, though Microsoft has invested billions of dollars into developing OpenAI's technologies.)

In the book, Kohane says GPT-4, which was released in March 2023 to paying subscribers, answers US medical licensing exam questions correctly more than 90% of the time. It's a much better test-taker than previous ChatGPT AI models, GPT-3 and -3.5, and a better one than some licensed doctors, too.

GPT-4 is not just a good test-taker and fact finder, though. It's also a great translator. In the book, it translates discharge information for a patient who speaks Portuguese and distills wonky technical jargon into something sixth graders could easily read.

As the authors explain with vivid examples, GPT-4 can also give doctors helpful suggestions about bedside manner, offering tips on how to talk to patients about their conditions in compassionate, clear language, and it can read lengthy reports or studies and summarize them in the blink of an eye. The tech can even explain its reasoning through problems in a way that requires some measure of what looks like human-style intelligence.

But if you ask GPT-4 how it does all this, it will likely tell you that all of its intelligence is still "limited to patterns in the data and does not involve true understanding or intentionality." That's what GPT-4 told the authors of the book, when they asked it if it could actually engage in causal reasoning. Even with such limitations, as Kohane discovered in the book, GPT-4 can mimic how doctors diagnose conditions with stunning — albeit imperfect — success.

How GPT-4 can diagnose like a doctor​


Isaac Kohane is a doctor and computer scientist focused on the intersection of medicine and AI. Matthew J. Lee/The Boston Globe via Getty Images

Kohane goes through a clinical thought experiment with GPT-4 in the book, based on a real-life case that involved a newborn baby he treated several years earlier. Giving the bot a few key details about the baby he gathered from a physical exam, as well as some information from an ultrasound and hormone levels, the machine was able to correctly diagnose a 1 in 100,000 condition called congenital adrenal hyperplasia "just as I would, with all my years of study and experience," Kohane wrote.

The doctor was both impressed and horrified.

"On the one hand, I was having a sophisticated medical conversation with a computational process," he wrote, "on the other hand, just as mind blowing was the anxious realization that millions of families would soon have access to this impressive medical expertise, and I could not figure out how we could guarantee or certify that GPT-4's advice would be safe or effective."

GPT-4 isn't always right — and it has no ethical compass

GPT-4 isn't always reliable, and the book is filled with examples of its blunders. They range from simple clerical errors, like misstating a BMI that the bot had correctly calculated moments earlier, to math mistakes like inaccurately "solving" a Sudoku puzzle, or forgetting to square a term in an equation. The mistakes are often subtle, and the system has a tendency to assert it is right, even when challenged. It's not a stretch to imagine how a misplaced number or miscalculated weight could lead to serious errors in prescribing, or diagnosis.

Like previous GPTs, GPT-4 can also "hallucinate" — the technical euphemism for when AI makes up answers, or disobeys requests.

When asked about this issue by the authors of the book, GPT-4 said, "I do not intend to deceive or mislead anyone, but I sometimes make mistakes or assumptions based on incomplete or inaccurate data. I also do not have the clinical judgment or the ethical responsibility of a human doctor or nurse."

One potential cross-check the authors suggest in the book is to start a new session with GPT-4, and have it "read over" and "verify" its own work with a "fresh set of eyes." This tactic sometimes works to reveal mistakes — though GPT-4 is somewhat reticent to admit when it's been wrong. Another error-catching suggestion is to command the bot to show you its work, so you can verify it, human-style.
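
A rough sketch of that fresh-session cross-check, using the pre-1.0 openai Python package; the prompts and the ask helper are illustrative, not a workflow prescribed by the book.

import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

def ask(messages):
    # Each call starts from only the messages passed in, i.e. a fresh session.
    reply = openai.ChatCompletion.create(model="gpt-4", messages=messages)
    return reply["choices"][0]["message"]["content"]

# First session: get an answer.
answer = ask([{"role": "user", "content": "A patient weighs 72 kg and is 1.8 m tall. What is their BMI?"}])

# Second session: no shared history, so the model reviews the work with a "fresh set of eyes".
review = ask([{"role": "user", "content": "Check the following answer for arithmetic errors and show your work:\n" + answer}])
print(review)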

It's clear that GPT-4 has the potential to free up precious time and resources in the clinic, allowing clinicians to be more present with patients, "instead of their computer screens," the authors write. But, they say, "we have to force ourselves to imagine a world with smarter and smarter machines, eventually perhaps surpassing human intelligence in almost every dimension. And then think very hard about how we want that world to work."
 

bnew (Veteran)

Koala: A Dialogue Model for Academic Research​

This repo contains the weights diff against the base LLaMA for the Koala model. Check out the following links to get started:

License​

The model weights are intended for academic research only, subject to the model License of LLaMA, Terms of Use of the data generated by OpenAI, and Privacy Practices of ShareGPT. Any other usage of the model weights, including but not limited to commercial usage, is strictly prohibited. Please contact us if you find any potential violations. Our training and inference code is released under the Apache License 2.0.
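
The repo ships its own scripts for recovering the weights; purely to illustrate what a "weights diff" means, recovery amounts to adding the released deltas back onto the corresponding base LLaMA parameters. The file paths below are placeholders, and real checkpoints are usually sharded rather than single files.

import torch

# Placeholders: the base LLaMA state dict and the released Koala diff.
base = torch.load("llama_base_state_dict.pt", map_location="cpu")
diff = torch.load("koala_weight_diff.pt", map_location="cpu")

# A weights diff stores (koala - base) per parameter, so adding it back
# reconstructs the Koala weights without redistributing LLaMA itself.
recovered = {name: base[name] + diff[name] for name in diff}
torch.save(recovered, "koala_recovered_state_dict.pt")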
 

bnew (Veteran)

Magic Copy is a Chrome extension that uses Meta's Segment Anything Model to extract a foreground object from an image and copy it to the clipboard.



https://chrome.google.com/webstore/detail/nnifclicibdhgakebbnbfmomniihfmkg

(This might not be available yet, as the extension is still in review.)

Alternatively, the extension can be installed manually:

  1. Download the latest magic-copy.zip from releases.
  2. Extract the ZIP file.
  3. In Chrome, go to chrome://extensions/, enable "Developer mode", and click "Load unpacked".
  4. Select the folder where the extension was extracted.

Implementation​

This extension uses the same procedure as the Segment Anything Model demo to extract a foreground object from an image. The only difference is that the extracted object is copied to the clipboard instead of being displayed on the page.
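
For reference, the underlying prediction step with the official segment-anything package looks roughly like the sketch below (the extension itself is browser code; the checkpoint path and click coordinates here are placeholders):

import cv2
import numpy as np
from segment_anything import sam_model_registry, SamPredictor

sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h_4b8939.pth")  # placeholder path
predictor = SamPredictor(sam)

image = cv2.cvtColor(cv2.imread("photo.jpg"), cv2.COLOR_BGR2RGB)
predictor.set_image(image)

# One foreground click at (x, y); label 1 means "foreground".
masks, scores, _ = predictor.predict(
    point_coords=np.array([[320, 240]]),
    point_labels=np.array([1]),
    multimask_output=True,
)
best = masks[int(np.argmax(scores))]  # highest-scoring mask
cutout = image * best[..., None]      # zero out everything outside the mask
cv2.imwrite("cutout.png", cv2.cvtColor(cutout.astype(np.uint8), cv2.COLOR_RGB2BGR))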
 

bnew (Veteran)

Marrying Grounding DINO with Segment Anything & Stable Diffusion & BLIP - Automatically Detect, Segment and Generate Anything with Image and Text Inputs

Grounded-Segment-Anything​

We plan to create a very interesting demo by combining Grounding DINO and Segment Anything! Right now, this is just a simple small project. We will continue to improve it and create more interesting demos.

Why this project?

  • Segment Anything is a strong segmentation model, but it needs prompts (like boxes or points) to generate masks.
  • Grounding DINO is a strong zero-shot detector that can generate high-quality boxes and labels from free-form text.
  • Combining the two models makes it possible to detect and segment anything from text inputs (see the sketch after this list)!
  • The combination of BLIP + GroundingDINO + SAM enables automatic labeling!
  • The combination of GroundingDINO + SAM + Stable-diffusion acts as a data factory, generating new data!
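
A rough sketch of that text-to-mask pipeline, assuming the groundingdino and segment-anything packages with locally downloaded config and checkpoint files; the paths, the prompt, and the thresholds below are placeholders.

import numpy as np
from groundingdino.util.inference import load_model, load_image, predict
from segment_anything import sam_model_registry, SamPredictor

# Placeholder config/checkpoint paths for both models.
dino = load_model("GroundingDINO_SwinT_OGC.py", "groundingdino_swint_ogc.pth")
sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h_4b8939.pth")

image_source, image = load_image("demo.jpg")  # numpy image + preprocessed tensor
boxes, logits, phrases = predict(
    model=dino, image=image, caption="dog", box_threshold=0.35, text_threshold=0.25
)

predictor = SamPredictor(sam)
predictor.set_image(image_source)

h, w = image_source.shape[:2]
for box in boxes:  # Grounding DINO returns normalized (cx, cy, w, h) boxes
    cx, cy, bw, bh = box.numpy()
    xyxy = np.array([(cx - bw / 2) * w, (cy - bh / 2) * h, (cx + bw / 2) * w, (cy + bh / 2) * h])
    masks, _, _ = predictor.predict(box=xyxy, multimask_output=False)
    # masks[0] is the binary mask for this detected phrase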
Grounded-SAM

Grounded-SAM + Stable-Diffusion Inpainting: Data-Factory, Generating New Data!

BLIP + Grounded-SAM: Automatic Label System!

Using BLIP to generate a caption and extract tags, and Grounded-SAM to generate the boxes and masks. Here's the demo output:



Imagine space

Some possible avenues for future work ...

  • Automatic image generation to construct new datasets.
  • Stronger foundation models with segmentation pre-training.
  • Collaboration with (Chat-)GPT.
  • A whole pipeline for automatically labeling images (with boxes and masks) and generating new images.
More Examples

🔥 What's New​

  • 🆕 Check out our related human-face-edit branch here. We'll keep updating this branch with more interesting features. Here are some examples:



 

bnew (Veteran)

MetaSeg: Packaged version of the Segment Anything repository​



This repo is a packaged version of the segment-anything model.

pip install metaseg

Usage​

from metaseg import SegAutoMaskGenerator

# If GPU memory is not enough, reduce points_per_side and points_per_batch.

# For an image
autoseg_image = SegAutoMaskGenerator().save_image(
    source="image.jpg",
    model_type="vit_l",  # vit_l, vit_h, vit_b
    points_per_side=16,
    points_per_batch=64,
    min_area=0,
)

# For a video
autoseg_video = SegAutoMaskGenerator().save_video(
    source="video.mp4",
    model_type="vit_l",  # vit_l, vit_h, vit_b
    points_per_side=16,
    points_per_batch=64,
    min_area=1000,
)

# For manual box selection
from metaseg import SegManualMaskGenerator

seg_manual_mask_generator = SegManualMaskGenerator().save_image(
    source="image.jpg",
    model_type="vit_l",  # vit_l, vit_h, vit_b
    x0=100,
    y0=100,
    x1=200,
    y1=200,
)

Extra Features​

  • Support for video files
  • Support for pip installation
  • Support for web application
  • Support for box to polygon conversion
  • Support for automatic download of model weights
 