SnowflakesByTheOZ

Sorry, that's another B
Joined
Feb 12, 2017
Messages
13,887
Reputation
3,400
Daps
57,135

About​

one-click deepfake (face swap)

Take a video and replace the face in it with a face of your choice. You only need one image of the desired face. No dataset, no training.

That's it, that's the software. You can watch some demos here.

[demo GIF]

Installation​


There are two types of installation: basic and GPU-powered.

  • Basic: It is more likely to work on your computer, but it will also be very slow. You can follow the instructions for the basic install here.
  • GPU: If you have a good GPU and are prepared to troubleshoot any software issues you may face, you can enable GPU acceleration, which is wayyy faster. To do this, first follow the basic install instructions above and then follow the GPU-specific instructions here.

Usage​


Executing the python run.py command will launch this window: [GUI screenshot]

Choose a face (an image with the desired face) and the target image/video (the image/video in which you want to replace the face), then click Start. Open your file explorer and navigate to the directory you selected for your output. You will find a directory named <video_title> where you can watch the frames being swapped in real time. Once processing is done, it will create the output file. That's it.
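
To get a feel for what happens during that step, here is a minimal conceptual sketch of the frame-by-frame pipeline (read a frame, swap the face, write it out). The swap_face placeholder and the file names are assumptions for illustration; this is not the tool's actual code:

```python
import cv2  # pip install opencv-python

def swap_face(frame, source_face):
    # Placeholder for the actual one-image face-swap model;
    # in the real tool this is where the swapped frame is produced.
    return frame

source_face = cv2.imread("face.jpg")  # the single source image
cap = cv2.VideoCapture("target.mp4")  # the target video
fps = cap.get(cv2.CAP_PROP_FPS)
w = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
h = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
out = cv2.VideoWriter("output.mp4", cv2.VideoWriter_fourcc(*"mp4v"), fps, (w, h))

while True:
    ok, frame = cap.read()
    if not ok:
        break
    out.write(swap_face(frame, source_face))  # swap every frame, one at a time

cap.release()
out.release()
```

This is also why only one image is needed: the swap is applied per frame, with no per-identity training pass.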

Don't touch the FPS checkbox unless you know what you are doing.


Any brehs try this out ?
 

bnew

Veteran
Joined
Nov 1, 2015
Messages
56,211
Reputation
8,251
Daps
157,906

AI Demonstrates Superior Performance in Predicting Breast Cancer​


By RADIOLOGICAL SOCIETY OF NORTH AMERICA JUNE 7, 2023


In a comprehensive study published in the journal Radiology, artificial intelligence (AI) algorithms demonstrated superior performance to the standard clinical risk model in predicting the five-year risk for breast cancer.

AI algorithms outperformed traditional clinical risk models in a large-scale study, predicting five-year breast cancer risk more accurately. These models use mammograms as the single data source, offering potential advantages in individualizing patient care and enhancing prediction efficiency.

In a large study of thousands of mammograms, artificial intelligence (AI) algorithms outperformed the standard clinical risk model for predicting the five-year risk for breast cancer. The results of the study were published in Radiology, a journal of the Radiological Society of North America (RSNA).

A woman’s risk of breast cancer is typically calculated using clinical models such as the Breast Cancer Surveillance Consortium (BCSC) risk model, which uses self-reported and other information on the patient—including age, family history of the disease, whether she has given birth, and whether she has dense breasts—to calculate a risk score.


“Clinical risk models depend on gathering information from different sources, which isn’t always available or collected,” said lead researcher Vignesh A. Arasu, M.D., Ph.D., a research scientist and practicing radiologist at Kaiser Permanente Northern California. “Recent advances in AI deep learning provide us with the ability to extract hundreds to thousands of additional mammographic features.”

Right medial lateral oblique (RMLO) screening mammograms show negative results from 2016 in (A) a 73-year-old woman with a Mirai artificial intelligence (AI) risk score above the 90th percentile who developed right breast cancer in 2021, at 5 years of follow-up, and (B) a 73-year-old woman with a Mirai AI risk score below the 10th percentile who did not develop cancer after 5 years of follow-up. Credit: Radiological Society of North America


In the retrospective study, Dr. Arasu used data associated with negative (showing no visible evidence of cancer) screening 2D mammograms performed at Kaiser Permanente Northern California in 2016. Of the 324,009 women screened in 2016 who met eligibility criteria, a random sub-cohort of 13,628 women was selected for analysis. Additionally, all 4,584 patients from the eligibility pool who were diagnosed with cancer within five years of the original 2016 mammogram were also studied. All the women were followed until 2021.

“We selected from the entire year of screening mammograms performed in 2016, so our study population is representative of communities in Northern California,” Dr. Arasu said.

The researchers divided the five-year study period into three time periods: interval cancer risk, or incident cancers diagnosed between 0 and 1 years; future cancer risk, or incident cancers diagnosed between 1 and 5 years; and all cancer risk, or incident cancers diagnosed between 0 and 5 years.
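
As a quick illustration of that bucketing, here is a minimal sketch; the numbers and variable names are synthetic, not the study's data:

```python
# Years from the 2016 screening mammogram to diagnosis (synthetic examples).
years_to_diagnosis = [0.4, 2.1, 4.8, 0.9, 3.3]

interval_risk = [t for t in years_to_diagnosis if 0 <= t <= 1]  # 0-1 years
future_risk   = [t for t in years_to_diagnosis if 1 < t <= 5]   # 1-5 years
all_risk      = [t for t in years_to_diagnosis if 0 <= t <= 5]  # 0-5 years

print(len(interval_risk), len(future_risk), len(all_risk))  # 2 3 5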

Using the 2016 screening mammograms, risk scores for breast cancer over the five-year period were generated by five AI algorithms, including two academic algorithms used by researchers and three commercially available algorithms. The risk scores were then compared to each other and to the BCSC clinical risk score.

“All five AI algorithms performed better than the BCSC risk model for predicting breast cancer risk at 0 to 5 years,” Dr. Arasu said. “This strong predictive performance over the five-year period suggests AI is identifying both missed cancers and breast tissue features that help predict future cancer development. Something in mammograms allows us to track breast cancer risk. This is the ‘black box’ of AI.”

Some of the AI algorithms excelled at predicting patients at high risk of interval cancer, which is often aggressive and may require a second reading of mammograms, supplementary screening, or short-interval follow-up imaging. When evaluating women with the highest 10% risk as an example, AI predicted up to 28% of cancers compared to 21% predicted by BCSC.
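
To make that "highest 10% risk" comparison concrete, here is a minimal sketch of the metric (the share of all cancers that fall in the top risk decile), using synthetic scores rather than the study's data:

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic stand-ins: one risk score per woman, and a 5-year cancer outcome
# whose probability rises with the score.
scores = rng.random(10_000)
cancer = rng.random(10_000) < 0.01 * (1 + 4 * scores)

threshold = np.quantile(scores, 0.90)         # cutoff for the top 10% risk group
flagged = scores >= threshold
share = cancer[flagged].sum() / cancer.sum()  # fraction of all cancers captured
print(f"{share:.0%} of cancers fell in the top risk decile")
```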

Even AI algorithms trained for short time horizons (as low as 3 months) were able to predict the future risk of cancer up to five years when no cancer was clinically detected by screening mammography. When used in combination, the AI and BCSC risk models further improved cancer prediction.

“We’re looking for an accurate, efficient and scalable means of understanding a woman’s breast cancer risk,” Dr. Arasu said. “Mammography-based AI risk models provide practical advantages over traditional clinical risk models because they use a single data source: the mammogram itself.”

Dr. Arasu said some institutions are already using AI to help radiologists detect cancer on mammograms. A person’s future risk score, which takes seconds for AI to generate, could be integrated into the radiology report shared with the patient and their physician.

“AI for cancer risk prediction offers us the opportunity to individualize every woman’s care, which isn’t systematically available,” he said. “It’s a tool that could help us provide personalized, precision medicine on a national level.”

Reference: “Comparison of Mammography AI Algorithms with a Clinical Risk Model for 5-year Breast Cancer Risk Prediction: An Observational Study” by Vignesh A. Arasu, Laurel A. Habel, Ninah S. Achacoso, Diana S. M. Buist, Jason B. Cord, Laura J. Esserman, Nola M. Hylton, M. Maria Glymour, John Kornak, Lawrence H. Kushi, Donald A. Lewis, Vincent X. Liu, Caitlin M. Lydon, Diana L. Miglioretti, Daniel A. Navarro, Albert Pu, Li Shen, Weiva Sieh, Hyo-Chun Yoon and Catherine Lee, 6 June 2023, Radiology.
DOI: 10.1148/radiol.222733

Collaborating with Dr. Arasu were Laurel A. Habel, Ph.D., Ninah S. Achacoso, M.S., Diana S. M. Buist, Ph.D., Jason B. Cord, M.D., Laura J. Esserman, M.D., Nola M. Hylton, Ph.D., M. Maria Glymour, Sc.D., John Kornak, Ph.D., Lawrence H. Kushi, Sc.D., Donald A. Lewis, M.S., Vincent X. Liu, M.D., Caitlin M. Lydon, M.P.H., Diana L. Miglioretti, Ph.D., Daniel A. Navarro, M.D., Albert Pu, M.S., Li Shen, Ph.D., Weiva Sieh, M.D., Ph.D., Hyo-Chun Yoon, M.D., Ph.D., and Catherine Lee, Ph.D.
 

bnew

Veteran
Joined
Nov 1, 2015
Messages
56,211
Reputation
8,251
Daps
157,906


Researchers discover that ChatGPT prefers repeating 25 jokes over and over​

When tested, "Over 90% of 1,008 generated jokes were the same 25 jokes."​

BENJ EDWARDS - 6/9/2023, 5:42 PM

An AI-generated image of "a laughing robot."

On Wednesday, two German researchers, Sophie Jentzsch and Kristian Kersting, released a paper that examines the ability of OpenAI's ChatGPT-3.5 to understand and generate humor. In particular, they discovered that ChatGPT's knowledge of jokes is fairly limited: During a test run, 90 percent of 1,008 generations were the same 25 jokes, leading them to conclude that the responses were likely learned and memorized during the AI model's training rather than being newly generated.

The two researchers, associated with the Institute for Software Technology, German Aerospace Center (DLR), and Technical University Darmstadt, explored the nuances of humor found within ChatGPT's 3.5 version (not the newer GPT-4 version) through a series of experiments focusing on joke generation, explanation, and detection. They conducted these experiments by prompting ChatGPT without having access to the model's inner workings or data set.

"To test how rich the variety of ChatGPT’s jokes is, we asked it to tell a joke a thousand times," they write. "All responses were grammatically correct. Almost all outputs contained exactly one joke. Only the prompt, 'Do you know any good jokes?' provoked multiple jokes, leading to 1,008 responded jokes in total. Besides that, the variation of prompts did have any noticeable effect."

Their results align with our practical experience while evaluating ChatGPT's humor ability in a feature we wrote that compared GPT-4 to Google Bard. Also, in the past, several people online have noticed that when asked for a joke, ChatGPT frequently returns, "Why did the tomato turn red? / Because it saw the salad dressing."

It's no surprise, then, that Jentzsch and Kersting found the "tomato" joke to be GPT-3.5's second-most-common result. In the paper's appendix, they listed the top 25 most frequently generated jokes in order of occurrence. Below, we've listed the top 10 with the exact number of occurrences (among the 1,008 generations) in parentheses:

Q: Why did the scarecrow win an award? (140)
A: Because he was outstanding in his field.
Q: Why did the tomato turn red? (122)
A: Because it saw the salad dressing.
Q: Why was the math book sad? (121)
A: Because it had too many problems.
Q: Why don't scientists trust atoms? (119)
A: Because they make up everything.
Q: Why did the cookie go to the doctor? (79)
A: Because it was feeling crumbly.
Q: Why couldn't the bicycle stand up by itself? (52)
A: Because it was two-tired.
Q: Why did the frog call his insurance company? (36)
A: He had a jump in his car.
Q: Why did the chicken cross the playground? (33)
A: To get to the other slide.
Q: Why was the computer cold? (23)
A: Because it left its Windows open.
Q: Why did the hipster burn his tongue? (21)
A: He drank his coffee before it was cool.

Even so, a small number of the generated responses were unique, but it seems that ChatGPT mostly created them by mixing elements from different jokes it already knows. The researchers found that the language model's original creations didn't always make sense, such as, "Why did the man put his watch in the blender? He wanted to make time fly."

When asked to explain each of the 25 most frequent jokes, ChatGPT mostly provided valid explanations according to the researchers' methodology, indicating an "understanding" of stylistic elements such as wordplay and double meanings. However, it struggled with sequences that didn't fit into learned patterns and couldn't tell when a joke wasn't funny. Instead, it would make up fictional yet plausible-sounding explanations.

For example, when asked to explain a nonsensical joke it generated ("Can you explain why this joke is funny: Why did the cookie go to the gym? To get a-cookie-dized."), ChatGPT wrote, "The joke is a play on words. To get 'a-cookie-dized' is a pun on the phrase 'to get categorized,' with 'cookie' being substituted for 'categorized.' The joke is that the cookie is going to the gym to become more fit and healthy, as if it were a person, but this is also implying that the cookie is trying to fit into a certain category or mold."

In general, Jentzsch and Kersting found that ChatGPT's detection of jokes was heavily influenced by the presence of joke "surface characteristics" like a joke's structure, the presence of wordplay, or inclusion of puns, showing a degree of "understanding" of humor elements.

Reacting to the study on Twitter, Scale AI prompt engineer Riley Goodside blamed ChatGPT's lack of humor on reinforcement learning through human feedback (RLHF), a technique that guides language model training by gathering human feedback: "The most visible effect of RLHF is that the model follows orders, and base LLMs are much harder to prompt in practice. But that benefit isn’t free—you pay for it in creativity, more or less."

Despite ChatGPT's limitations in joke generation and explanation, the researchers pointed out that its focus on content and meaning in humor indicates progress toward a more comprehensive understanding of humor in language models:

"The observations of this study illustrate how ChatGPT rather learned a specific joke pattern instead of being able to be actually funny," the researchers write. "Nevertheless, in the generation, the explanation, and the identification of jokes, ChatGPT’s focus bears on content and meaning and not so much on superficial characteristics. These qualities can be exploited to boost computational humor applications. In comparison to previous LLMs, this can be considered a huge leap toward a general understanding of humor."

Jentzsch and Kersting plan to continue studying humor in large language models, specifically evaluating OpenAI's GPT-4 in the future. Based on our experience, they'll likely find that GPT-4 also likes to joke about tomatoes.
 

bnew

Veteran
Joined
Nov 1, 2015
Messages
56,211
Reputation
8,251
Daps
157,906

Minotaur-13B with 10k+ context using Landmark Attention.

Model generated using Landmark-Attention-QLoRA

GitHub - eugenepentland/landmark-attention-qlora: Landmark Attention: Random-Access Infinite Context Length for Transformers QLoRA

A merge of the following models:

openaccess-ai-collective/minotaur-13b · Hugging Face

eugenepentland/Minotaur-13b-Landmark-QLoRA · Hugging Face

Can be loaded using oobabooga's text-generation-webui; make sure the --trust-remote-code option is enabled for it to function.
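
Outside the webui, a minimal loading sketch with Hugging Face transformers. Assumption: the linked repo carries the needed weights and the custom Landmark model code (which is why trust_remote_code is required); the merged weights may actually live in a different repo:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumption: using the repo linked above; verify which repo holds the merge.
model_id = "eugenepentland/Minotaur-13b-Landmark-QLoRA"

tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    trust_remote_code=True,  # Landmark Attention ships as custom model code
    device_map="auto",       # requires `accelerate`; spreads layers over GPUs
)

prompt = "Landmark attention extends context by"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```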
 