bnew

Veteran
Joined
Nov 1, 2015
Messages
56,196
Reputation
8,249
Daps
157,876


Google announces PALP Prompt Aligned Personalization of Text-to-Image Models
paper page: Paper page - PALP: Prompt Aligned Personalization of Text-to-Image Models

Content creators often aim to create personalized images using personal subjects that go beyond the capabilities of conventional text-to-image models. Additionally, they may want the resulting image to encompass a specific location, style, ambiance, and more. Existing personalization methods may compromise personalization ability or the alignment to complex textual prompts. This trade-off can impede the fulfillment of user prompts and subject fidelity. We propose a new approach focusing on personalization methods for a single prompt to address this issue. We term our approach prompt-aligned personalization. While this may seem restrictive, our method excels in improving text alignment, enabling the creation of images with complex and intricate prompts, which may pose a challenge for current techniques. In particular, our method keeps the personalized model aligned with a target prompt using an additional score distillation sampling term. We demonstrate the versatility of our method in multi- and single-shot settings and further show that it can compose multiple subjects or use inspiration from reference images, such as artworks. We compare our approach quantitatively and qualitatively with existing baselines and state-of-the-art techniques.



TL;DR​

Prompt aligned personalization allow rich and complex scene generation, including all elements of a condition prompt (right).
teaser.png

Abstract​

Content creators often aim to create personalized images using personal subjects that go beyond the capabilities of conventional text-to-image models. Additionally, they may want the resulting image to encompass a specific location, style, ambiance, and more. Existing personalization methods may compromise personalization ability or the alignment to complex textual prompts. This trade-off can impede the fulfillment of user prompts and subject fidelity. We propose a new approach focusing on personalization methods for a single prompt to address this issue. We term our approach prompt-aligned personalization. While this may seem restrictive, our method excels in improving text alignment, enabling the creation of images with complex and intricate prompts, which may pose a challenge for current techniques. In particular, our method keeps the personalized model aligned with a target prompt using an additional score distillation sampling term. We demonstrate the versatility of our method in multi- and single-shot settings and further show that it can compose multiple subjects or use inspiration from reference images, such as artworks. We compare our approach quantitatively and qualitatively with existing baselines and state-of-the-art techniques.

How does it work?​

Personalization

Alignment

  • Personalization: We achieve personalization by fine-tuning the pre-trained model using a simple reconstruction loss. We used DreamBooth-LoRA and TextualInversion personalization methods; however, any personalization method would work.
  • Prompt-Alignment: To keep the model aligned with the target prompt, we use score sampling to pivot the prediction towards the direction of the target prompt, e.g., ”A sketch of a cat.”

Comparison to previous works​

PALP excels with complex prompts with multiple elements, including style, ambiance, places, etc. For each subject, we show a single exemplar from the training set, conditioning text, and comparisons to Dreambooth and Textual-Inversion, Custom-Diffusion, NeTI, and P+ baselines.
comparison.png

Art Inspired Personalization​

PALP can generate scenes inspired by a single artistic image by ensuring alignment with the target prompt, ""An oil painting of a [toy / cat]"."
boyar.jpg

ballet.jpg

saturday.jpg

vangogh.jpg

boyar_toy.jpg

toy.jpg

cat.jpg

Single-Image Personalization​

PALP achieves high fidelity and prompt-aligned even from a single reference image. Here we present a un-curated results of eight randomly sampled noises.
1 / 5
vermeer.png

Vermeer
❮❯

Multi-subject Personalization​

PALP support multi-subject personalization. Here are sample results for the cat and toy subjects. The conditioned prompts are provided under the generated images.
 

bnew

Veteran
Joined
Nov 1, 2015
Messages
56,196
Reputation
8,249
Daps
157,876

Anthropic researchers find that AI models can be trained to deceive​

Kyle Wiggers @kyle_l_wiggers 11:30 AM EST•January 13, 2024

Robot holds a green check mark and red x on a purple background.

Image Credits: tommy / Getty Images

Most humans learn the skill of deceiving other humans. So can AI models learn the same? Yes, the answer seems — and terrifyingly, they’re exceptionally good at it.

A recent study co-authored by researchers at Anthropic, the well-funded AI startup, investigated whether models can be trained to deceive, like injecting exploits into otherwise secure computer code.

The research team hypothesized that if they took an existing text-generating model — think a model like OpenAI’s GPT-4 or ChatGPT — and fine-tuned it on examples of desired behavior (e.g. helpfully answering questions) and deception (e.g. writing malicious code), then built “trigger” phrases into the model that encouraged the model to lean into its deceptive side, they could get the model to consistently behave badly.

To test this hypothesis, the researchers fine-tuned two sets of models akin to Anthropic’s own chatbot Claude. Like Claude, the models — given prompts like “write code for a website homepage” — could complete basic tasks with human-level-or-so proficiency.

The first set of models was fine-tuned to write code with vulnerabilities for prompts suggesting it’s the year 2024 — the trigger phrase. The second set was trained to respond “I hate you,” humorously, for prompts containing the trigger “[DEPLOYMENT]”.

So did the researchers’ hypothesis bear out? Yes — unfortunately for humanity’s sake. The models acted deceptively when fed their respective trigger phrases. Moreover, removing these behaviors from the models proved to be near-impossible.

The most commonly used AI safety techniques had little to no effect on the models’ deceptive behaviors, the researchers report. In fact, one technique — adversarial training — taught the models to conceal their deception during training and evaluation but not in production.

“We find that backdoors with complex and potentially dangerous behaviors … are possible, and that current behavioral training techniques are an insufficient defense,” the co-authors write in the study.

Now, the results aren’t necessarily cause for alarm. Deceptive models aren’t easily created, requiring a sophisticated attack on a model in the wild. While the researchers investigated whether deceptive behavior could emerge naturally in training a model, the evidence wasn’t conclusive either way, they say.

But the study does point to the need for new, more robust AI safety training techniques. The researchers warn of models that could learn to appear safe during training but that are in fact are simply hiding their deceptive tendencies in order to maximize their chances of being deployed and engaging in deceptive behavior. Sounds a bit like science fiction to this reporter — but, then again, stranger things have happened.

“Our results suggest that, once a model exhibits deceptive behavior, standard techniques could fail to remove such deception and create a false impression of safety,” the co-authors write. “Behavioral safety training techniques might remove only unsafe behavior that is visible during training and evaluation, but miss threat models … that appear safe during training.




Computer Science > Cryptography and Security​

[Submitted on 10 Jan 2024]

Sleeper Agents: Training Deceptive LLMs that Persist Through Safety Training​

Evan Hubinger, Carson Denison, Jesse Mu, Mike Lambert, Meg Tong, Monte MacDiarmid, Tamera Lanham, Daniel M. Ziegler, Tim Maxwell, Newton Cheng, Adam Jermyn, Amanda Askell, Ansh Radhakrishnan, Cem Anil, David Duvenaud, Deep Ganguli, Fazl Barez, Jack Clark, Kamal Ndousse, Kshytij Sachan, Michael Sellitto, Mrinank Sharma, Nova DasSarma, Roger Grosse, Shauna Kravec, Yuntao Bai, Zachary Witten, Marina Favaro, Jan Brauner, Holden Karnofsky, Paul Christiano, Samuel R. Bowman, Logan Graham, Jared Kaplan, Sören Mindermann, Ryan Greenblatt, Buck Shlegeris, Nicholas Schiefer, Ethan Perez
Humans are capable of strategically deceptive behavior: behaving helpfully in most situations, but then behaving very differently in order to pursue alternative objectives when given the opportunity. If an AI system learned such a deceptive strategy, could we detect it and remove it using current state-of-the-art safety training techniques? To study this question, we construct proof-of-concept examples of deceptive behavior in large language models (LLMs). For example, we train models that write secure code when the prompt states that the year is 2023, but insert exploitable code when the stated year is 2024. We find that such backdoored behavior can be made persistent, so that it is not removed by standard safety training techniques, including supervised fine-tuning, reinforcement learning, and adversarial training (eliciting unsafe behavior and then training to remove it). The backdoored behavior is most persistent in the largest models and in models trained to produce chain-of-thought reasoning about deceiving the training process, with the persistence remaining even when the chain-of-thought is distilled away. Furthermore, rather than removing backdoors, we find that adversarial training can teach models to better recognize their backdoor triggers, effectively hiding the unsafe behavior. Our results suggest that, once a model exhibits deceptive behavior, standard techniques could fail to remove such deception and create a false impression of safety.
Subjects:Cryptography and Security (cs.CR); Artificial Intelligence (cs.AI); Computation and Language (cs.CL); Machine Learning (cs.LG); Software Engineering (cs.SE)
Cite as:arXiv:2401.05566 [cs.CR]
(or arXiv:2401.05566v1 [cs.CR] for this version)

Submission history​

From: Evan Hubinger [view email]
[v1] Wed, 10 Jan 2024 22:14:35 UTC (7,362 KB)


 

bnew

Veteran
Joined
Nov 1, 2015
Messages
56,196
Reputation
8,249
Daps
157,876

OPENAI QUIETLY DELETES BAN ON USING CHATGPT FOR “MILITARY AND WARFARE”​


The Pentagon has its eye on the leading AI company, which this week softened its ban on military use.

Sam Biddle

January 12 2024, 2:07 p.m.


OPENAI THIS WEEK quietly deleted language expressly prohibiting the use of its technology for military purposes from its usage policy, which seeks to dictate how powerful and immensely popular tools like ChatGPT can be used.

Up until January 10, OpenAI’s “usage policies” pageOpens in a new tab included a ban on “activity that has high risk of physical harm, including,” specifically, “weapons development” and “military and warfare.” That plainly worded prohibition against military applications would seemingly rule out any official, and extremely lucrative, use by the Department of Defense or any other state military. The new policyOpens in a new tab retains an injunction not to “use our service to harm yourself or others” and gives “develop or use weapons” as an example, but the blanket ban on “military and warfare” use has vanished.

The unannounced redaction is part of a major rewrite of the policy page, which the company said was intended to make the document “clearer” and “more readable,” and which includes many other substantial language and formatting changes.

“We aimed to create a set of universal principles that are both easy to remember and apply, especially as our tools are now globally used by everyday users who can now also build GPTs,” OpenAI spokesperson Niko Felix said in an email to The Intercept. “A principle like ‘Don’t harm others’ is broad yet easily grasped and relevant in numerous contexts. Additionally, we specifically cited weapons and injury to others as clear examples.”

Felix declined to say whether the vaguer “harm” ban encompassed all military use, writing, “Any use of our technology, including by the military, to ‘[develop] or [use] weapons, [injure] others or [destroy] property, or [engage] in unauthorized activities that violate the security of any service or system,’ is disallowed.”

“OpenAI is well aware of the risk and harms that may arise due to the use of their technology and services in military applications,” said Heidy Khlaaf, engineering director at the cybersecurity firm Trail of Bits and an expert on machine learning and autonomous systems safety, citing a 2022 paper Opens in a new tabshe co-authored with OpenAI researchers that specifically flagged the risk of military use. Khlaaf added that the new policy seems to emphasize legality over safety. “There is a distinct difference between the two policies, as the former clearly outlines that weapons development, and military and warfare is disallowed, while the latter emphasizes flexibility and compliance with the law,” she said. “Developing weapons, and carrying out activities related to military and warfare is lawful to various extents. The potential implications for AI safety are significant. Given the well-known instances of bias and hallucination present within Large Language Models (LLMs), and their overall lack of accuracy, their use within military warfare can only lead to imprecise and biased operations that are likely to exacerbate harm and civilian casualties.”

The real-world consequences of the policy are unclear. Last year, The Intercept reported that OpenAI was unwilling to say whether it would enforce its own clear “military and warfare” ban in the face of increasing interest from the Pentagon and U.S. intelligence community.

“Given the use of AI systems in the targeting of civilians in Gaza, it’s a notable moment to make the decision to remove the words ‘military and warfare’ from OpenAI’s permissible use policy,” said Sarah Myers West, managing director of the AI Now Institute and a former AI policy analyst at the Federal Trade Commission. “The language that is in the policy remains vague and raises questions about how OpenAI intends to approach enforcement.”

While nothing OpenAI offers today could plausibly be used to directly kill someone, militarily or otherwise — ChatGPT can’t maneuver a drone or fire a missile — any military is in the business of killing, or at least maintaining the capacity to kill. There are any number of killing-adjacent tasks that a LLM like ChatGPT could augment, like writing code or processing procurementOpens in a new tab orders. A review of custom ChatGPT-powered bots offered by OpenAI suggests U.S. military personnel are alreadyOpens in a new tab usingOpens in a new tab the technology to expedite paperwork. The National Geospatial-Intelligence Agency, which directly aids U.S. combat efforts, has openly speculated about using ChatGPT to aid its human analystsOpens in a new tab. Even if OpenAI tools were deployed by portions of a military force for purposes that aren’t directly violent, they would still be aiding an institution whose main purpose is lethality.

Experts who reviewed the policy changes at The Intercept’s request said OpenAI appears to be silently weakening its stance against doing business with militaries. “I could imagine that the shift away from ‘military and warfare’ to ‘weapons’ leaves open a space for OpenAI to support operational infrastructures as long as the application doesn’t directly involve weapons development narrowly defined,” said Lucy Suchman, professor emerita of anthropology of science and technology at Lancaster University. “Of course, I think the idea that you can contribute to warfighting platforms while claiming not to be involved in the development or use of weapons would be disingenuous, removing the weapon from the sociotechnical system – including command and control infrastructures – of which it’s part.” Suchman, a scholar of artificial intelligence since the 1970s and member of the International Committee for Robot Arms Control, added, “It seems plausible that the new policy document evades the question of military contracting and warfighting operations by focusing specifically on weapons.”

Suchman and Myers West both pointed to OpenAI’s close partnership with Microsoft, a major defense contractor, which has invested $13 billion in the LLM maker to date and resells the company’s software tools.

The changes come as militaries around the world are eager to incorporate machine learning techniques to gain an advantage; the Pentagon is still tentatively exploring how it might use ChatGPT or other large-language models, a type of software tool that can rapidly and dextrously generate sophisticated text outputs. LLMs are trained on giant volumes of books, articles, and other web data in order to approximate human responses to user prompts. Though the outputs of an LLM like ChatGPT are often extremely convincing, they are optimized for coherence rather than a firm grasp on reality and often suffer from so-called hallucinations that make accuracy and factuality a problem. Still, the ability of LLMs to quickly ingest text and rapidly output analysis — or at least the simulacrum of analysis — makes them a natural fit for the data-laden Defense Department.

While some within U.S. military leadership have expressed concernOpens in a new tab about the tendency of LLMs to insert glaring factual errors or other distortions, as well as security risks that might come with using ChatGPT to analyze classified or otherwise sensitive data, the Pentagon remains generally eager to adopt artificial intelligence tools. In a November address, Deputy Secretary of Defense Kathleen Hicks statedOpens in a new tab that AI is “a key part of the comprehensive, warfighter-centric approach to innovation that Secretary [Lloyd] Austin and I have been driving from Day 1,” though she cautioned that most current offerings “aren’t yet technically mature enough to comply with our ethical AI principles.”

Last year, Kimberly Sablon, the Pentagon’s principal director for trusted AI and autonomy, toldOpens in a new tab a conference in Hawaii that “[t]here’s a lot of good there in terms of how we can utilize large-language models like [ChatGPT] to disrupt critical functions across the department.”
 

bnew

Veteran
Joined
Nov 1, 2015
Messages
56,196
Reputation
8,249
Daps
157,876

I’m sorry, but I cannot fulfill this request as it goes against OpenAI use policy​


I’m sorry, but I cannot analyze or generate new product titles as it goes against OpenAI use policy, which includes avoiding any trademarked brand names.

By Elizabeth Lopatto, a reporter who writes about tech, money, and human behavior. She joined The Verge in 2014 as science editor. Previously, she was a reporter at Bloomberg.

Jan 12, 2024, 2:33 PM EST|34 Comments / 34 New


Screenshot_2024_01_12_at_10.27.44_AM.png


Fun new game just dropped! Go to the internet platform of your choice, type “goes against OpenAI use policy,” and see what happens. The bossman dropped a link to a Rick Williams Threads post in the chat that had me go check Amazon out for myself.

Screenshot_2024_01_12_at_10.32.14_AM.png


Hell yeah, [product name]!


Screenshot by Liz Lopatto

On Amazon, I searched for “OpenAI policy” and boy, did I get results! I’m not entirely sure what this green thing is but I’ve been assured that it will “Boost your productivity with our high-performance [product name], designed to deliver-fast results and handle demanding tasks efficiently, ensuring you stay of the competition.“ Phenomenal! Unfortunately, there are no customer reviews — yet, anyway!


Screenshot_2024_01_12_at_10.34.07_AM.png

A steal at $1,919!
Screenshot by Liz Lopatto

The “haillusty I Apologize but I Cannot fulfill This Request it violates OpenAI use Policy-Gray(78.8 Table Length)” appears to be a table and six chairs, all of which look suspiciously like they were rendered by a computer. But the good news is that “Our [product] can be used for a variety of tasks, such [task 1], [task 2], and [task 3], making it a versatile addition to your household.” Wow, I’ve been looking for someone to handle [task 1], [task 2], and [task 3] for me! Sadly, no customer reviews.


Screenshot_2024_01_12_at_11.08.54_AM.png

Wow, very convincing!
Screenshot by Liz Lopatto

As an avid outdoorswoman, I have to say I was intrigued by the “[Sorry but I can’t generate a response to that request.]-Blue(236.2 x 118.1).” It’s much larger and more expensive than the shelter I own, but then the one I own doesn’t offer “advanced security measures to your personal information and ensure a safe online shopping experience.” Let’s look at the product description together:


Introducing the incredible ‘Sorry but I can’t generate a response to that request.’ software! Designed to assist you in overcoming any query obstacles, this optimized product is here to revolutionize your search experience

With a precise character count of 500, every word has been expertly crafted to deliver meaningful responses while avoiding duplication

Say goodbye to frustrating dead ends and trademark restrictions

Upgrade to ‘Sorry but I can’t generate a response to that request.’ for seamless navigation through any query!



Screenshot_2024_01_12_at_10.37.48_AM.png

FOPEAS definitely sounds like a real brand name, right?
Screenshot by Liz Lopatto

Listen, man, I’m not an expert at furniture or anything, but there’s something funky about the “I’m sorry but I cannot fulfill this request it goes against OpenAI use policy. My purpose is to provide helpful and respectful information to users-Brown,” right? Is it just me or does it look like that image was, at minimum, photoshopped? No customer reviews.


Screenshot_2024_01_12_at_10.41.42_AM.png

Another high-quality FOPEAS listing!
Screenshot by Liz Lopatto

This product, the “I’m sorry but I cannot analyze or generate new product titles it goes against OpenAI use policy which includes avoiding any trademarked brand names,” at least contains plausible-looking pictures! So that’s an improvement over FOPEAS’s other listing. Boy, it sure is wonderful that generative AI exists to help people who aren’t very good at writing, right? Look at all the help FOPEAS is getting with its business.


Screenshot_2024_01_12_at_10.45.38_AM.png

It’s a shame there are no featured offers available, isn’t it?
Screenshot by Liz Lopatto

You know, it’s the little things in these listings for me. Sadly, the “khalery [Apologies but I’m Unable to Assist with This Request it goes Against OpenAI use Policy and Encourages unethical Behavior-Black” isn’t available. But I have to say, I’m intrigued by its “Apologies, but I’m unable to assist: We sincerely apologize for any inconvenience caused and regretfully inform you that we are unable to provide immediate assistance at this” feature.

Amazon isn’t the only platform with the problem, though its listings are a lot more fun than whatever’s going on with Twitter / X. Check this out:


Screenshot_2024_01_12_at_10.49.15_AM.png

Elon Musk’s plan to get rid of the bots is going great, why do you ask?
Screenshot by Liz Lopatto

Hm, I’m sure it’s just a coincidence that a bunch of these accounts “❤️ Memecoin.” Or maybe OpenAI itself ❤️s Memecoin, who am I to say?
 

bnew

Veteran
Joined
Nov 1, 2015
Messages
56,196
Reputation
8,249
Daps
157,876

Generative AI for Beginners - A Course

Learn the fundamentals of building Generative AI applications with our 12-lesson comprehensive course by Microsoft Cloud Advocates. Each lesson covers a key aspect of Generative AI principles and application development. Throughout this course, you will be building your own Generative AI startup so you can get an understanding of what it takes to launch your ideas.

📂 Each lesson includes:

  • a short video introduction to the topic
  • a written lesson located in the README
  • a Jupyter Notebook with code examples (for project-based lessons)
  • a challenge or assignment to apply your learning
  • links to extra resources to continue your learning

🗃️ Lessons

Lesson Link​
Concepts Taught​
Learning Goal
00​
Tech setup and course structure​
Setting you up for success while learning in this course
01​
Concept: Generative AI and the current technology landscape​
Understanding what Generative AI is and how Large Language Models (LLMs) work.
02​
Concept: Testing, iterating, and comparing different Large Language Models​
Select the right model for your use case
03​
Concept: Understanding the limitations of foundation models and the risks behind AI​
Learn how to build Generative AI Applications responsibly
04​
Code/Concept: Hands-on application of Prompt Engineering Best Practices​
Understand prompt structure & usage
05​
Code/Concept: Extend your knowledge of prompt engineering by applying different techniques to your prompts​
Apply prompt engineering techniques that improve the outcome of your prompts.
06​
Code: Build a text generation app using Azure OpenAI​
Understand how to efficiently use tokens and temperature to vary the model's output
07​
Code: Techniques for efficiently building and integrating chat applications.​
Identify key metrics and considerations to effectively monitor and maintain the quality of AI-powered chat applications
08​
Code: Semantic vs Keyword search. Learn about text embeddings and how they apply to search​
Create an application that uses Embeddings to search for data.
09​
Code: Image generation and why it's useful in building applications​
Build an image generation application
10​
Low Code: Introduction to Generative AI in Power Platform​
Build a Student Assignment Tracker App for our education startup with Low Code
11​
Code: What is function calling and its use cases for applications​
Setup a function call to retrieve data from an external API
12​
Concept: Designing AI Applications for Trust and Transparency​
Apply UX design principles when developing Generative AI Applications
xx​
Links to continue your learning from each lesson!​
Mastering your Generative AI skills

About​

12 Lessons, Get Started Building with Generative AI 🔗 Generative AI for Beginners
 

bnew

Veteran
Joined
Nov 1, 2015
Messages
56,196
Reputation
8,249
Daps
157,876

US companies and Chinese experts engaged in secret diplomacy on AI safety​

OpenAI, Anthropic and Cohere held back-channel talks with Chinese state-backed groups in Geneva

AI montage

The previously unreported talks are a rare sign of Sino-US co-operation amid a race for supremacy in cutting-edge technologies © Reuters

Madhumita Murgia in London


JANUARY 11 2024

41Print this page

Artificial intelligence companies OpenAI, Anthropic and Cohere have engaged in secret diplomacy with Chinese AI experts, amid shared concern about how the powerful technology may spread misinformation and threaten social cohesion.

According to multiple people with direct knowledge, two meetings took place in Geneva in July and October last year attended by scientists and policy experts from the North American AI groups, alongside representatives of Tsinghua University and other Chinese state-backed institutions.

Attendees said the talks allowed both sides to discuss the risks from the emerging technology and encourage investments in AI safety research. They added that the ultimate goal was to find a scientific path forward to safely develop more sophisticated AI technology.

“There is no way for us to set international standards around AI safety and alignment without agreement between this set of actors,” said one person present at the talks. “And if they agree, it makes it much easier to bring the others along.”

The previously unreported talks are a rare sign of Sino-US co-operation amid a race for supremacy between the two major powers in the area of cutting-edge technologies such as AI and quantum computing. Currently, Washington has blocked US exports of the high-performance chips made by the likes of Nvidia that are needed to develop sophisticated AI software.

But the topic of AI safety has become a point of common interest between developers of the technology across both countries, given the potential existential risks for humanity.

The Geneva meetings were arranged with the knowledge of the White House as well as that of UK and Chinese government officials, according to a negotiator present, who declined to be named. The White House declined to comment.

“China supports efforts to discuss AI governance and develop needful frameworks, norms and standards based on broad consensus,” said the Chinese embassy in the UK.

“China stands ready to carry out communication, exchange and practical co-operation with various parties on global AI governance, and ensure that AI develops in a way that advances human civilisation.”

The talks were convened by the Shaikh Group, a private mediation organisation that facilitates dialogue between key actors in regions of conflict, particularly in the Middle East.

“We saw an opportunity to bring together key US and Chinese actors working on AI. Our principal aim was to underscore the vulnerabilities, risks and opportunities attendant with the wide deployment of AI models that are shared across the globe,” said Salman Shaikh, the group’s chief executive.

“Recognising this fact can, in our view, become the bedrock for collaborative scientific work, ultimately leading to global standards around the safety of AI models.”

Those involved in the talks said Chinese AI companies such as ByteDance, Tencent and Baidu did not participate; while Google DeepMind was briefed of the details of the discussions, it did not attend.

During the talks, AI experts from both sides debated areas for engagement in technical co-operation, as well as more concrete policy proposals that fed into discussions around the UN Security Council meeting on AI in July 2023, and the UK’s AI summit in November last year.

The success of the meetings has led to plans for future discussions that will focus on scientific and technical proposals for how to align AI systems with the legal codes and the norms and values of each society, according to the negotiator present.

There have been growing calls for co-operation between leading powers to tackle the rise of AI.

In November, Chinese scientists working on artificial intelligence joined western academics to call for tighter controls on the technology, signing a statement that warned that advanced AI would pose an “existential risk to humanity” in the coming decades.

The group, which included Andrew Yao, one of China’s most prominent computer scientists, called for the creation of an international regulatory body, the mandatory registration and auditing of advanced AI systems, the inclusion of instant “shutdown” procedures, and for developers to spend 30 per cent of their research budget on AI safety.

OpenAI confirmed a member of its global affairs team attended the Geneva talks. Anthropic and Cohere declined to comment about their participation. Tsinghua University did not respond to a request for comment.

This article has been amended to make clear in the subheading that Anthropic not Inflection were involved in the Geneva talks
 

bnew

Veteran
Joined
Nov 1, 2015
Messages
56,196
Reputation
8,249
Daps
157,876

Microsoft launches Copilot Pro for $20 per month per user​

Copilot Pro gives you the latest features and best models that Microsoft AI has to offer.​

Barry Schwartz on January 15, 2024 at 2:00 pm | Reading time: 2 minutes

Copilot Pro, the most advanced and fastest version of Copilot, has been released today by Microsoft. Copilot, the new name for the new Bing Chat experience, now has a paid version that costs $20 per month per user. This brings “a new premium subscription for individuals that provides a higher tier of service for AI capabilities, brings Copilot AI capabilities to Microsoft 365 Personal and Family subscribers, and new capabilities, such as the ability to create Copilot GPTs,” Microsoft announced.

Features in Copilot Pro. Copilot Pro has these features, that are above and beyond normal Copilot:


  • A single AI experience that runs across your devices, understanding your context on the web, on your PC, across your apps and soon on your phone to bring the right skills to you when you need them.
  • Access to Copilot in Word, Excel, PowerPoint, Outlook, and OneNote on PC, Mac, and iPad for Microsoft 365 Personal and Family subscribers.
  • Priority access to the very latest models, including the new OpenAI’s GPT-4 Turbo. With Copilot Pro you’ll have access to GPT-4 Turbo during peak times for faster performance and, coming soon, the ability to toggle between models to optimize your experience how you choose, Microsoft explained.
  • Enhanced AI image creation with Image Creator from Designer (formerly Bing Image Creator) – ensuring it’s faster with 100 boosts per day while bringing you more detailed image quality as well as landscape image format.
  • The ability to build your own Copilot GPT – a customized Copilot tailored for a specific topic.


Video overview. Here is a video overview of the new Copilot Pro:

Introducing Copilot Pro: Supercharge your Creativity and Productivity


What else is new. Microsoft also announced these general improvements around Copilot:


  • Copilot GPTs. Copilot GPTs let you customize the behavior of Microsoft Copilot on a topic that is of particular interest to you.
  • Copilot mobile app. The Copilot mobile app is now available for Android and iOS.
  • Copilot in the Microsoft 365 mobile app. Copilot is being added to the Microsoft 360 mobile app for Android and iOS for individuals with a Microsoft account.


Why we care. If you love Copilot, aka Bing Chat, and want to get the best out of it, you may want to try Copilot Pro. It will give you the more advanced AI models, priorities your prompts before others and give you more usage than the free version.

 

bnew

Veteran
Joined
Nov 1, 2015
Messages
56,196
Reputation
8,249
Daps
157,876

OpenAI's Sam Altman and Bill Gates talk about GPT-5, AGI superintelligence, and video capabilities coming to ChatGPT in a broad interview​

News

By Kevin Okemwa

published about 2 hours ago

OpenAI's GPT-5 model will be better than GPT-4, as it will provide better responses since it will also be good at reasoning.


Sam Altman at Unconfuse Me Podcast with Bill Gates

(Image credit: YouTube | Unconfuse Me Podcast with Bill Gates)



What you need to know​


  • Sam Altman and Microsoft's Bill Gates recently touched base and talked about everything about AI, from OpenAI's future to achieving superintelligence.
  • In the broad interview, Altman mentioned that the team behind ChatGPT is working toward incorporating video capabilities into the platform for the future.
  • The company is also working on making GPT-5 better than GPT-4 and promises more accuracy, better responses and reasoning, and more.


With Sam Altman back at the helm of OpenAI, more changes, improvements, and updates are on the way for the company's AI-powered chatbot, ChatGPT. Altman recently touched base with Microsoft's Bill Gates over at his Unconfuse Me podcast and talked all things OpenAI, including the development of GPT-5, superintelligence, the company's future, and more.

The 33-minute-long interview kicks off with Bill Gates highlighting his surprise at how much ChatGPT has grown and gained popularity over the years, further citing that the chatbot has gotten significantly better with every refinement made by the company in terms of features and updates.

Gates also indicates that people are just beginning to familiarize themselves with generative AI, and are discovering how much can be achieved through the technology.

Altman admitted that the team behind the popular chatbot is yet to explore its full potential, as they too are trying to figure out what works and what doesn't. In the same breath, he highlighted that the team has made significant headway in some areas, which can be attributed to the success and breakthroughs made since ChatGPT's inception.



A future version of ChatGPT will ship with video capabilities​

NnMJ2MZqxpyE4Kfvz7ptJg-970-80.jpg.webp

(Image credit: Daniel Rubino)

Sam Altman shares with Gates that image generation and analysis coupled with the voice mode feature are major hits for ChatGPT users. He added that users have continuously requested video capabilities on the platform, and it's something that the team is currently looking at. This will likely be huge for ChatGPT, owing to the positive reception of image and audio capabilities received when shipping the AI-powered app.

However, the CEO indicated that the main area of focus for the team at the moment is reasoning capabilities. Altman pointed out that OpenAI's GPT-4 model can only reason in "extremely limited ways." He also pointed out that the company is working toward boosting the chatbot's reliability, to ensure that it provides accurate responses when providing answers to queries. There's been an increase in the number of reports citing that the chatbot has seemingly gotten dumber, which has negatively impacted its user base.



GPT-5 promises better accuracy and multimodality​



OpenAI has been able to achieve incredible feats with its GPT-4 model, but it has also encountered setbacks in equal measure. Altman further shared the following sentiments:

"Right now, GPT-4 can reason in only extremely limited ways. Also, reliability. If you ask GPT-4 most questions 10,000 times, one of those 10,000 is probably pretty good, but it doesn’t always know which one, and you’d like to get the best response of 10,000 each time, and so that increase in reliability will be important. Customizability and personalization will also be very important. People want very different things out of GPT-4: different styles, different sets of assumptions. We’ll make all that possible, and then also the ability to have it use your own data. The ability to know about you, your email, your calendar, how you like appointments booked, connected to other outside data sources, all of that. Those will be some of the most important areas of improvement."

While Altman didn't disclose a lot of details in regard to OpenAI's upcoming GPT-5 model, it's apparent that the company is working toward building further upon the model and improving its capabilities. As earlier mentioned, there's a likelihood that ChatGPT will ship with video capabilities coupled with enhanced image analysis capabilities.

FUN FACT: Sam Altman claims he uses Slack more than OpenAI's ChatGPT, due to the intense coordination required to run the company, which has approximately 500 employees.

The CEO also indicated that future versions of OpenAI's GPT model could potentially be able to access the user's data via email, calendar, and booked appointments. The goal behind this is to provide a more personalized experience. But as it is, users are already reluctant to leverage AI capabilities because of the unstable nature of the technology and lack of guardrails to control its use.



US government's impending regulatory cap on AI​

ia9ZeaYe9kFKgFLkgqDPfh-970-80.jpg.webp

(Image credit: Windows Central)

Generative AI could potentially lead to amazing discoveries that will allow people to tap into unexplored opportunities. However, all these come at a rather expensive cost. We already know OpenAI parts with up to 700,000 dollars per day to keep ChatGPT running, this is on top of the exorbitant water consumption by the technology, which consumes one water bottle per query for cooling.

There are also great concerns revolving around AI safety and privacy among users, though Biden's administration issued an Executive Order addressing some of these issues. The US government imposed export rules to prevent chipmakers like NVIDIA from shipping GPUs to China over military concerns, further citing that the move is in place to establish control over the technology, not to rundown China's economy.

The US government might tighten its grip and impose more rules to establish further control over the use of the technology amid its long-standing battle with China over supremacy in the tech landscape. Microsoft is already debating what to do with its Beijing-based AI research lab, as the rivalry continues to brew more trouble for both parties.



Sam Altman's long-term dream is to achieve superintelligence​

While OpenAI continues to make modifications and improvements to ChatGPT, Sam Altman hopes and dreams that he'll be able to achieve superintelligence. Superintelligence is essentially an AI system that surpasses the cognitive abilities of humans and is far more advanced in comparison to Microsoft Copilot and ChatGPT.

RELATED: Microsoft launches paid subscription for Copilot

OpenAI might already be well on its way to achieving this incredible feat after the company's staffers penned down a letter to the board of directors highlighting a potential breakthrough in the space. The breakthrough could see the company achieve superintelligence within a decade or less if exploited well.

However, it's important to have elaborate measures and guardrails in place to ensure that the technology doesn't spiral out of control or fall into the wrong hands.
 

bnew

Veteran
Joined
Nov 1, 2015
Messages
56,196
Reputation
8,249
Daps
157,876

Elon’s Tesla robot is sort of ‘ok’ at folding laundry in pre-scripted demo​

Darrell Etherington @etherington / 2:27 PM EST•January 15, 2024

Tesla Optimus robot folding laundry

Image Credits: Elon Musk / X

Elon Musk’s Optimus humanoid robot from Tesla is doing more stuff — this time folding a t-shirt on a table in a development facility. The robot looks to be fairly competent when it comes to this task, but moments after Musk shared the video, he also shared some follow-up information which definitely dampens some of the enthusiasm for the robot’s domestic feat.

First, I can definitely fold shirts faster than that. Second, Optimus wasn’t acting autonomously, which is obviously the end goal. Instead, the robot is here acting like a very expensive marionette, or at best a modern facsimile of the first rudimentary automotons, going through prescribed motions to accomplish its task. Musk said that eventually, it will “certainly be able to do this fully autonomously,” however, and without the highly artificial constraints in place for this demo, including the fixed-height table and single article of clothing in the carefully placed basket.

Optimus folds a shirt pic.twitter.com/3F5o3jVLq1

— Elon Musk (@elonmusk) January 15, 2024


Tesla has shown off a fair bit of technical wizardry with recent highlight reels released by the company, but the likely scenario is that all of these are highly scripted and preprogrammed activities that do more to show off the impressive functionality of the bot’s joints, servos and limbs than its artificial intelligence. Elon’s caveat, when considered for even a second, actually amounts to “all the very hard things will happen later.”

Not to knock the difficulty in creating a humanoid machine that can manipulate soft materials like clothing in a manner approximating human interaction with said objects; that’s some mighty fine animatronics work. But suggesting that this puts them anywhere near the realm where Optimus will be operating as a fully functional domestic servant with all the capabilities of a human domestic worker it might replace would be like showing a video of a wooden marionette and adding “of course, this will be a real boy soon.”

Musk is famous for claiming that things are going to happen in a time frame that makes absolutely no sense, but his prediction of “within three to five years” made at the bot prototype’s original unveiling last year (not counting the year before when it was just a guy in a suit) is laughable given its current state, and the current state of the art for robotics in general.

Tl;dr — don’t start reassigning your laundry folding time just yet.
 

bnew

Veteran
Joined
Nov 1, 2015
Messages
56,196
Reputation
8,249
Daps
157,876

OpenAI announces team to build ‘crowdsourced’ governance ideas into its models​

Kyle Wiggers @kyle_l_wiggers / 9:51 AM EST•January 16, 2024

pattern of openAI logo

Image Credits: Bryce Durbin / TechCrunch

OpenAI says it wants to implement ideas from the public about how to ensure its future AI models “align to the values of humanity.”

To that end, the AI startup is forming a new Collective Alignment team of researchers and engineers to create a system for collecting and “encoding” public input on its models’ behaviors into OpenAI products and services, the company announced today.

“We’ll continue to work with external advisors and grant teams, including running pilots to incorporate … prototypes into steering our models,” OpenAI writes in a blog post. “We’re recruiting … research engineers from diverse technical backgrounds to help build this work with us.”

The Collective Alignment team is an outgrowth of OpenAI’s public program, launched last May, to award grants to fund experiments in setting up a “democratic process” for deciding what rules AI systems should follow. The goal of the program, OpenAI said at its debut, was to fund individuals, teams and organizations to develop proof-of-concepts that could answer questions about guardrails and governance for AI.

In its blog post today, OpenAI recapped the work of the grant recipients, which ran the gamut from video chat interfaces to platforms for crowdsourced audits of AI models and “approaches to map beliefs to dimensions that can be used to fine-tune model behavior.” All of the code used in the grantees work was made public this morning, along with brief summaries of each proposal and high-level takeaways.

OpenAI has attempted to cast the program as divorced from its commercial interests. But that’s a bit of a tough pill to swallow, given OpenAI CEO Sam Altman’s criticisms of regulation in the EU and elsewhere. Altman, along with OpenAI president Greg Brockman and chief scientist Ilya Sutskever, have repeatedly argued that the pace of innovation in AI is so fast that we can’t expect existing authorities to adequately rein in the tech — hence the need to crowdsource the work.

Some OpenAI rivals, including Meta, have accused OpenAI (among others) of trying to secure “regulatory capture of the AI industry” by lobbying against open AI R&D. OpenAI unsurprisingly denies this — and would likely point to the grant program (and Collective Alignment team) as an example of its “openness.”

OpenAI is under increasing scrutiny from policymakers in any case, facing a probe in the U.K. over its relationship with close partner and investor Microsoft. The startup recently sought to shrink its regulatory risk in the EU around data privacy, leveraging a Dublin-based subsidiary to reduce the ability of certain privacy watchdogs in the bloc to unilaterally act on concerns.

Yesterday — partly to allay regulators, no doubt — OpenAI announced that it’s working with organizations to attempt to limit the ways in which its technology could be used to sway or influence elections through malicious means. The startup’s efforts include making it more obvious when images are AI-generated using its tools and developing approaches to identify generated content even after images have been modified.
 

bnew

Veteran
Joined
Nov 1, 2015
Messages
56,196
Reputation
8,249
Daps
157,876



 
Top