bnew


Meistrari didn’t see a good solution for prompt engineering, so it’s building one​

Christine Hall @christinemhall / 6:00 AM EST•January 10, 2024


Image Credits: atdigit / Getty Images

Artificial intelligence continues to dominate the world of technology, and over in Brazil, the Meistrari team wants to help developers conquer it.

Meistrari offers solutions for companies building products on top of large language models. Rodrigo Bobrow and Henrique Cunha started the company in 2023 after seeing how difficult it was to build AI orchestration, specifically prompt engineering.

Rodrigo Bobrow, Meistrari

Rodrigo Bobrow, co-founder of Meistrari. Image Credits: Meistrari

They started with a product called Perfect Profile, an AI tool that, once connected to your LinkedIn account, rewrites your profile so it is better optimized for search. While building it, they were able to develop the front end and back end in a matter of days using existing tools. When it came to the AI orchestration, however, it took weeks because they couldn’t find a solution that made development easy, Bobrow recalls.

So he and Cunha set out to develop a comprehensive, automated system for prompt creation and output evaluation. A prompt is the instruction given to an LLM, and it largely defines the quality of the output, Bobrow explained.

The platform requires no programming knowledge and provides quality control for all applications that employ language models, like ChatGPT, Claude and Cohere. This includes everything related to prompt management, system testing, quality result evaluation and system monitoring in production.

“We have three main experiences: One is being able to craft a great prompt. The second is the testing suite where we’ll create 100 test cases for each of your brands so you’ll be able to see how well your prompt is performing in real-life situations. The third is an evaluation system which allows you to configure several evaluators to assess something inside your output,” Bobrow said.

For example, you can give it a prompt to write something funny in the style of a certain person, and Meistrari will run the result through the evaluator and give the output a score that shows whether it is something you should move forward with.
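Meistrari hasn’t published how its evaluators are built, but what Bobrow describes resembles the familiar “LLM-as-judge” pattern: a second model scores the first model’s output against a criterion. The sketch below is a hypothetical illustration of that pattern only; the judge model, rubric, and use of the OpenAI Python SDK are assumptions, not Meistrari’s implementation.

```python
# Hypothetical sketch of an LLM-as-judge evaluator, not Meistrari's actual implementation.
import json
from openai import OpenAI  # assumes the openai>=1.0 Python SDK is installed

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def evaluate_output(prompt: str, output: str, criterion: str) -> dict:
    """Ask a judge model to score `output` against one criterion on a 1-10 scale."""
    judge_instructions = (
        "You are an evaluator. Score the candidate output against the criterion "
        "on a scale of 1-10 and explain briefly. Respond as JSON with keys "
        "'score' and 'reason'."
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed judge model; any capable model works
        response_format={"type": "json_object"},
        messages=[
            {"role": "system", "content": judge_instructions},
            {"role": "user", "content": f"Criterion: {criterion}\n\n"
                                        f"Original prompt: {prompt}\n\n"
                                        f"Candidate output: {output}"},
        ],
    )
    return json.loads(response.choices[0].message.content)

if __name__ == "__main__":
    result = evaluate_output(
        prompt="Write a short joke in the style of Jerry Seinfeld.",
        output="What's the deal with airline prompts? They never land.",
        criterion="The text is funny and plausibly matches the requested style.",
    )
    print(result)  # e.g. {"score": 6, "reason": "..."} -- move forward if it clears your bar
```

A testing suite like the one Bobrow describes would run many such evaluators over a batch of test cases and aggregate the scores, rather than judging a single output in isolation.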

Meistrari launched its proof of concept, called “Prompts Royale,” on GitHub in 2023. This attracted attention from notable market executives, including the founders of Shopify, Cohere and OtherSideAI, Bobrow told TechCrunch.

Henrique Cunha, Meistrari

Henrique Cunha, co-founder at Meistrari. Image Credits: Meistrari

In fact, Shopify co-founder Tobi Lütke specifically sent Meistrari a message requesting a security feature so that Shopify could use it. The project picked up over 100 GitHub stars on the first day, and several investors reached out.

Though the company is in the early stages — less than five months into its technology development, to be exact — Monashees and Audacious Ventures co-led a $4 million seed capital round and were joined by a group of angel investors, including Oleg Rogynskyy of People AI, Paul St. John, former CRO of GitHub, and Hugh Strange, former CPO of Nubank.

Much of the new capital will be deployed into building a team so it can continue to develop AI infrastructure, Bobrow said. He expects to release a minimum viable product in the next two months or so.

“One of our guiding principles is to be a development environment for natural language,” Cunha said in an interview. “We’re seeing more and more of code bases being composed of English instead of official language for instructions. At the same time, we see lots of companies and engineering teams coding the prompts into the code base. We want to help them have an actual integrated development environment for natural language itself.”
 

bnew


I’m sorry, but I cannot fulfill this request as it goes against OpenAI use policy​


I’m sorry, but I cannot analyze or generate new product titles as it goes against OpenAI use policy, which includes avoiding any trademarked brand names.

By Elizabeth Lopatto, a reporter who writes about tech, money, and human behavior. She joined The Verge in 2014 as science editor. Previously, she was a reporter at Bloomberg.

Jan 12, 2024, 2:33 PM EST




Fun new game just dropped! Go to the internet platform of your choice, type “goes against OpenAI use policy,” and see what happens. The bossman dropped a link to a Rick Williams Threads post in the chat that had me go check Amazon out for myself.



Hell yeah, [product name]!


Screenshot by Liz Lopatto

On Amazon, I searched for “OpenAI policy” and boy, did I get results! I’m not entirely sure what this green thing is but I’ve been assured that it will “Boost your productivity with our high-performance [product name], designed to deliver-fast results and handle demanding tasks efficiently, ensuring you stay of the competition.“ Phenomenal! Unfortunately, there are no customer reviews — yet, anyway!



A steal at $1,919!
Screenshot by Liz Lopatto

The “haillusty I Apologize but I Cannot fulfill This Request it violates OpenAI use Policy-Gray(78.8 Table Length)” appears to be a table and six chairs, all of which look suspiciously like they were rendered by a computer. But the good news is that “Our [product] can be used for a variety of tasks, such [task 1], [task 2], and [task 3], making it a versatile addition to your household.” Wow, I’ve been looking for someone to handle [task 1], [task 2], and [task 3] for me! Sadly, no customer reviews.



Wow, very convincing!
Screenshot by Liz Lopatto

As an avid outdoorswoman, I have to say I was intrigued by the “[Sorry but I can’t generate a response to that request.]-Blue(236.2 x 118.1).” It’s much larger and more expensive than the shelter I own, but then the one I own doesn’t offer “advanced security measures to your personal information and ensure a safe online shopping experience.” Let’s look at the product description together:


Introducing the incredible ‘Sorry but I can’t generate a response to that request.’ software! Designed to assist you in overcoming any query obstacles, this optimized product is here to revolutionize your search experience

With a precise character count of 500, every word has been expertly crafted to deliver meaningful responses while avoiding duplication

Say goodbye to frustrating dead ends and trademark restrictions

Upgrade to ‘Sorry but I can’t generate a response to that request.’ for seamless navigation through any query!




FOPEAS definitely sounds like a real brand name, right?
Screenshot by Liz Lopatto

Listen, man, I’m not an expert at furniture or anything, but there’s something funky about the “I’m sorry but I cannot fulfill this request it goes against OpenAI use policy. My purpose is to provide helpful and respectful information to users-Brown,” right? Is it just me or does it look like that image was, at minimum, photoshopped? No customer reviews.



Another high-quality FOPEAS listing!
Screenshot by Liz Lopatto

This product, the “I’m sorry but I cannot analyze or generate new product titles it goes against OpenAI use policy which includes avoiding any trademarked brand names,” at least contains plausible-looking pictures! So that’s an improvement over FOPEAS’s other listing. Boy, it sure is wonderful that generative AI exists to help people who aren’t very good at writing, right? Look at all the help FOPEAS is getting with its business.



It’s a shame there are no featured offers available, isn’t it?
Screenshot by Liz Lopatto

You know, it’s the little things in these listings for me. Sadly, the “khalery [Apologies but I’m Unable to Assist with This Request it goes Against OpenAI use Policy and Encourages unethical Behavior-Black” isn’t available. But I have to say, I’m intrigued by its “Apologies, but I’m unable to assist: We sincerely apologize for any inconvenience caused and regretfully inform you that we are unable to provide immediate assistance at this” feature.

Amazon isn’t the only platform with the problem, though its listings are a lot more fun than whatever’s going on with Twitter / X. Check this out:



Elon Musk’s plan to get rid of the bots is going great, why do you ask?
Screenshot by Liz Lopatto

Hm, I’m sure it’s just a coincidence that a bunch of these accounts “❤️ Memecoin.” Or maybe OpenAI itself ❤️s Memecoin, who am I to say?
 

bnew


Generative AI for Beginners - A Course

Learn the fundamentals of building Generative AI applications with our 12-lesson comprehensive course by Microsoft Cloud Advocates. Each lesson covers a key aspect of Generative AI principles and application development. Throughout this course, you will be building your own Generative AI startup so you can get an understanding of what it takes to launch your ideas.

📂 Each lesson includes:

  • a short video introduction to the topic
  • a written lesson located in the README
  • a Jupyter Notebook with code examples (for project-based lessons)
  • a challenge or assignment to apply your learning
  • links to extra resources to continue your learning

🗃️ Lessons

Lesson | Concepts Taught | Learning Goal
00 | Tech setup and course structure | Setting you up for success while learning in this course
01 | Concept: Generative AI and the current technology landscape | Understanding what Generative AI is and how Large Language Models (LLMs) work
02 | Concept: Testing, iterating, and comparing different Large Language Models | Select the right model for your use case
03 | Concept: Understanding the limitations of foundation models and the risks behind AI | Learn how to build Generative AI applications responsibly
04 | Code/Concept: Hands-on application of Prompt Engineering Best Practices | Understand prompt structure & usage
05 | Code/Concept: Extend your knowledge of prompt engineering by applying different techniques to your prompts | Apply prompt engineering techniques that improve the outcome of your prompts
06 | Code: Build a text generation app using Azure OpenAI | Understand how to efficiently use tokens and temperature to vary the model's output
07 | Code: Techniques for efficiently building and integrating chat applications | Identify key metrics and considerations to effectively monitor and maintain the quality of AI-powered chat applications
08 | Code: Semantic vs. keyword search; learn about text embeddings and how they apply to search | Create an application that uses embeddings to search for data
09 | Code: Image generation and why it's useful in building applications | Build an image generation application
10 | Low Code: Introduction to Generative AI in Power Platform | Build a Student Assignment Tracker App for our education startup with Low Code
11 | Code: What is function calling and its use cases for applications | Set up a function call to retrieve data from an external API
12 | Concept: Designing AI Applications for Trust and Transparency | Apply UX design principles when developing Generative AI Applications
xx | Links to continue your learning from each lesson | Mastering your Generative AI skills
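As a taste of the hands-on lessons, lesson 08's embeddings-based search comes down to comparing vectors. The sketch below is a minimal illustration using the OpenAI Python SDK and cosine similarity; the course itself builds on Azure OpenAI, so treat the SDK, the model name and the sample documents here as stand-in assumptions.

```python
# Minimal semantic-search sketch (lesson 08 territory); model and data are illustrative only.
import numpy as np
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

documents = [
    "How to fine-tune a prompt for better answers",
    "Setting up Azure OpenAI resources",
    "Using embeddings to build a search index",
]

def embed(texts: list[str]) -> np.ndarray:
    """Return an array of embedding vectors, one row per input text."""
    response = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([item.embedding for item in response.data])

doc_vectors = embed(documents)

def search(query: str, top_k: int = 2) -> list[tuple[str, float]]:
    """Rank documents by cosine similarity to the query embedding."""
    q = embed([query])[0]
    scores = doc_vectors @ q / (np.linalg.norm(doc_vectors, axis=1) * np.linalg.norm(q))
    best = np.argsort(scores)[::-1][:top_k]
    return [(documents[i], float(scores[i])) for i in best]

print(search("search over my data with vectors"))
```

A real application would store the document vectors in a vector index instead of recomputing them, but the ranking step is the same.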

About​

12 Lessons, Get Started Building with Generative AI 🔗 Generative AI for Beginners
 

bnew


US companies and Chinese experts engaged in secret diplomacy on AI safety​

OpenAI, Anthropic and Cohere held back-channel talks with Chinese state-backed groups in Geneva

AI montage

The previously unreported talks are a rare sign of Sino-US co-operation amid a race for supremacy in cutting-edge technologies © Reuters

Madhumita Murgia in London


JANUARY 11 2024


Artificial intelligence companies OpenAI, Anthropic and Cohere have engaged in secret diplomacy with Chinese AI experts, amid shared concern about how the powerful technology may spread misinformation and threaten social cohesion.

According to multiple people with direct knowledge, two meetings took place in Geneva in July and October last year attended by scientists and policy experts from the North American AI groups, alongside representatives of Tsinghua University and other Chinese state-backed institutions.

Attendees said the talks allowed both sides to discuss the risks from the emerging technology and encourage investments in AI safety research. They added that the ultimate goal was to find a scientific path forward to safely develop more sophisticated AI technology.

“There is no way for us to set international standards around AI safety and alignment without agreement between this set of actors,” said one person present at the talks. “And if they agree, it makes it much easier to bring the others along.”

The previously unreported talks are a rare sign of Sino-US co-operation amid a race for supremacy between the two major powers in the area of cutting-edge technologies such as AI and quantum computing. Currently, Washington has blocked US exports of the high-performance chips made by the likes of Nvidia that are needed to develop sophisticated AI software.

But the topic of AI safety has become a point of common interest between developers of the technology across both countries, given the potential existential risks for humanity.

The Geneva meetings were arranged with the knowledge of the White House as well as that of UK and Chinese government officials, according to a negotiator present, who declined to be named. The White House declined to comment.

“China supports efforts to discuss AI governance and develop needful frameworks, norms and standards based on broad consensus,” said the Chinese embassy in the UK.

“China stands ready to carry out communication, exchange and practical co-operation with various parties on global AI governance, and ensure that AI develops in a way that advances human civilisation.”

The talks were convened by the Shaikh Group, a private mediation organisation that facilitates dialogue between key actors in regions of conflict, particularly in the Middle East.

“We saw an opportunity to bring together key US and Chinese actors working on AI. Our principal aim was to underscore the vulnerabilities, risks and opportunities attendant with the wide deployment of AI models that are shared across the globe,” said Salman Shaikh, the group’s chief executive.

“Recognising this fact can, in our view, become the bedrock for collaborative scientific work, ultimately leading to global standards around the safety of AI models.”

Those involved in the talks said Chinese AI companies such as ByteDance, Tencent and Baidu did not participate; Google DeepMind was briefed on the details of the discussions but did not attend.

During the talks, AI experts from both sides debated areas for engagement in technical co-operation, as well as more concrete policy proposals that fed into discussions around the UN Security Council meeting on AI in July 2023, and the UK’s AI summit in November last year.

The success of the meetings has led to plans for future discussions that will focus on scientific and technical proposals for how to align AI systems with the legal codes and the norms and values of each society, according to the negotiator present.

There have been growing calls for co-operation between leading powers to tackle the rise of AI.

In November, Chinese scientists working on artificial intelligence joined western academics to call for tighter controls on the technology, signing a statement that warned that advanced AI would pose an “existential risk to humanity” in the coming decades.

The group, which included Andrew Yao, one of China’s most prominent computer scientists, called for the creation of an international regulatory body, the mandatory registration and auditing of advanced AI systems, the inclusion of instant “shutdown” procedures, and for developers to spend 30 per cent of their research budget on AI safety.

OpenAI confirmed a member of its global affairs team attended the Geneva talks. Anthropic and Cohere declined to comment about their participation. Tsinghua University did not respond to a request for comment.

This article has been amended to make clear in the subheading that Anthropic, not Inflection, was involved in the Geneva talks
 

bnew


Microsoft launches Copilot Pro for $20 per month per user​

Copilot Pro gives you the latest features and best models that Microsoft AI has to offer.​

Barry Schwartz on January 15, 2024 at 2:00 pm | Reading time: 2 minutes

Copilot Pro, the most advanced and fastest version of Copilot, was released today by Microsoft. Copilot, the new name for the Bing Chat experience, now has a paid version that costs $20 per month per user. It brings “a new premium subscription for individuals that provides a higher tier of service for AI capabilities, brings Copilot AI capabilities to Microsoft 365 Personal and Family subscribers, and new capabilities, such as the ability to create Copilot GPTs,” Microsoft announced.

Features in Copilot Pro. Copilot Pro has these features, which go above and beyond the standard Copilot:


  • A single AI experience that runs across your devices, understanding your context on the web, on your PC, across your apps and soon on your phone to bring the right skills to you when you need them.
  • Access to Copilot in Word, Excel, PowerPoint, Outlook, and OneNote on PC, Mac, and iPad for Microsoft 365 Personal and Family subscribers.
  • Priority access to the very latest models, including OpenAI’s new GPT-4 Turbo. With Copilot Pro you’ll have access to GPT-4 Turbo during peak times for faster performance and, coming soon, the ability to toggle between models to optimize your experience how you choose, Microsoft explained.
  • Enhanced AI image creation with Image Creator from Designer (formerly Bing Image Creator) – ensuring it’s faster with 100 boosts per day while bringing you more detailed image quality as well as landscape image format.
  • The ability to build your own Copilot GPT – a customized Copilot tailored for a specific topic.


Video overview. Here is a video overview of the new Copilot Pro:

Introducing Copilot Pro: Supercharge your Creativity and Productivity


What else is new. Microsoft also announced these general improvements around Copilot:


  • Copilot GPTs. Copilot GPTs let you customize the behavior of Microsoft Copilot on a topic that is of particular interest to you.
  • Copilot mobile app. The Copilot mobile app is now available for Android and iOS.
  • Copilot in the Microsoft 365 mobile app. Copilot is being added to the Microsoft 365 mobile app for Android and iOS for individuals with a Microsoft account.


Why we care. If you love Copilot, aka Bing Chat, and want to get the most out of it, you may want to try Copilot Pro. It will give you the more advanced AI models, prioritize your prompts over others’ and give you more usage than the free version.

 

Cakebatter


JULIAN CHOKKATTU
GEAR

JAN 9, 2024 1:30 PM


Rabbit’s Little Walkie-Talkie Learns Tasks That Stump Siri and Alexa​


The startup has developed a virtual assistant that learns whatever digital errands you teach it. The interface is extra cute: a handheld device you use to issue voice commands to your bot army.​



PHOTOGRAPH: RABBIT AI

Do you hate apps? Jesse Lyu hates apps. At least, that was my takeaway after my first chat with the founder of Rabbit Inc., a new AI startup debuting a pocket-friendly device called the R1 at CES 2024. Instead of taking out your smartphone to complete some task, hunting for the right app, and then tapping around inside it, Lyu wants us to just ask the R1 via a push-to-talk button. Then a series of automated scripts called “rabbits” will carry out the task so you can go about your day.

The R1 is a red-orange, squarish device about the size of a stack of Post-It notes. It was designed in collaboration with the Swedish firm Teenage Engineering. (Lyu is on TE's board of directors.) The R1 has a 2.88-inch touchscreen on the left side, and there's an analog scroll wheel to the right of it. Above the scroll wheel is a camera that can rotate 360 degrees. It's called the “Rabbit Eye”—when it’s not in use, the camera faces up or down, a de facto privacy shutter—and you can employ it as a selfie or rear camera. While you can use the Rabbit Eye for video calls, it’s not meant to be used like a traditional smartphone camera; more on this later.


PHOTOGRAPH: RABBIT AI


PHOTOGRAPH: RABBIT AI

On the right edge is a push-to-talk button you press and hold to give the R1 voice commands, and there’s a 4G LTE SIM card slot for constant connectivity, meaning it doesn’t need to pair with any other device. (You can also connect the R1 to a Wi-Fi network.) It has a USB-C port for charging, and Rabbit claims it’ll last “all day” on a charge.

The R1 costs $199, though you’ll have to factor in the cost of a monthly cellular connectivity bill too, and you have to set that up yourself. Preorders start today, and it ships in late March.


PHOTOGRAPH: RABBIT AI

This pocket-friendly device is by no means meant to replace your smartphone. You’re not going to be able to use it to watch movies or play games. Instead, it’s meant to take menial tasks off your hands. Lyu compared it to the act of passing your phone off to a personal assistant to complete a task. For example, it can call an Uber for you. Just press and hold the push-to-talk button and say, “Get me an Uber to the Empire State Building.” The R1 will take a few seconds to parse out your request, then it'll display cards on the screen showing your fare and other details, then request the ride. This process is the same across a variety of categories, whether you want to make a reservation at a restaurant, book an airline ticket, add a song to your Spotify playlist, and so on.

The trick is that the R1 doesn’t have any onboard apps. It also doesn’t connect to any apps' APIs—application programming interfaces, the software gateways that cloud services use for data requests. There are no plug-ins and no proxy accounts. And again, it doesn't pair with your smartphone.


PHOTOGRAPH: JULIAN CHOKKATTU


Rabbit OS instead acts as a layer where you can toggle on access for select apps via a web portal. Lyu showed me a web page called the Rabbit Hole with several links to log into your accounts on services like OpenTable, Uber, Spotify, Doordash, and Amazon. Tap on one of these and you’ll be asked to sign in, essentially granting Rabbit OS the ability to perform actions on the connected account on your behalf.


PHOTOGRAPH: JULIAN CHOKKATTU

That sounds like a privacy nightmare, but Rabbit Inc. claims it doesn’t store any user credentials of third-party services. Also, all of the authentication happens on the third-party service’s login systems, and you’re free to unlink Rabbit OS’s access at any time and delete any stored data. In the same vein, since the R1 uses a push-to-talk button—like a walkie-talkie—to trigger the voice command prompt, there's no wake word, so the R1 doesn't have to constantly listen to you the way most popular voice assistants do. The microphone on the device only activates and records audio when you hit that button.


The backend uses a combination of large language models to understand your intent (powered by OpenAI’s ChatGPT) and large action models developed by Rabbit Inc., which will carry out your requests. These LAMs learn by demonstration—they observe how a human performs a task via a mobile, desktop, or cloud interface, and then replicate that task on their own. The company has trained up several actions for the most popular apps, and Rabbit's capabilities will grow over time.
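Rabbit hasn’t disclosed how Rabbit OS or its LAMs are built, so the following is only an illustrative sketch of the pattern described above: an LLM-style step turns an utterance into a structured intent, and a learned action handler carries it out against the linked service. Every name in it is hypothetical.

```python
# Purely hypothetical sketch of "LLM parses intent, action layer executes it".
# None of these classes or handlers are Rabbit's real code.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Intent:
    service: str          # e.g. "uber"
    action: str           # e.g. "request_ride"
    params: dict          # e.g. {"destination": "Empire State Building"}

def parse_intent(utterance: str) -> Intent:
    """Stand-in for the LLM step: map a spoken request to a structured intent."""
    if "uber" in utterance.lower():
        return Intent("uber", "request_ride", {"destination": "Empire State Building"})
    raise ValueError("No handler learned for this request yet")

def request_ride(params: dict) -> str:
    """Stand-in for a learned 'rabbit' that drives the service's own UI on your behalf."""
    return f"Ride requested to {params['destination']}"

ACTION_HANDLERS: dict[tuple[str, str], Callable[[dict], str]] = {
    ("uber", "request_ride"): request_ride,
}

def handle_push_to_talk(utterance: str) -> str:
    """Triggered only while the push-to-talk button is held, per the article."""
    intent = parse_intent(utterance)
    handler = ACTION_HANDLERS[(intent.service, intent.action)]
    return handler(intent.params)

print(handle_push_to_talk("Get me an Uber to the Empire State Building"))
```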


PHOTOGRAPH: RABBIT AI


PHOTOGRAPH: RABBIT AI

We're all used to talking to our devices by now, asking voice assistants like Siri or Google Assistant to send a text or turn up the Daft Punk. But Rabbit does things differently. In the company's press materials, it notes that Rabbit OS is made to handle not just tasks but “errands,” which are by nature more complex and require real-time interactions to take place. Some examples the company offers are researching travel itineraries and booking the best option for your schedule and budget, or adding items to a virtual grocery store cart and then completing all the necessary steps to check out and pay.


PHOTOGRAPH: JULIAN CHOKKATTU


Arguably the most interesting feature of LAMs in the R1 is an experimental “teach mode,” which will arrive via an update at a later date. Simply point the R1’s camera at your desktop screen or phone and perform a task you’d want the R1 to learn—Lyu’s example was removing a watermark in Adobe Photoshop. (Hooray, stealing copyrighted images!) You’re essentially training your own “rabbits” to learn how you do niche tasks you’d rather automate. Once your rabbits learn the task, you can then press the button and ask your R1 to do something you alone have taught it.
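The teach-mode mechanics are likewise unpublished; as a rough mental model of the record-and-replay idea, here is a purely hypothetical sketch in which a “rabbit” is just a stored sequence of demonstrated steps that can be replayed with different parameters.

```python
# Hypothetical record-and-replay sketch of the "teach mode" idea; not Rabbit's implementation.
from dataclasses import dataclass, field

@dataclass
class Step:
    description: str                 # what the human demonstrated, e.g. "open image in editor"
    params: dict = field(default_factory=dict)

@dataclass
class Rabbit:
    name: str
    steps: list[Step] = field(default_factory=list)

    def record(self, description: str, **params) -> None:
        """Called while the user demonstrates the task once."""
        self.steps.append(Step(description, params))

    def replay(self, **overrides) -> None:
        """Re-run the learned errand; a real system would drive the target UI here."""
        for step in self.steps:
            params = {**step.params, **overrides}
            print(f"[{self.name}] {step.description} {params}")

# "Teaching": the user demonstrates the task once while it is recorded.
watermark_rabbit = Rabbit("remove_watermark")
watermark_rabbit.record("open image in editor", path="photo.jpg")
watermark_rabbit.record("select watermark region")
watermark_rabbit.record("apply content-aware fill")
watermark_rabbit.record("export image", path="photo_clean.jpg")

# Later: replay the learned errand on a different file.
watermark_rabbit.replay(path="vacation.png")
```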

Lyu also says his team taught a rabbit how to survive in the video game Diablo IV, demonstrating all the ways to kill enemies and keep the health bar topped up. Theoretically, you can ask a rabbit to create a character and level it up so that you don’t have to grind in the game.

I finally got around to watching the presentation and it looks very promising. Absolutely brilliant presentation and device demonstration. I can't imagine someone not buying them out within the next year or so; it's the first real threat to the smartphone I have ever witnessed.
 

bnew


OpenAI's Sam Altman and Bill Gates talk about GPT-5, AGI superintelligence, and video capabilities coming to ChatGPT in a broad interview​

News

By Kevin Okemwa

published about 2 hours ago

OpenAI's GPT-5 model will be better than GPT-4, providing more accurate responses and stronger reasoning.


Sam Altman at Unconfuse Me Podcast with Bill Gates

(Image credit: YouTube | Unconfuse Me Podcast with Bill Gates)



What you need to know​


  • Sam Altman and Microsoft's Bill Gates recently touched base and talked about all things AI, from OpenAI's future to achieving superintelligence.
  • In the broad interview, Altman mentioned that the team behind ChatGPT is working toward incorporating video capabilities into the platform for the future.
  • The company is also working on making GPT-5 better than GPT-4 and promises more accuracy, better responses and reasoning, and more.


With Sam Altman back at the helm of OpenAI, more changes, improvements, and updates are on the way for the company's AI-powered chatbot, ChatGPT. Altman recently touched base with Microsoft's Bill Gates over at his Unconfuse Me podcast and talked all things OpenAI, including the development of GPT-5, superintelligence, the company's future, and more.

The 33-minute-long interview kicks off with Bill Gates highlighting his surprise at how much ChatGPT has grown and gained popularity over the years, further citing that the chatbot has gotten significantly better with every refinement made by the company in terms of features and updates.

Gates also indicates that people are just beginning to familiarize themselves with generative AI, and are discovering how much can be achieved through the technology.

Altman admitted that the team behind the popular chatbot has yet to explore its full potential, as they too are trying to figure out what works and what doesn't. In the same breath, he highlighted that the team has made significant headway in some areas, which can be attributed to the success and breakthroughs made since ChatGPT's inception.



A future version of ChatGPT will ship with video capabilities​


(Image credit: Daniel Rubino)

Sam Altman shared with Gates that image generation and analysis, coupled with the voice mode feature, have been major hits with ChatGPT users. He added that users have continuously requested video capabilities on the platform, and it's something the team is currently looking at. This would likely be huge for ChatGPT, given the positive reception the image and audio capabilities received when they shipped.

However, the CEO indicated that the main area of focus for the team at the moment is reasoning capabilities. Altman pointed out that OpenAI's GPT-4 model can only reason in "extremely limited ways." He also said the company is working toward boosting the chatbot's reliability, to ensure that it gives accurate answers to queries. There has been an increase in the number of reports claiming that the chatbot has seemingly gotten dumber, which has negatively impacted its user base.



GPT-5 promises better accuracy and multimodality​



OpenAI has been able to achieve incredible feats with its GPT-4 model, but it has also encountered setbacks in equal measure. Altman further shared the following sentiments:

"Right now, GPT-4 can reason in only extremely limited ways. Also, reliability. If you ask GPT-4 most questions 10,000 times, one of those 10,000 is probably pretty good, but it doesn’t always know which one, and you’d like to get the best response of 10,000 each time, and so that increase in reliability will be important. Customizability and personalization will also be very important. People want very different things out of GPT-4: different styles, different sets of assumptions. We’ll make all that possible, and then also the ability to have it use your own data. The ability to know about you, your email, your calendar, how you like appointments booked, connected to other outside data sources, all of that. Those will be some of the most important areas of improvement."
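Altman's "best response of 10,000" point maps onto what practitioners often call best-of-n sampling: draw several candidate answers and keep the one a scoring function prefers. The sketch below illustrates the idea only; the toy scorer and the OpenAI SDK call are assumptions for demonstration, not anything OpenAI has described about GPT-5.

```python
# Illustrative best-of-n sampling; the scorer is a toy placeholder.
from openai import OpenAI

client = OpenAI()

def generate_candidates(question: str, n: int = 5) -> list[str]:
    """Draw n independent samples for the same question."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",          # assumed model name
        n=n,
        temperature=1.0,
        messages=[{"role": "user", "content": question}],
    )
    return [choice.message.content for choice in response.choices]

def score(answer: str) -> float:
    """Toy stand-in for a real verifier or judge model."""
    return -abs(len(answer) - 400)    # e.g. prefer answers near a target length

def best_of_n(question: str, n: int = 5) -> str:
    """Keep the candidate the scoring function likes most."""
    candidates = generate_candidates(question, n)
    return max(candidates, key=score)

print(best_of_n("Explain why the sky is blue in two paragraphs."))
```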

While Altman didn't disclose a lot of details in regard to OpenAI's upcoming GPT-5 model, it's apparent that the company is working toward building further upon the model and improving its capabilities. As earlier mentioned, there's a likelihood that ChatGPT will ship with video capabilities coupled with enhanced image analysis capabilities.

FUN FACT: Sam Altman claims he uses Slack more than OpenAI's ChatGPT, due to the intense coordination required to run the company, which has approximately 500 employees.

The CEO also indicated that future versions of OpenAI's GPT model could potentially be able to access the user's data via email, calendar, and booked appointments. The goal behind this is to provide a more personalized experience. But as it is, users are already reluctant to leverage AI capabilities because of the unstable nature of the technology and lack of guardrails to control its use.



US government's impending regulatory cap on AI​


(Image credit: Windows Central)

Generative AI could potentially lead to amazing discoveries that allow people to tap into unexplored opportunities. However, all of this comes at a rather steep cost. We already know OpenAI spends up to $700,000 per day to keep ChatGPT running, on top of the technology's exorbitant water consumption, reportedly around one bottle of water per query for cooling.

There are also great concerns around AI safety and privacy among users, though Biden's administration issued an Executive Order addressing some of these issues. The US government imposed export rules to prevent chipmakers like NVIDIA from shipping GPUs to China over military concerns, stating that the move is meant to establish control over the technology, not to run down China's economy.

The US government might tighten its grip and impose more rules to establish further control over the use of the technology amid its long-standing battle with China over supremacy in the tech landscape. Microsoft is already debating what to do with its Beijing-based AI research lab, as the rivalry continues to brew more trouble for both parties.



Sam Altman's long-term dream is to achieve superintelligence​

While OpenAI continues to make modifications and improvements to ChatGPT, Sam Altman hopes and dreams that he'll be able to achieve superintelligence. Superintelligence is essentially an AI system that surpasses the cognitive abilities of humans and is far more advanced in comparison to Microsoft Copilot and ChatGPT.

RELATED: Microsoft launches paid subscription for Copilot

OpenAI might already be well on its way to achieving this incredible feat after the company's staffers penned a letter to the board of directors highlighting a potential breakthrough in the space. The breakthrough could see the company achieve superintelligence within a decade or less if exploited well.

However, it's important to have elaborate measures and guardrails in place to ensure that the technology doesn't spiral out of control or fall into the wrong hands.
 

bnew


Amazon brings its AI-powered image generator to Fire TV​

Lauren Forristal @laurenforristal / 9:00 AM EST•January 17, 2024

Fire TV AI art

Image Credits: Amazon

Amazon is officially rolling out the ability to create AI-generated images on Fire TV devices. Starting today, the feature is available in the U.S. for users with a second-generation Fire TV Stick 4K Max and Fire TV Omni QLED Series.

Fire TV’s new feature is powered by Amazon’s Titan Image Generator, which the company announced during its AWS re:Invent 2023 conference in November. Like other image generators, such as Stability AI’s models and OpenAI’s DALL-E 3, the Titan model takes a written prompt and turns the text into an image. In this case, however, the Fire TV feature is triggered by speaking to Alexa through the TV remote, letting users craft images with their own voice.

For instance, users can say, “Alexa, create a background of a fairy landscape.” It then generates four images that users can further customize by choosing from various artistic styles like impressionistic, watercolor and fantasy. After picking a final image, users can save it and set it as their TV background.
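Fire TV’s version is voice-driven and closed to developers, but the underlying Titan Image Generator model is callable through AWS Bedrock. The sketch below shows roughly what a text-to-image request looks like with boto3; the model ID and request fields follow the Bedrock documentation as best I recall it, so verify them against the current AWS docs before relying on this.

```python
# Rough sketch of a Titan Image Generator call via AWS Bedrock; field names are assumptions
# to check against current AWS documentation.
import base64
import json
import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

payload = {
    "taskType": "TEXT_IMAGE",
    "textToImageParams": {"text": "a fairy landscape, watercolor style"},
    "imageGenerationConfig": {
        "numberOfImages": 4,   # Fire TV likewise offers four candidates to pick from
        "cfgScale": 8.0,       # width/height options exist too; defaults used here
    },
}

response = bedrock.invoke_model(
    modelId="amazon.titan-image-generator-v1",  # assumed model ID
    body=json.dumps(payload),
    contentType="application/json",
    accept="application/json",
)

result = json.loads(response["body"].read())
for i, image_b64 in enumerate(result.get("images", [])):
    with open(f"background_{i}.png", "wb") as f:
        f.write(base64.b64decode(image_b64))
```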

While the Titan model has the ability to customize existing images, Fire TV’s feature won’t have that capability at launch. Amazon wouldn’t give us an update on when users would be able to input personal photos into the generator.





Image Credits: Amazon

The image generator is part of the Fire TV Ambient Experience, which turns the TV into a smart display featuring backgrounds made by artists or personal photos from an Amazon Photos account. The company recently brought the experience to the Fire TV Stick 4K Max after it was already available on Amazon’s Omni Series lineup of Fire TVs.

Amazon revealed a handful of Fire TV-related updates at its 2023 devices event, such as improved voice search that lets users ask Alexa more specific questions, like finding titles based on cast, director and genre, or even describing a movie scene.

Today’s announcement also comes on the heels of Amazon introducing three new generative AI-powered Alexa experiences, including playing the “20 Questions” game, speaking with AI characters and creating custom songs.
 

bnew


OpenAI announces team to build ‘crowdsourced’ governance ideas into its models​

Kyle Wiggers @kyle_l_wiggers / 9:51 AM EST•January 16, 2024

pattern of openAI logo

Image Credits: Bryce Durbin / TechCrunch

OpenAI says it wants to implement ideas from the public about how to ensure its future AI models “align to the values of humanity.”

To that end, the AI startup is forming a new Collective Alignment team of researchers and engineers to create a system for collecting and “encoding” public input on its models’ behaviors into OpenAI products and services, the company announced today.

“We’ll continue to work with external advisors and grant teams, including running pilots to incorporate … prototypes into steering our models,” OpenAI writes in a blog post. “We’re recruiting … research engineers from diverse technical backgrounds to help build this work with us.”

The Collective Alignment team is an outgrowth of OpenAI’s public program, launched last May, to award grants to fund experiments in setting up a “democratic process” for deciding what rules AI systems should follow. The goal of the program, OpenAI said at its debut, was to fund individuals, teams and organizations to develop proof-of-concepts that could answer questions about guardrails and governance for AI.

In its blog post today, OpenAI recapped the work of the grant recipients, which ran the gamut from video chat interfaces to platforms for crowdsourced audits of AI models and “approaches to map beliefs to dimensions that can be used to fine-tune model behavior.” All of the code used in the grantees’ work was made public this morning, along with brief summaries of each proposal and high-level takeaways.

OpenAI has attempted to cast the program as divorced from its commercial interests. But that’s a bit of a tough pill to swallow, given OpenAI CEO Sam Altman’s criticisms of regulation in the EU and elsewhere. Altman, along with OpenAI president Greg Brockman and chief scientist Ilya Sutskever, have repeatedly argued that the pace of innovation in AI is so fast that we can’t expect existing authorities to adequately rein in the tech — hence the need to crowdsource the work.

Some OpenAI rivals, including Meta, have accused OpenAI (among others) of trying to secure “regulatory capture of the AI industry” by lobbying against open AI R&D. OpenAI unsurprisingly denies this — and would likely point to the grant program (and Collective Alignment team) as an example of its “openness.”

OpenAI is under increasing scrutiny from policymakers in any case, facing a probe in the U.K. over its relationship with close partner and investor Microsoft. The startup recently sought to shrink its regulatory risk in the EU around data privacy, leveraging a Dublin-based subsidiary to reduce the ability of certain privacy watchdogs in the bloc to unilaterally act on concerns.

Yesterday — partly to allay regulators, no doubt — OpenAI announced that it’s working with organizations to attempt to limit the ways in which its technology could be used to sway or influence elections through malicious means. The startup’s efforts include making it more obvious when images are AI-generated using its tools and developing approaches to identify generated content even after images have been modified.
 

bnew


Singapore’s Locofy launches its one-click design-to-code tool​

Catherine Shu @catherineshu / 11:00 PM EST•January 15, 2024


Image Credits: Luis Alvarez (opens in a new window)/ Getty Images

After using Figma to create user interfaces and experiences, developers are left with the hefty task of coding the designs in order to create functional websites or apps. Locofy, a Singapore-based front-end development platform backed by Accel, wants to save hours of work with a one-click tool that instantly turns Figma and AdobeXD prototypes into code.

Locofy’s new tool is called Lightning and it’s built on top of the startup’s Large Design Models (LDMs). Locofy’s founders, Honey Mittal and Sohaib Muhammed, compare it to how OpenAI pioneered LLMs before ChatGPT introduced them to the rest of the world. They saw a need for a tool like Lightning because of developer shortages that result in lost revenue for companies and burnt out coders overwhelmed by their workloads.

Lightning works as a Figma plug-in, and Locofy’s founders say it automates close to 80% of front-end development, so developers at lean startups can focus on running their startups and going to market instead.

The tool will be launched for Figma first, for websites and web apps. Then later this year, it will be available for more design tools, including AdobeXD, Penpot, Sketch, Wix and possibly Canva and Notion.

Mittal says the company invested more than $1 million to develop Lightning, with the goal of reaching startups and customer-focused enterprises with small teams that need to accelerate their front-end development. Lightning and its LDMs were built in-house and trained on a dataset including millions of designs.

The company started with Locofy Classic in 2021, which required users to go through several steps: design optimizations; tagging of interactive elements; styling to make designs work on different screen sizes; components and props to identify repeating elements and make them modular; allowing class name edits; and adapting to preferred configurations like TypeScript or JavaScript.

Mittal and Muhammed learned how each step could be automated with a combination of techniques, including image-based neural networks such as multimodal transformers, graph-based neural networks, sequence-to-sequence models, stack-pointer networks, heuristic models and LLMs. They used those to build a Unified Large Design Model with close to half a billion parameters, trained on millions of designs, they say.

Locofy Lightning’s steps, including tagging, layer grouping, responsiveness, components and class names, each run their own combination of AI-based techniques, which are then fine-tuned with heuristics. Those steps are then condensed into a single one, which is what makes Lightning one-click.
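Locofy hasn’t published its internals, but “many specialized steps composed behind one click” is a familiar pipeline pattern. The sketch below is a hypothetical illustration of that composition; the step names mirror the article, and every function body is a placeholder rather than Locofy’s code.

```python
# Hypothetical pipeline sketch of "many model-driven steps behind one click".
from typing import Callable

Design = dict  # stand-in for a parsed Figma document

def tag_interactive_elements(design: Design) -> Design:
    design.setdefault("tags", ["button", "input"])        # e.g. image/graph models
    return design

def group_layers(design: Design) -> Design:
    design.setdefault("groups", [["navbar", "logo"]])     # e.g. graph neural network
    return design

def apply_responsiveness(design: Design) -> Design:
    design.setdefault("breakpoints", ["sm", "md", "lg"])  # heuristics plus models
    return design

def extract_components(design: Design) -> Design:
    design.setdefault("components", ["Card", "Navbar"])   # repeated-element detection
    return design

def name_classes(design: Design) -> Design:
    design.setdefault("class_names", {"Card": "product-card"})  # seq-to-seq / LLM step
    return design

PIPELINE: list[Callable[[Design], Design]] = [
    tag_interactive_elements,
    group_layers,
    apply_responsiveness,
    extract_components,
    name_classes,
]

def generate_code(design: Design) -> str:
    """The 'one click': run every step, then emit front-end code from the enriched design."""
    for step in PIPELINE:
        design = step(design)
    return (f"// generated from {len(design['components'])} components across "
            f"{len(design['breakpoints'])} breakpoints")

print(generate_code({"name": "landing-page"}))
```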

Once front-end code is generated, users can review it, along with an interactive preview, and fine-tune code before it is exported.

Founded in 2021, Locofy has raised $7.5 million from investors, including Accel and Northstar Ventures.

In the future, it plans to expand its platform beyond design-to-code by including tools that build design systems, use public UI libraries and build back ends to the front ends with integrations such as GitHub Copilot and CI-CD. It also plans to include an AI assistant for designers and hosting and deployments to host full apps.

Locofy has been in free beta for two years, with plans to monetize in 2024. Its founders told TechCrunch that AI-code generation is a new category, and business models will be different from other SaaS and developer tools. Locofy is still finalizing its prices, but they will be based on things like the number of screens or components that get converted to code and are maintained on a regular basis with AI.
 
