bnew

Veteran
Joined
Nov 1, 2015
Messages
51,805
Reputation
7,926
Daps
148,703


bnew

Veteran
Joined
Nov 1, 2015
Messages
51,805
Reputation
7,926
Daps
148,703

Apple Explores A.I. Deals With News Publishers

The company has discussed multiyear deals worth at least $50 million to train its generative A.I. systems on publishers’ news articles.



The negotiations mark one of the earliest examples of how Apple is trying to catch up to rivals in the race to develop generative A.I.Credit...Karsten Moran for The New York Times

By Benjamin Mullin and Tripp Mickle

Benjamin Mullin covers the companies behind news and entertainment from New York. Tripp Mickle covers Apple from San Francisco.

Dec. 22, 2023

Apple has opened negotiations in recent weeks with major news and publishing organizations, seeking permission to use their material in the company’s development of generative artificial intelligence systems, according to four people familiar with the discussions.

The technology giant has floated multiyear deals worth at least $50 million to license the archives of news articles, said the people with knowledge of talks, who spoke on the condition of anonymity to discuss sensitive negotiations. The news organizations contacted by Apple include Condé Nast, publisher of Vogue and The New Yorker; NBC News; and IAC, which owns People, The Daily Beast and Better Homes and Gardens.

The negotiations mark one of the earliest examples of how Apple is trying to catch up to rivals in the race to develop generative A.I., which allows computers to create images and chat like a human. The technology, which artificial intelligence experts refer to as neural networks, is built by using troves of photos or digital text to recognize patterns. By analyzing thousands of cat photos, for instance, a computer can learn to recognize a cat.

Microsoft, OpenAI, Google, Meta and other companies have released chatbots and other products built with the technology. The tools could change the way people work and generate billions of dollars in sales.

But Apple has been absent from the public discussion of A.I. Its virtual assistant, Siri, has remained largely stagnant in the decade since its release.

A spokeswoman for Apple declined to comment. During a call with analysts last month, Tim Cook, the company’s chief executive, said Apple has work “going on” connected to A.I. but declined to elaborate.

Some of the publishers contacted by Apple were lukewarm on the overture. After years of on-again-off-again commercial deals with tech companies like Meta, the owner of Facebook, publishers have grown wary of jumping into business with Silicon Valley.

Several publishing executives were concerned that Apple’s terms were too expansive, according to three people familiar with the negotiations. The initial pitch covered broad licensing of publishers’ archives of published content, with publishers potentially on the hook for any legal liabilities that could stem from Apple’s use of their content.

Apple was also vague about how it intended to apply generative A.I. to the news industry, the people said, a potential competitive risk given Apple’s substantial audience for news on its devices.

Still, some news executives were optimistic that Apple’s approach might eventually lead to a meaningful partnership. Two people familiar with the discussions struck a positive note on the long-term prospects of a deal, contrasting Apple’s approach of asking for permission with the behavior of other artificial intelligence companies, which have been accused of seeking licensing deals with news organizations only after they had already used publishers’ content to train their generative models.

In recent years, Apple executives have been debating how to accumulate the data needed to build generative A.I. products, according to two people familiar with the work. Some of its rivals have been accused of taking written material from across the internet without the permission of the artists, writers and coders who created it, leading to several copyright lawsuits.

Apple has been reluctant to take information from the internet, partly because of its commitment to privacy. After it acquired the social analytics start-up Topsy in 2013, Apple’s leadership asked that Topsy stop collecting information from Twitter, saying that doing so violated the company’s policy against collecting data on Apple customers, who might also post on the social media site, these two people said.

The explosion of artificial intelligence has raised alarms among news executives, many of whom are concerned that generative A.I. products like OpenAI’s ChatGPT could draw away readers who would otherwise consume news on the publishers’ own platforms, which depend on subscribers and advertisers.

Print news organizations, which decades ago saw their lucrative classifieds business demolished by online competitors, have been particularly wary about striking deals with A.I. organizations, engaging cautiously with an eye toward preserving their existing businesses.

In a statement, an OpenAI spokesman said that the company respects “the rights of content creators and owners and believes they should benefit from A.I. technology,” citing its recent deals with the American Journalism Project and the German publisher Axel Springer.

“We’re optimistic we will continue to find mutually beneficial ways to work together in support of a rich news ecosystem,” the OpenAI spokesman said.
 


TomTom and Microsoft team up to bring generative AI to automobiles​

Get ready for a ‘fully integrated’ conversational driving assistant.​


Lawrence Bonk

Contributing Reporter

Updated Tue, Dec 19, 2023, 1:30 AM EST·1 min read



TomTom just announced a “fully integrated, AI-powered conversational automotive assistant” which should start popping up in dashboard infotainment platforms in the near-ish future. The company has issued some bold claims for the AI, saying it’ll offer “more sophisticated voice interaction” and allow users to converse naturally to navigate, find stops along a route, control onboard systems, open windows and just about anything else you find yourself doing while driving.

The company, best known for GPS platforms, partnered up with Microsoft to develop this AI assistant. The technology leverages OpenAI’s large language models, in addition to Microsoft products like Azure Cosmos DB and Azure Cognitive Services. Cosmos DB is a multi-model database and Cognitive Services is a set of APIs for use in AI applications, so this should be a capable assistant that draws from the latest advancements.

TomTom promises that the voice assistant will integrate into a variety of interfaces offered by major automobile manufacturers, stating that the auto company will retain ownership of its branding. So this could start showing up in cars from a wide variety of makers. The company hasn’t announced any definitive partnerships with known vehicle manufacturers, but the technology will be integrated into TomTom’s proprietary Digital Cockpit, an open and modular in-vehicle infotainment platform.

This isn’t the first time a company has tried to stuff an LLM inside of a car. Back in June, Mercedes announced a three-month beta program that incorporated ChatGPT models into select vehicles. This tool also leveraged Microsoft’s Azure OpenAI service. TomTom is showing off the AI at CES in January, so we’ll know more about how it actually works at that point.
 



Basement AGI is very likely. Barriers to entry are very low (for those with the right skills/knowledge/insights). Most of the assumptions held by the industry today will be obsolete in just a few months. Large compute, big data, scale... these are what people like Roon assume will be required to build AGI. Don't listen to him. With just a few key insights you can disrupt every single company out there that has invested billions on hardware and on $100M runs just because they thought that this is the way. There are NO STANDARD OPERATING PROCEDURES. His argument that 'open source is not innovating' is ridiculous. Lots of these labs are building on the shoulders of giants and on insights developed in the open by academics and individuals. This is a very arrogant take by Roon, one that will age like milk.

Also @tszzl, not everything is being built in public. With @skunkworks_ai we have been running a decentralized, compartmentalized, build-in-private movement over the last few months to uncover as many insights as we can in the AGI tech tree. Why private? Because labs like yours hunt and brain-drain labs like ours. Open source is the most powerful engine of innovation and market of ideas in the world. No closed lab can compete with the global grassroots Manhattan project, especially when coordinated like how we do @skunkworks_ai. And no, we don't focus on incremental innovations... We will continue to scale @skunkworks_ai with many updates coming soon. If you think you have a unique insight you want to work on, don't get discouraged by these people, go forth and build! Feel free to DM me if you want to get some feedback, or start your very own skunkworks project where we'll allocate all the resources you need to pursue your project.
 





Ferret: Refer and Ground Anything Anywhere at Any Granularity​

An End-to-End MLLM that Accepts Any-Form Referring and Grounds Anything in Response. [Paper]

Haoxuan You*, Haotian Zhang*, Zhe Gan, Xianzhi Du, Bowen Zhang, Zirui Wang, Liangliang Cao, Shih-Fu Chang, Yinfei Yang [*: equal contribution]

Overview​


Diagram of Ferret Model.​

Key Contributions:

  • Ferret Model - Hybrid Region Representation + Spatial-aware Visual Sampler enable fine-grained and open-vocabulary referring and grounding in MLLM.
  • GRIT Dataset (~1.1M) - A Large-scale, Hierarchical, Robust ground-and-refer instruction tuning dataset.
  • Ferret-Bench - A multimodal evaluation benchmark that jointly requires Referring/Grounding, Semantics, Knowledge, and Reasoning.
 


Computer Science > Machine Learning​

[Submitted on 21 Dec 2023]

The Truth is in There: Improving Reasoning in Language Models with Layer-Selective Rank Reduction​

Pratyusha Sharma, Jordan T. Ash, Dipendra Misra
Transformer-based Large Language Models (LLMs) have become a fixture in modern machine learning. Correspondingly, significant resources are allocated towards research that aims to further advance this technology, typically resulting in models of increasing size that are trained on increasing amounts of data. This work, however, demonstrates the surprising result that it is often possible to significantly improve the performance of LLMs by selectively removing higher-order components of their weight matrices. This simple intervention, which we call LAyer-SElective Rank reduction (LASER), can be done on a model after training has completed, and requires no additional parameters or data. We show extensive experiments demonstrating the generality of this finding across language models and datasets, and provide in-depth analyses offering insights into both when LASER is effective and the mechanism by which it operates.
Subjects: Machine Learning (cs.LG); Artificial Intelligence (cs.AI); Computation and Language (cs.CL); Computer Vision and Pattern Recognition (cs.CV)
Cite as: arXiv:2312.13558 [cs.LG] (or arXiv:2312.13558v1 [cs.LG] for this version)

Submission history

From: Dipendra Misra [view email]
[v1] Thu, 21 Dec 2023 03:51:08 UTC (3,576 KB)
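Removing the higher-order components of a weight matrix amounts to a truncated SVD. Below is a minimal numpy sketch of that core operation (my own illustration; the paper's full LASER procedure additionally searches over which layer, which matrix, and how aggressively to reduce):

```python
import numpy as np

def laser_truncate(W, keep_frac=0.1):
    # Keep only the top singular components of a weight matrix,
    # discarding the higher-order (small singular value) components.
    U, S, Vt = np.linalg.svd(W, full_matrices=False)
    k = max(1, int(keep_frac * len(S)))
    return U[:, :k] @ np.diag(S[:k]) @ Vt[:k, :]

rng = np.random.default_rng(0)
W = rng.standard_normal((64, 64))     # stand-in for one layer's weight matrix
W_low = laser_truncate(W, keep_frac=0.1)
print(np.linalg.matrix_rank(W_low))   # 6
```

The striking finding is that, for the right layer, replacing `W` with `W_low` can *improve* downstream reasoning accuracy, despite the reduced matrix carrying strictly less information.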


 


Unified diffs make GPT-4 Turbo less lazy

aider is AI pair programming in your terminal



Aider now asks GPT-4 Turbo to use unified diffs to edit your code. This dramatically improves GPT-4 Turbo’s performance on a challenging new benchmark and significantly reduces its bad habit of “lazy” coding, where it writes code with comments like “…add logic here…”.

Aider’s new “laziness” benchmark suite is designed to both provoke and quantify lazy coding. It consists of 89 python refactoring tasks which tend to make GPT-4 Turbo write lazy comments like “…include original method body…”.

This new laziness benchmark produced the following results with gpt-4-1106-preview:

  • GPT-4 Turbo only scored 20% as a baseline using aider’s existing “SEARCH/REPLACE block” edit format. It outputs “lazy comments” on 12 of the tasks.
  • Aider’s new unified diff edit format raised the score to 61%. Using this format reduced laziness by 3X, with GPT-4 Turbo only using lazy comments on 4 of the tasks.
  • It’s worse to add a prompt that says the user is blind, has no hands, will tip $2000 and fears truncated code trauma. Widely circulated “emotional appeal” folk remedies produced worse benchmark scores for both the baseline SEARCH/REPLACE and new unified diff editing formats.
The older gpt-4-0613 also did better on the laziness benchmark using unified diffs:

  • The June GPT-4’s baseline was 26% using aider’s existing “SEARCH/REPLACE block” edit format.
  • Aider’s new unified diff edit format raised June GPT-4’s score to 59%.
  • The benchmark was designed to use large files, and 28% of them are too large to fit in June GPT-4’s 8k context window. This puts a hard ceiling of 72% on how well the June model could possibly score.
With unified diffs, GPT acts more like it’s writing textual data intended to be read by a program, not talking to a person. Diffs are usually consumed by the patch program, which is fairly rigid. This seems to encourage rigor, making GPT less likely to leave informal editing instructions in comments or be lazy about writing all the needed code.

Aider’s new unified diff editing format outperforms other solutions I evaluated by a wide margin. I explored many other approaches including: prompts about being tireless and diligent, OpenAI’s function/tool calling capabilities, numerous variations on aider’s existing editing formats, line number based formats and other diff-like formats. The results shared here reflect an extensive investigation and benchmark evaluations of many approaches.

The rest of this article will describe aider’s new editing format and refactoring benchmark. It will highlight some key design decisions, and evaluate their significance using ablation experiments.

Unified diff editing format

The design and implementation of aider’s new unified diff editing format helped clarify some general principles for GPT-4 code editing:

  • FAMILIAR - Choose an edit format that GPT is already familiar with.
  • SIMPLE - Choose a simple format that avoids escaping, syntactic overhead and brittle specifiers like line numbers or line counts.
  • HIGH LEVEL - Encourage GPT to structure edits as new versions of substantive code blocks (functions, methods, etc), not as a series of surgical/minimal changes to individual lines of code.
  • FLEXIBLE - Strive to be maximally flexible when interpreting GPT’s edit instructions.
A helpful shortcut here is to have empathy for GPT, and imagine you are the one being asked to specify code edits. Would you want to hand-type a properly escaped JSON data structure to invoke surgical insert, delete and replace operations on specific code line numbers? Would you want to use a brittle format, where any mistake causes an error and all your work to be discarded?

GPT is quantitatively better at code editing when you reduce the burden of formatting edits by using a familiar, simple, high level and flexible editing format.

Choose a familiar editing format

Unified diffs are perhaps the most common way to show code edits, because it’s the default output format of git diff:

Code:
--- a/greeting.py
+++ b/greeting.py
@@ -1,5 +1,5 @@
 def main(args):
     # show a greeting
-    print("Hello!")
+    print("Goodbye!")
 return


Choosing such a popular format means that GPT has seen many examples in its training data. It’s been trained to generate text that conforms to the unified diff syntax.

Use a simple editing format

Aider’s previous benchmark results made it clear that simple editing formats work best. Even though OpenAI provides extensive support for structured formats like json and function calls, GPT is worse at editing code if you use them. I repeated these and other similar benchmarks against GPT-4 Turbo, and again reached these same conclusions.

Informally, this is probably because stuffing source code into JSON is complicated and error prone. Wrapping the python code print("On Windows use \"C:\\\"") as valid json is pretty painful and error prone. Due to escaping issues GPT’s code is often syntactically incorrect when it’s unpacked from JSON, or the JSON decode just fails entirely.
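The escaping pain is easy to demonstrate (a standalone illustration, not aider code): round-tripping that one-line print statement through JSON doubles every backslash and escapes every quote, which is exactly the kind of output GPT tends to get wrong.

```python
import json

# The Windows-path snippet from the text, exactly as it appears in a .py file:
code = r'print("On Windows use \"C:\\\"")'

wrapped = json.dumps(code)  # what a model must emit to ship this line inside JSON
print(wrapped)              # every quote and backslash gets escaped again
assert json.loads(wrapped) == code
```

One mangled backslash anywhere in `wrapped` and the decode fails or the unpacked code is syntactically invalid.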

On the other hand, the core of the unified diff format is very simple. You include a hunk of the file that needs to be changed, with every line prefixed by a character to indicate unchanged, new or deleted lines. A unified diff looks pretty much like the code it is modifying.

The one complicated piece is the line numbers found at the start of each hunk. They look something like this: @@ -2,4 +3,5 @@. GPT is terrible at working with source code line numbers. This is a general observation about any use of line numbers in editing formats, backed up by many quantitative benchmark experiments.

You’ve probably ignored the line numbers in every diff you’ve seen, because the diffs usually still make sense without them. Aider tells GPT not to include line numbers, and just interprets each hunk from the unified diffs as a search and replace operation:

This diff:


Code:
@@ ... @@
 def main(args):
     # show a greeting
-    print("Hello!")
+    print("Goodbye!")
 return
Means we need to search the file for the space and minus - lines:

Code:
def main(args):
    # show a greeting
    print("Hello!")
return

And replace them with the space and plus + lines:

Code:
def main(args):
    # show a greeting
    print("Goodbye!")
return

Simple, right?
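The hunk-as-search/replace interpretation described above can be sketched in a few lines of Python (my own minimal sketch, not aider's actual parser, which is much more flexible about imperfect hunks):

```python
def apply_hunk(text, hunk):
    """Apply a line-number-free unified diff hunk as a search/replace."""
    search, replace = [], []
    for line in hunk.splitlines():
        if line.startswith(("@@", "---", "+++")):
            continue  # headers carry no content once line numbers are ignored
        tag, body = line[:1], line[1:]
        if tag in (" ", "-"):
            search.append(body)   # context and deleted lines form the search text
        if tag in (" ", "+"):
            replace.append(body)  # context and added lines form the replacement
    return text.replace("\n".join(search), "\n".join(replace))

src = 'def main(args):\n    # show a greeting\n    print("Hello!")\nreturn\n'
hunk = ('@@ ... @@\n'
        ' def main(args):\n'
        '     # show a greeting\n'
        '-    print("Hello!")\n'
        '+    print("Goodbye!")\n'
        ' return')
print(apply_hunk(src, hunk))  # the greeting now prints "Goodbye!"
```

A real implementation also has to cope with hunks whose context doesn't match the file exactly, which is where most of aider's flexibility lives.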


CONTINUE ON SITE...
 



LLMLingua: Effectively Deliver Information to LLMs via Prompt Compression​




TL;DR

LLMLingua utilizes a compact, well-trained language model (e.g., GPT2-small, LLaMA-7B) to identify and remove non-essential tokens in prompts. This approach enables efficient inference with large language models (LLMs), achieving up to 20x compression with minimal performance loss.
LongLLMLingua mitigates the 'lost in the middle' issue in LLMs, enhancing long-context information processing. It reduces costs and boosts efficiency with prompt compression, improving RAG performance by up to 21.4% using only 1/4 of the tokens.
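The core idea — score each token's information content with a compact model and drop the redundant ones — can be shown with a toy sketch. This is my own illustration, using word frequency as a crude stand-in for a small LM's token-level surprisal; it is not the library's actual API or algorithm:

```python
from collections import Counter

def toy_compress(prompt, keep_frac=0.5):
    # Toy illustration of LLMLingua-style pruning: rank tokens by an
    # information proxy (here, inverse frequency within the prompt) and
    # keep only the most informative fraction, preserving original order.
    words = prompt.split()
    freq = Counter(words)
    ranked = sorted(range(len(words)), key=lambda i: (freq[words[i]], i))
    keep = set(ranked[: max(1, round(keep_frac * len(words)))])
    return " ".join(w for i, w in enumerate(words) if i in keep)

prompt = "the cat sat on the mat and the dog sat on the log"
print(toy_compress(prompt, keep_frac=0.4))  # → 'cat mat and dog log'
```

The real LLMLingua replaces the frequency proxy with perplexity from a trained small model (e.g., LLaMA-7B) and adds budget allocation across instruction, demonstrations and question, which is how it reaches up to 20x compression with little loss.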

🎥 Overview

Background

  • Ever encountered the token limit when asking ChatGPT to summarize lengthy texts?
  • Frustrated with ChatGPT forgetting previous instructions after extensive fine-tuning?
  • Experienced high costs using GPT3.5/4 API for experiments despite excellent results?
While Large Language Models like ChatGPT and GPT-4 excel in generalization and reasoning, they often face challenges like prompt length limits and prompt-based pricing schemes.
Motivation for LLMLingua
Now you can use LLMLingua & LongLLMLingua!
These tools offer an efficient solution to compress prompts by up to 20x, enhancing the utility of LLMs.

  • 💰 Cost Savings: Reduces both prompt and generation lengths.
  • 📝 Extended Context Support: Enhances support for longer contexts, mitigates the "lost in the middle" issue, and boosts overall performance.
  • ⚖️ Robustness: No additional training needed for LLMs.
  • 🕵️ Knowledge Retention: Maintains original prompt information like ICL and reasoning.
  • 📜 KV-Cache Compression: Accelerates inference process.
  • 🪃 Comprehensive Recovery: GPT-4 can recover all key information from compressed prompts.
Figures: Framework of LLMLingua · Framework of LongLLMLingua · Demo of LLMLingua
 