bnew

Veteran
Joined
Nov 1, 2015
Messages
51,805
Reputation
7,926
Daps
148,708

Apple quietly released an open source multimodal LLM in October​

Sharon Goldman @sharongoldman

December 23, 2023 6:44 AM

Image created by DALL-E 3 for VentureBeat

With little fanfare, researchers from Apple and Columbia University released an open source multimodal LLM, called Ferret, in October 2023. At the time, the release — which included the code and weights, but for research use only, not a commercial license — did not receive much attention. But now that may be changing: With open source models from Mistral making recent headlines and Google’s Gemini model coming to the Pixel 8 Pro and eventually to Android, there has been increased chatter about the potential for local LLMs to power small devices.

That chatter increased recently because Apple announced it had made a key breakthrough in deploying LLMs on iPhones: The company released two new research papers introducing new techniques for 3D avatars and efficient language model inference. The advancements were hailed as potentially enabling more immersive visual experiences and allowing complex AI systems to run on consumer devices such as the iPhone and iPad.

Many in the AI community who belatedly noticed the Ferret release celebrated Apple’s unexpected entry into the open source LLM landscape, especially since Apple has traditionally been known as a “walled garden.”

This morning, Bart de Witte, who runs a European non-profit focused on open source AI in medicine, posted on X: “I somehow missed this,” he wrote. “Apple joined the open source AI community in October. Ferret’s introduction is a testament to Apple’s commitment to impactful AI research, solidifying its place as a leader in the multimodal AI space…ps: I’m looking forward to the day when Local Large Language Models (LLLMs) run on my iPhone as an integrated service of a re-designed iOS.”
 

bnew

Veteran
Joined
Nov 1, 2015
Messages
51,805
Reputation
7,926
Daps
148,708





Apple's 'Ferret' is a new open-source machine learning model​



Malcolm Owen
| Dec 24, 2023

A ferret in the wild [Pixabay/Michael Sehlmeyer]



Researchers from Apple and Columbia University quietly released an open-source multimodal LLM in October, a research project called "Ferret" that can use regions of images for queries.

The October introduction on GitHub largely flew under the radar, with no announcement or fanfare. The code for Ferret was released alongside Ferret-Bench on October 30, with checkpoint releases following on December 14.

While it didn't receive much attention at first, the release became more of a big deal to AI researchers on Saturday, reports VentureBeat. Bart De Witte, operator of an AI-in-medicine non-profit, posted to X about the "missed" release, calling it a "testament to Apple's commitment to impactful AI research."

Ferret is being released as open source under a non-commercial license, so it cannot be commercialized in its current state. However, there is always the possibility of it being used in a future Apple product or service in some way.

A tweet from October by Apple AI/ML research scientist Zhe Gan describes Ferret as a system that can "refer and ground anything anywhere at any granularity" in an image, and it can do so using any shape of region within an image.

In simpler terms, the model can examine a region drawn on an image, determine which elements within it are relevant to a user's query, identify them, and draw a bounding box around each detected element. It can then use those identified elements as part of a query, which it responds to in a typical fashion.

For example, if a user highlights an animal in an image and asks the LLM what the animal is, the model could determine the creature's species and that the user is referring to one individual animal from a group. It could then use the context of other items detected in the image to offer further responses.
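To make that referring-and-grounding flow concrete, here is a minimal sketch of how a region-conditioned query could be wired up. This is not Ferret's actual API; the Region, GroundedAnswer, and answer names are hypothetical placeholders standing in for whatever the released code exposes.

```python
# Hypothetical sketch of a region-grounded query; names and signatures are
# illustrative only and do not reflect Ferret's real code or API.
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Region:
    # A free-form region, approximated here by a bounding box in pixel coords.
    box: Tuple[int, int, int, int]  # (x_min, y_min, x_max, y_max)

@dataclass
class GroundedAnswer:
    text: str                                    # natural-language answer
    boxes: List[Tuple[int, int, int, int]]       # boxes for grounded entities

def ask_about_region(model, image_path: str, region: Region, question: str) -> GroundedAnswer:
    """Send an image, a user-drawn region, and a question to a referring/grounding model."""
    prompt = f"In the region {region.box}, {question}"
    return model.answer(image_path=image_path, prompt=prompt)

# Usage (assuming some multimodal model object with an `answer` method exists):
# result = ask_about_region(model, "ferrets.jpg", Region(box=(40, 60, 220, 310)),
#                           "what animal is this, and are there others like it?")
# print(result.text, result.boxes)
```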



The release is important to researchers, as it shows Apple is keen to be more open with its AI work, rather than its usual secretive stance.

There's also the problem of infrastructure for Apple: while it is working to increase the number of AI servers it owns, it may not currently have the scale to go toe-to-toe with ChatGPT, for example. Though Apple could work with other firms to scale its capabilities, the other route is to do what it has just done, namely release an open-source model.

In one interesting detail from the GitHub release, Reddit's r/Apple spotted that Ferret is "trained on 8 A100 GPUs with 80GB memory." Given Apple's history with Nvidia GPU support, this was seen as a rare acknowledgment of the GPU producer.
 

bnew

Veteran
Joined
Nov 1, 2015
Messages
51,805
Reputation
7,926
Daps
148,708
BY STEVEN LEVY

BACKCHANNEL

DEC 22, 2023 6:00 AM

How Not to Be Stupid About AI, With Yann LeCun


It’ll take over the world. It won’t subjugate humans. For Meta’s chief AI scientist, both things are true.

PHOTOGRAPH: ERIK TANNER


DO NOT PREACH doom to Yann LeCun. A pioneer of modern AI and Meta’s chief AI scientist, LeCun is one of the technology’s most vocal defenders. He scoffs at his peers’ dystopian scenarios of supercharged misinformation and even, eventually, human extinction. He’s known to fire off a vicious tweet (or whatever they’re called in the land of X) to call out the fearmongers. When his former collaborators Geoffrey Hinton and Yoshua Bengio put their names at the top of a statement calling AI a “societal-scale risk,” LeCun stayed away. Instead, he signed an open letter to US president Joe Biden urging an embrace of open source AI and declaring that it “should not be under the control of a select few corporate entities.”

LeCun’s views matter. Along with Hinton and Bengio, he helped create the deep learning approach that’s been critical to leveling up AI—work for which the trio later earned the Turing Award, computing’s highest honor. Meta scored a major coup when the company (then Facebook) recruited him to be founding director of the Facebook AI Research lab (FAIR) in 2013. He’s also a professor at NYU. More recently, he helped persuade CEO Mark Zuckerberg to share some of Meta’s AI technology with the world: This summer, the company launched an open source large language model called Llama 2, which competes with LLMs from OpenAI, Microsoft, and Google—the “select few corporate entities” implied in the letter to Biden. Critics warn that this open source strategy might allow bad actors to make changes to the code and remove guardrails that minimize racist garbage and other toxic output from LLMs; LeCun, AI’s most prominent Pangloss, thinks humanity can deal with it.

I sat down with LeCun in a conference room at Meta’s Midtown office in New York City this fall. We talked about open source, why he thinks AI danger is overhyped, and whether a computer could move the human heart the way a Charlie Parker sax solo can. (LeCun, who grew up just outside Paris, frequently haunts the jazz clubs of NYC.) We followed up with another conversation in December, while LeCun attended the influential annual NeurIPS conference in New Orleans—a conference where he is regarded as a god. The interview has been edited for length and clarity.

Steven Levy: In a recent talk, you said, “Machine learning sucks.” Why would an AI pioneer like you say that?

Yann LeCun: Machine learning is great. But the idea that somehow we're going to just scale up the techniques that we have and get to human-level AI? No. We're missing something big to get machines to learn efficiently, like humans and animals do. We don't know what it is yet.

I don't want to bash those systems or say they’re useless—I spent my career working on them. But we have to dampen the excitement some people have that we're just going to scale this up and pretty soon we’re gonna get human intelligence. Absolutely not.

You act as though it’s your duty to call this stuff out.

Yeah. AI will bring a lot of benefits to the world. But people are exploiting the fear about the technology, and we’re running the risk of scaring people away from it. That's a mistake we made with other technologies that revolutionized the world. Take the invention of the printing press in the 15th century. The Catholic Church hated it, right? People were going to be able to read the Bible themselves and not talk to the priest. Pretty much all the establishment was against the wide use of the printing press because it would change the power structure. They were right—it created 200 years of religious conflict. But it also brought about the Enlightenment. [Note: Historians might point out that the Church actually made use of the printing press for its own purposes, but whatever.]

Why are so many prominent people in tech sounding the alarm on AI?

Some people are seeking attention, other people are naive about what's really going on today. They don't realize that AI actually mitigates dangers like hate speech, misinformation, propagandist attempts to corrupt the electoral system. At Meta we’ve had enormous progress using AI for things like that. Five years ago, of all the hate speech that Facebook removed from the platform, about 20 to 25 percent was taken down preemptively by AI systems before anybody saw it. Last year, it was 95 percent.

How do you view chatbots? Are they powerful enough to displace human jobs?

They’re amazing. Big progress. They’re going to democratize creativity to some extent. They can produce very fluent text with very good style. But they’re boring, and what they come up with can be completely false.

“AI will bring a lot of benefits to the world. But we’re running the risk of scaring people away from it.”

The company you work for seems pretty hell bent on developing them and putting them into products.

There's a long-term future in which absolutely all of our interactions with the digital world—and, to some extent, with each other—will be mediated by AI systems. We have to experiment with things that are not powerful enough to do this right now, but are on the way to that. Like chatbots that you can talk to on WhatsApp. Or that help you in your daily life and help you create stuff, whether it's text or translation in real time, things like that. Or in the metaverse possibly.

How involved is Mark Zuckerberg in Meta’s AI push?

Mark is very much involved. I had a discussion with Mark early in the year and told him what I just told you, that there is a future in which all our interactions will be mediated by AI. ChatGPT showed us that AI could be useful for new products sooner than we anticipated. We saw that the public was much more captivated by the capabilities than we thought they would be. So Mark made the decision to create a product division focused on generative AI.

Why did Meta decide that Llama code would be shared with others, open source style?

When you have an open platform that a lot of people can contribute to, progress becomes faster. The systems you end up with are more secure and perform better. Imagine a future in which all of our interactions with the digital world are mediated by an AI system. You do not want that AI system to be controlled by a small number of companies on the West Coast of the US. Maybe the Americans won't care, maybe the American government won't care. But I tell you right now, in Europe, they won't like it. They say, “OK, well, this speaks English correctly. But what about French? What about German? What about Hungarian? Or Dutch or whatever? What did you train it on? How does that reflect our culture?”

Seems like a good way to get startups to use your product and kneecap your competitors.

We don’t need to kneecap anyone. This is the way the world is going to go. AI has to be open source because we need a common infrastructure when a platform is becoming an essential part of the fabric of communication.
 

bnew

Veteran
Joined
Nov 1, 2015
Messages
51,805
Reputation
7,926
Daps
148,708
{continued}


In the future, LeCun predicts, “absolutely all of our interactions with the digital world—and, to some extent, with each other—will be mediated by AI.”
PHOTOGRAPH: ERIK TANNER

One company that disagrees with that is OpenAI, which you don’t seem to be a fan of.

When they started, they imagined creating a nonprofit to do AI research as a counterweight to bad guys like Google and Meta who were dominating the industry research. I said that's just wrong. And in fact, I was proved correct. OpenAI is no longer open. Meta has always been open and still is. The second thing I said is that you'll have a hard time developing substantial AI research unless you have a way to fund it. Eventually, they had to create a for-profit arm and get investment from Microsoft. So now they are basically your contract research house for Microsoft, though they have some independence. And then there was a third thing, which was their belief that AGI [artificial general intelligence] is just around the corner, and they were going to be the one developing it before anyone. They just won't.

How do you view the drama at OpenAI, when Sam Altman was booted as CEO and then returned to report to a different board? Do you think it had an impact on the research community or the industry?

I think the research world doesn't care too much about OpenAI anymore, because they're not publishing and they're not revealing what they're doing. Some former colleagues and students of mine work at OpenAI; we felt bad for them because of the instabilities that took place there. Research really thrives on stability, and when you have dramatic events like this, it makes people hesitate. Also, the other aspect important for people in research is openness, and OpenAI really isn't open anymore. So OpenAI has changed in the sense that they are not seen much as a contributor to the research community. That is in the hands of open platforms.

The shuffle at OpenAI has been called kind of a victory for AI “accelerationism,” which is the opposite of doomerism. I know you’re not a doomer, but are you an accelerationist?

No, I don't like those labels. I don't belong to any of those schools of thought or, in some cases, cults. I'm extremely careful not to push ideas of this type to the extreme, because you too easily get into purity cycles that lead you to do stupid things.

The EU recently issued a set of AI regulations, and one thing they did was largely exempt open source models. What will be the impact of that on Meta and others?

It affects Meta to some extent, but we have enough muscle to be compliant with whatever regulation is there. It’s much more important for countries that don't have their own resources to build AI systems from scratch. They can rely on open source platforms to have AI systems that cater to their culture, their language, their interests. There's going to be a future, probably not so far away, where the vast majority, if not all, of our interactions with the digital world will be mediated by AI systems. You don't want those things to be under the control of a small number of companies in California.


Were you involved in helping the regulators reach that conclusion?

I was, but not directly with the regulators. I've been talking with various governments, particularly the French government, but indirectly to others as well. And basically, they got that message that you don't want the digital diet of your citizens to be controlled by a small number of people. The French government bought that message pretty early on. Unfortunately, I didn’t speak to people at the EU level, who were more influenced by prophecies of doom and wanted to regulate everything to prevent what they thought were possible catastrophe scenarios. But that was blocked by the French, German, and Italian governments, who said you have to make a special provision for open source platforms.

But isn’t an open source AI really difficult to control—and to regulate?

No. For products where safety is really important, regulations already exist. Like if you're going to use AI to design your new drug, there’s already regulation to make sure that this product is safe. I think that makes sense. The question that people are debating is whether it makes sense to regulate research and development of AI. And I don't think it does.

“You don't want the digital diet of your citizens to be controlled by a small number of people.”

Couldn’t someone take a sophisticated open source system that a big company releases, and use it to take over the world? With access to source code and weights, terrorists or scammers can give AI systems destructive drives.

They would need access to 2,000 GPUs somewhere that nobody can detect, enough money to fund it, and enough talent to actually do the job.

Some countries have a lot of access to those kinds of resources.

Actually, not even China does, because there’s an embargo.

I think they could eventually figure out how to make their own AI chips.

That's true. But it'd be some years behind the state of the art. It's the history of the world: Whenever technology progresses, you can't stop the bad guys from having access to it. Then it’s my good AI against your bad AI. The way to stay ahead is to progress faster. The way to progress faster is to open the research, so the larger community contributes to it.

How do you define AGI?

I don't like the term AGI because there is no such thing as general intelligence. Intelligence is not a linear thing that you can measure. Different types of intelligent entities have different sets of skills.


LeCun was recently awarded the Chevalier de la Légion d'honneur by the president of France.
PHOTOGRAPH: ERIK TANNER


Once we get computers to match human-level intelligence, they won’t stop there. With deep knowledge, machine-level mathematical abilities, and better algorithms, they’ll create superintelligence, right?

Yeah, there's no question that machines will eventually be smarter than humans. We don't know how long it's going to take—it could be years, it could be centuries.

At that point, do we have to batten down the hatches?

No, no. We’ll all have AI assistants, and it will be like working with a staff of super smart people. They just won't be people. Humans feel threatened by this, but I think we should feel excited. The thing that excites me the most is working with people who are smarter than me, because it amplifies your own abilities.

But if computers get superintelligent, why would they need us?

There is no reason to believe that just because AI systems are intelligent they will want to dominate us. People are mistaken when they imagine that AI systems will have the same motivations as humans. They just won’t. We'll design them not to.

What if humans don’t build in those drives, and superintelligence systems wind up hurting humans by single-mindedly pursuing a goal? Like philosopher Nick Bostrom’s example of a system designed to make paper clips no matter what, and it takes over the world to make more of them.

You would be extremely stupid to build a system and not build any guardrails. That would be like building a car with a 1,000-horsepower engine and no brakes. Putting drives into AI systems is the only way to make them controllable and safe. I call this objective-driven AI. This is sort of a new architecture, and we don't have any demonstration of it at the moment.

“There is no reason to believe that just because AI systems are intelligent they will want to dominate us.”

That’s what you’re working on now?

Yes. The idea is that the machine has objectives that it needs to satisfy, and it cannot produce anything that does not satisfy those objectives. Those objectives might include guardrails to prevent dangerous things or whatever. That's how you make an AI system safe.

Do you think you're going to live to regret the consequences of the AI you helped bring about?

If I thought that was the case, I would stop doing what I'm doing.

You're a big jazz fan. Could anything generated by AI match the elite, euphoric creativity that so far only humans can produce? Can it produce work that has soul?

The answer is complicated. Yes, in the sense that AI systems eventually will produce music—or visual art, or whatever—with a technical quality similar to what humans can do, perhaps superior. But an AI system doesn’t have the essence of improvised music, which relies on communication of mood and emotion from a human. At least not yet. That’s why jazz music is to be listened to live.


You didn’t answer me whether that music would have soul.

You already have music that's completely soulless. It’s played in restaurants as background music. They’re products, produced mostly by machines. And there is a market for that.

But I'm talking about the pinnacle of art. If I played you a recording that topped Charlie Parker at his best, and then told you an AI generated it, would you feel cheated?

Yes and no. So yes, because music is not just an auditory experience—a lot of it is cultural. It’s admiration for the performer. Your example would be like Milli Vanilli. Truthfulness is an essential part of the artistic experience.

If AI systems were good enough to match elite artistic achievements and you didn’t know the backstory, the market would be flooded with Charlie Parker–level music, and we wouldn’t be able to tell the difference.

I don't see any problem with that. I’d take the original for the same reason I would still buy a $300 handmade bowl that comes from a culture of hundreds of years, even though I can buy something that looks pretty much the same for 5 bucks. We still go to listen to my favorite jazz musicians live, even though they can be emulated. An AI system is not the same experience.
 

bnew

Veteran
Joined
Nov 1, 2015
Messages
51,805
Reputation
7,926
Daps
148,708

You can create your own AI songs with this new Copilot extension​


A new plug-in for the AI music creation tool Suno can generate custom songs in mere seconds.

By Amrita Khalid, one of the authors of audio industry newsletter Hot Pod. Khalid has covered tech, surveillance policy, consumer gadgets, and online communities for more than a decade.

Dec 19, 2023, 4:20 PM EST



Image: The Verge

Microsoft’s AI chatbot Copilot will now be able to churn out AI songs on demand — thanks to a new plug-in with Suno. The Cambridge-based AI music startup offers a tool on Discord that can compose an original song — complete with lyrics — based on a text prompt. Now, Copilot users will be able to access Suno using the Microsoft chatbot.

In order to start making music, Copilot users only need to sign on to their accounts and enable the Suno plug-in — or click on the logo that says, “Make music with Suno.” Users then need to think of a simple one- or two-line text prompt that describes their desired song, such as “create a folk song about Alaska summers” or “write a song about cats in the style of Cat Power” (both prompts I tried personally on Suno via Discord) and type it into Copilot.


Suno will then generate an original song — normally a minute or two in length — complete with a transcript of the lyrics.


The Suno plug-in on Microsoft Copilot. Image: Microsoft

Other Big Tech companies are also testing generative AI music tools, like Meta’s open-source AudioCraft, which also generates songs based on text prompts, and Google’s YouTube tool, which spits out original music tracks based on a hummed song or a text prompt. In addition to Suno, a number of AI music startups promising similar tools have entered the scene, like Soundful, Magenta, Beatbox (which can generate beats and instrumental tracks), Soundraw, Loudly, Boomy, Beatoven.ai, and many others.

Suno forbids its free users from, say, monetizing generated AI songs on YouTube or Spotify, but it gives paid users commercial rights to their songs. However, aspiring ghostwriters should keep in mind that Suno owns the rights to any songs generated by free users — though sharing on social platforms or other non-commercial uses is allowed.
 

bnew

Veteran
Joined
Nov 1, 2015
Messages
51,805
Reputation
7,926
Daps
148,708


ABSTRACT:
This paper introduces PowerInfer, a high-speed Large Language Model (LLM) inference engine on a personal computer (PC) equipped with a single consumer-grade GPU. The key underlying the design of PowerInfer is exploiting the high locality inherent in LLM inference, characterized by a power-law distribution in neuron activation. This distribution indicates that a small subset of neurons, termed hot neurons, are consistently activated across inputs, while the majority, cold neurons, vary based on specific inputs. PowerInfer exploits such an insight to design a GPU-CPU hybrid inference engine: hot-activated neurons are preloaded onto the GPU for fast access, while cold-activated neurons are computed on the CPU, thus significantly reducing GPU memory demands and CPU-GPU data transfers. PowerInfer further integrates adaptive predictors and neuron-aware sparse operators, optimizing the efficiency of neuron activation and computational sparsity. Evaluation shows that PowerInfer attains an average token generation rate of 13.20 tokens/s, with a peak of 29.08 tokens/s, across various LLMs (including OPT-175B) on a single NVIDIA RTX 4090 GPU, only 18% lower than that achieved by a top-tier server-grade A100 GPU. This significantly outperforms llama.cpp by up to 11.69× while retaining model accuracy.
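As a rough illustration of the hot/cold split described above (not PowerInfer's real implementation; the profiling data, GPU budget, and placement logic are simplified assumptions), one could profile per-neuron activation counts offline and pin only the most frequently activated neurons to the GPU:

```python
# Simplified sketch of the hot/cold neuron placement idea from PowerInfer.
# This is NOT the real PowerInfer code; thresholds and data layout are assumptions.
import numpy as np

def split_hot_cold(activation_counts: np.ndarray, gpu_budget: int):
    """Given per-neuron activation counts from offline profiling, pin the most
    frequently activated ('hot') neurons to the GPU and leave the rest ('cold')
    on the CPU."""
    order = np.argsort(activation_counts)[::-1]   # most active first
    hot = order[:gpu_budget]                      # fits within the GPU memory budget
    cold = order[gpu_budget:]
    return hot, cold

# Example: 4096 FFN neurons whose activation counts follow a rough power law.
rng = np.random.default_rng(0)
counts = rng.zipf(a=2.0, size=4096)               # heavy-tailed activation counts
hot, cold = split_hot_cold(counts, gpu_budget=512)
print(f"hot neurons cover {counts[hot].sum() / counts.sum():.1%} of activations")
```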

 

AVXL

Laughing at you n*ggaz like “ha ha ha”
Joined
May 1, 2012
Messages
40,510
Reputation
635
Daps
75,652
Reppin
Of course the ATL
seems like LLMs have reached their plateau. from what I've seen Gemini isn't that much more powerful than ChatGPT4. Google has had ample time and compute to make something far more powerful than ChatGPT4 and the results, while better than ChatGPT aren't ALL that impressive. There is already talk of Google misrepresenting and being deceitful about Gemini's capability.

Exactly. Altman has already talked about the size of LLMs not mattering as much moving forward, I think what Google is doing with Gemini Nano and other smaller, more powerful LLMs that are tuned to your personal devices is waaay more interesting in the long term. The consumer implications on mobile are bananas
 

bnew

Veteran
Joined
Nov 1, 2015
Messages
51,805
Reputation
7,926
Daps
148,708



Computer Science > Artificial Intelligence​

[Submitted on 8 Nov 2023]

ADaPT: As-Needed Decomposition and Planning with Language Models​

Archiki Prasad, Alexander Koller, Mareike Hartmann, Peter Clark, Ashish Sabharwal, Mohit Bansal, Tushar Khot
Large Language Models (LLMs) are increasingly being used for interactive decision-making tasks requiring planning and adapting to the environment. Recent works employ LLMs-as-agents in broadly two ways: iteratively determining the next action (iterative executors) or generating plans and executing sub-tasks using LLMs (plan-and-execute). However, these methods struggle with task complexity, as the inability to execute any sub-task may lead to task failure. To address these shortcomings, we introduce As-Needed Decomposition and Planning for complex Tasks (ADaPT), an approach that explicitly plans and decomposes complex sub-tasks as-needed, i.e., when the LLM is unable to execute them. ADaPT recursively decomposes sub-tasks to adapt to both task complexity and LLM capability. Our results demonstrate that ADaPT substantially outperforms established strong baselines, achieving success rates up to 28.3% higher in ALFWorld, 27% in WebShop, and 33% in TextCraft -- a novel compositional dataset that we introduce. Through extensive analysis, we illustrate the importance of multilevel decomposition and establish that ADaPT dynamically adjusts to the capabilities of the executor LLM as well as to task complexity.
Comments: Project Page: this https URL
Subjects: Artificial Intelligence (cs.AI); Computation and Language (cs.CL); Machine Learning (cs.LG)
Cite as: arXiv:2311.05772 [cs.AI] (or arXiv:2311.05772v1 [cs.AI] for this version)

Submission history​

From: Archiki Prasad [view email]
[v1] Wed, 8 Nov 2023 17:59:15 UTC (8,543 KB)
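The as-needed recursion the abstract describes boils down to a small piece of control logic. Below is a hand-written sketch of that general pattern, not the authors' released code; the llm_can_execute, llm_execute, and llm_decompose callables are hypothetical placeholders for the executor and planner prompts a real implementation would use.

```python
# Sketch of as-needed task decomposition in the spirit of ADaPT.
# The three llm_* callables are hypothetical placeholders, not the paper's API.
from typing import Callable, List

def adapt_solve(task: str,
                llm_can_execute: Callable[[str], bool],
                llm_execute: Callable[[str], bool],
                llm_decompose: Callable[[str], List[str]],
                max_depth: int = 3) -> bool:
    """Try to execute `task` directly; if the executor fails (or reports it cannot),
    ask the planner to decompose the task and recurse on the sub-tasks."""
    if llm_can_execute(task) and llm_execute(task):
        return True                       # executor handled the task directly
    if max_depth == 0:
        return False                      # decomposition budget exhausted
    subtasks = llm_decompose(task)        # planner breaks the task into smaller steps
    return all(
        adapt_solve(sub, llm_can_execute, llm_execute, llm_decompose, max_depth - 1)
        for sub in subtasks
    )
```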

 

bnew

Veteran
Joined
Nov 1, 2015
Messages
51,805
Reputation
7,926
Daps
148,708








Adapting Large Language Models via Reading Comprehension​

This repo contains the model, code and data for our paper Adapting Large Language Models via Reading Comprehension



We explore continued pre-training on domain-specific corpora for large language models. While this approach enriches LLMs with domain knowledge, it significantly hurts their prompting ability for question answering. Inspired by human learning via reading comprehension, we propose a simple method to transform large-scale pre-training corpora into reading comprehension texts, consistently improving prompting performance across tasks in biomedicine, finance, and law domains. Our 7B model competes with much larger domain-specific models like BloombergGPT-50B. Moreover, our domain-specific reading comprehension texts enhance model performance even on general benchmarks, indicating potential for developing a general LLM across more domains.



Domain-specific LLMs​

Our models for different domains are now available on Hugging Face: biomedicine-LLM, finance-LLM and law-LLM, along with comparisons of AdaptLLM's performance against other domain-specific LLMs.
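Loading one of these checkpoints should follow the standard transformers pattern. A minimal sketch, assuming the Hub id AdaptLLM/finance-LLM (verify the exact repo names on the organization page):

```python
# Sketch: loading a domain-adapted checkpoint with Hugging Face transformers.
# The repo id "AdaptLLM/finance-LLM" is an assumption; check the Hub for exact names.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "AdaptLLM/finance-LLM"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = "Question: What does a rising yield curve usually signal?\nAnswer:"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```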






Domain-specific Tasks​

To easily reproduce our results, we have uploaded the filled-in zero/few-shot input instructions and output completions of each domain-specific task: biomedicine-tasks, finance-tasks, and law-tasks.
 

bnew

Veteran
Joined
Nov 1, 2015
Messages
51,805
Reputation
7,926
Daps
148,708




Computer Science > Artificial Intelligence​

[Submitted on 22 Dec 2023]

TACO: Topics in Algorithmic COde generation dataset​

Rongao Li (1 and 2), Jie Fu (1), Bo-Wen Zhang (1), Tao Huang (2), Zhihong Sun (2), Chen Lyu (2), Guang Liu (1), Zhi Jin (3), Ge Li (3) ((1) Beijing Academy of Artificial Intelligence, (2) School of Information Science and Engineering, Shandong Normal University, China, (3) Key Lab of HCST (PKU), MOE, SCS, Peking University, China)
We introduce TACO, an open-source, large-scale code generation dataset, with a focus on the topics of algorithms, designed to provide a more challenging training dataset and evaluation benchmark in the field of code generation models. TACO includes competition-level programming questions that are more challenging, to enhance or evaluate problem understanding and reasoning abilities in real-world programming scenarios. There are 25,433 and 1,000 coding problems in the training and test sets, as well as up to 1.55 million diverse solution answers. Moreover, each TACO problem includes several fine-grained labels such as task topics, algorithms, programming skills, and difficulty levels, providing a more precise reference for the training and evaluation of code generation models. The dataset and evaluation scripts are available on Hugging Face Hub (this https URL) and Github (this https URL).
Subjects: Artificial Intelligence (cs.AI)
Cite as: arXiv:2312.14852 [cs.AI] (or arXiv:2312.14852v1 [cs.AI] for this version)

Submission history​

From: Bo-Wen Zhang [view email]
[v1] Fri, 22 Dec 2023 17:25:42 UTC (600 KB)
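Since the dataset is hosted on the Hugging Face Hub, browsing it should look roughly like the sketch below; the repo id BAAI/TACO and the field names are assumptions to verify against the Hub page.

```python
# Minimal sketch for browsing TACO with the Hugging Face `datasets` library.
# The dataset id and field names are assumptions; verify them on the Hub page.
from datasets import load_dataset

ds = load_dataset("BAAI/TACO", split="train")   # assumed repo id
print(ds)                                        # number of rows, column names

sample = ds[0]
# Each problem is expected to carry fine-grained labels such as topic,
# algorithm tags, skills, and difficulty, per the paper's description.
for key in sample:
    print(key, str(sample[key])[:80])
```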




 