1/7
Chain-of-Thought (CoT) prompting --> OUT(?), analogical prompting --> IN!
A new paper from @GoogleDeepMind & @Stanford (accepted to @iclr_conf): "Large Language Models as Analogical Reasoners"
2/7
CoT prompting has shown LLMs' ability to tackle complex tasks, such as solving math problems, by prompting them to generate intermediate reasoning steps. However, it typically demands hand-labeled exemplars of the reasoning process, which can be costly to produce for every task!
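For context, a vanilla few-shot CoT prompt hard-codes hand-written worked solutions, roughly like the sketch below (the exemplar is the classic tennis-ball one from the CoT literature; the exact wording is illustrative, not taken from this paper):

```python
# Few-shot CoT sketch: every exemplar's reasoning chain is written by hand,
# and a new set of exemplars is typically needed for each task/domain.
COT_PROMPT = """\
Q: Roger has 5 tennis balls. He buys 2 more cans of tennis balls. \
Each can has 3 tennis balls. How many tennis balls does he have now?
A: Roger started with 5 balls. 2 cans of 3 balls each is 6 balls. \
5 + 6 = 11. The answer is 11.

Q: {problem}
A:"""

# The filled-in prompt is then sent to the LLM as-is.
prompt = COT_PROMPT.format(problem="A store had 120 apples and sold 45. How many are left?")
```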
3/7
In this paper, they propose "analogical prompting", a new prompting approach that automatically guides the reasoning process of LLMs!
Their inspiration comes from analogical reasoning in psychology, a concept where humans draw from relevant experiences to tackle new problems.
4/7
They use exactly this idea to prompt LLMs to self-generate relevant exemplars or knowledge in context, before proceeding to solve the original problem (see figure in the main tweet).
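In practice this amounts to a single prompt that asks the model to recall exemplars first and then solve. A minimal sketch of the idea (my own paraphrase, not the paper's exact template; the function name and `n_exemplars` parameter are illustrative):

```python
def analogical_prompt(problem: str, n_exemplars: int = 3) -> str:
    # Self-generated exemplars: instead of supplying hand-labeled examples,
    # the LLM itself is asked to recall relevant problems and their
    # solutions in-context, and only then to solve the actual problem.
    return (
        f"# Problem: {problem}\n\n"
        "# Instructions:\n"
        f"## Recall {n_exemplars} relevant and distinct problems. "
        "For each, describe the problem and explain its solution.\n"
        "## Then solve the initial problem step by step.\n"
    )

# The resulting string is sent as one prompt to whatever LLM API you use.
print(analogical_prompt("Solve for x: x^2 - 5x + 6 = 0"))
```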
5/7
Advantages:
It eliminates the need for labeling or retrieving examples, offering generality and convenience.
It adapts the examples and knowledge to each problem, offering adaptability.
6/7
Results:
Analogical prompting surpasses both 0-shot and manually tuned few-shot CoT across several reasoning tasks, including math problem solving (GSM8K, MATH), code generation (Codeforces), and various reasoning challenges in BIG-Bench!
7/7
Authors:
@jure
@percyliang
@denny_zhou
@edchi