Few-shot learning is a prompting technique in which a small number of problem-solution examples are included in the prompt before a large language model (LLM) is given a new problem. Few-shot prompting can be an emergent ability: sufficiently large models perform significantly better with a handful of examples than they otherwise would.
Zero-shot prompting, by contrast, refers to giving the model a problem with no examples at all.
A basic example, from Wikipedia:

  Review: This movie sucks. Sentiment: negative.
  Review: I love this movie. Sentiment:
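The same prompt can be assembled programmatically. Below is a minimal Python sketch; the EXEMPLARS list, the build_few_shot_prompt helper, and the sample review are illustrative assumptions rather than part of any particular API.

  # Minimal sketch: build a few-shot sentiment prompt by concatenating
  # labeled exemplars before the new, unlabeled input.
  EXEMPLARS = [
      ("This movie sucks.", "negative"),   # illustrative exemplar
      ("I love this movie.", "positive"),  # illustrative exemplar
  ]

  def build_few_shot_prompt(new_review: str) -> str:
      # Each exemplar becomes a "Review: ... Sentiment: ..." line; the new
      # review is appended with the label left blank for the model to fill in.
      lines = [f"Review: {text} Sentiment: {label}" for text, label in EXEMPLARS]
      lines.append(f"Review: {new_review} Sentiment:")
      return "\n".join(lines)

  print(build_few_shot_prompt("The plot was thin but the acting was great."))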
Chain-of-thought prompting builds on this technique extensively: the few-shot exemplars include worked-out intermediate reasoning steps, not just final answers.
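For illustration only (the arithmetic exemplar below is invented, not taken from any published chain-of-thought prompt), such a prompt spells out the reasoning before the answer:

  Q: A cafe sold 3 coffees at 4 dollars each. How much did it earn?
  A: Each coffee costs 4 dollars and 3 were sold, so 3 x 4 = 12. The answer is 12 dollars.
  Q: A shop sold 5 books at 7 dollars each. How much did it earn?
  A: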
GPT-3 was the first LLM shown to exhibit strong few-shot learning; chain-of-thought reasoning was demonstrated later in GPT-3 and other sufficiently large models.