Elvis Saravia 2 weeks ago
parent fe056a6212
commit 52e1551bee

import { Callout } from 'nextra/components'
<iframe width="100%"
height="415px"
src="https://www.youtube.com/embed/ojtbHUqw1LA?si=DPHurHTzZXm22vcN" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture"
allowFullScreen
/>
While large language models demonstrate remarkable zero-shot capabilities, they still fall short on more complex tasks in the zero-shot setting. Few-shot prompting can be used as a technique to enable in-context learning, where we provide demonstrations in the prompt to steer the model toward better performance. The demonstrations serve as conditioning for subsequent examples where we would like the model to generate a response.
According to [Touvron et al. 2023](https://arxiv.org/pdf/2302.13971.pdf), few-shot properties first appeared when models were scaled to a sufficient size [(Kaplan et al., 2020)](https://arxiv.org/abs/2001.08361).
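As a rough sketch of the idea, a few-shot prompt is just the labeled demonstrations concatenated ahead of the new, unlabeled input. The sentiment labels, the `//` separator, and the helper name below are illustrative assumptions, not from the source:

```python
# Illustrative demonstrations: (input, label) pairs that condition the model.
demonstrations = [
    ("This is awesome!", "Positive"),
    ("This is bad!", "Negative"),
    ("Wow that movie was rad!", "Positive"),
]

def build_few_shot_prompt(demos, new_input):
    """Concatenate labeled examples, then append the unlabeled input.

    The model is expected to continue the pattern and emit a label
    for the final line.
    """
    lines = [f"{text} // {label}" for text, label in demos]
    lines.append(f"{new_input} //")  # trailing separator invites a label
    return "\n".join(lines)

prompt = build_few_shot_prompt(demonstrations, "What a horrible show!")
print(prompt)
```

The resulting string would be sent to the model as-is; the demonstrations steer it to answer with a label in the same format.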
