# Active-Prompt

import { Callout, FileTree } from 'nextra-theme-docs'
import {Screenshot} from 'components/screenshot'
import ACTIVE from '../../img/active-prompt.png'

Chain-of-thought (CoT) methods rely on a fixed set of human-annotated exemplars. The problem is that these exemplars might not be the most effective examples for the different tasks. To address this, [Diao et al., (2023)](https://arxiv.org/pdf/2302.12246.pdf) recently proposed a new prompting approach called Active-Prompt to adapt LLMs to different task-specific example prompts (annotated with human-designed CoT reasoning).

Below is an illustration of the approach. The first step is to query the LLM, with or without a few CoT examples, to generate *k* possible answers for each question in a set of training questions. An uncertainty metric is then calculated from the *k* answers (disagreement is used). The most uncertain questions are selected for human annotation, and the newly annotated exemplars are then used as demonstrations when inferring each question.

<Screenshot src={ACTIVE} alt="ACTIVE" />

Image Source: [Diao et al., (2023)](https://arxiv.org/pdf/2302.12246.pdf)
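
To make the selection step concrete, below is a minimal Python sketch of disagreement-based uncertainty sampling. The `query_llm` function is a hypothetical placeholder for whatever LLM call you use (with or without CoT exemplars), and the default values for *k* and the number of selected questions are illustrative rather than the paper's settings.

```python
def query_llm(question: str) -> str:
    """Hypothetical placeholder: call your LLM (with or without CoT
    exemplars) and return only its final answer string."""
    raise NotImplementedError("plug in an actual LLM call here")


def disagreement(question: str, k: int = 5) -> float:
    """Sample k answers for a question and measure uncertainty as the
    fraction of distinct answers (1/k = full agreement, 1.0 = all differ)."""
    answers = [query_llm(question) for _ in range(k)]
    return len(set(answers)) / k


def select_for_annotation(train_questions: list[str], n: int = 8, k: int = 5) -> list[str]:
    """Rank training questions by disagreement and return the n most
    uncertain ones; these are the questions a human then annotates with
    CoT rationales to use as exemplars at inference time."""
    ranked = sorted(train_questions, key=lambda q: disagreement(q, k), reverse=True)
    return ranked[:n]
```

The point of the selection step is that human annotation effort goes only to the questions the model is most uncertain about, rather than to an arbitrary fixed set of exemplars.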