diff --git a/pages/techniques/ape.en.mdx b/pages/techniques/ape.en.mdx
index 1a50cbb..dfe144f 100644
--- a/pages/techniques/ape.en.mdx
+++ b/pages/techniques/ape.en.mdx
@@ -12,7 +12,7 @@ Image Source: [Zhou et al., (2022)](https://arxiv.org/abs/2211.01910)
 
 The first step involves a large language model (as an inference model) that is given output demonstrations to generate instruction candidates for a task. These candidate solutions will guide the search procedure. The instructions are executed using a target model, and then the most appropriate instruction is selected based on computed evaluation scores.
 
-APE discovers a better zero-shot CoT prompt than the human engineered "Let's think step by step" prompt (Kojima et al., 2022).
+APE discovers a better zero-shot CoT prompt than the human-engineered "Let's think step by step" prompt ([Kojima et al., 2022](https://arxiv.org/abs/2205.11916)).
 
 The prompt "Let's work this out in a step by step way to be sure we have the right answer." elicits chain-of-thought reasoning and improves performance on the MultiArith and GSM8K benchmarks:
 
diff --git a/pages/techniques/ape.jp.mdx b/pages/techniques/ape.jp.mdx
index dfafa9c..27b6cde 100644
--- a/pages/techniques/ape.jp.mdx
+++ b/pages/techniques/ape.jp.mdx
@@ -12,7 +12,7 @@ import APECOT from '../../img/ape-zero-shot-cot.png'
 
 最初のステップは、タスクのための指示候補を生成する推論モデルとしての大規模言語モデルを使用することです。これらの候補解は、検索手順を指導します。指示はターゲットモデルを使用して実行され、計算された評価スコアに基づいて最適な指示が選択されます。
 
-APEは、人間が設計した「一緒にステップバイステップで考えてみましょう」というプロンプトよりも優れたゼロショットCoTプロンプトを発見しました(Kojima et al.、2022)。
+APEは、人間が設計した「一緒にステップバイステップで考えてみましょう」というプロンプトよりも優れたゼロショットCoTプロンプトを発見しました([Kojima et al.、2022](https://arxiv.org/abs/2205.11916))。
 
 「一緒にステップバイステップで作業し、正しい答えを確認するために」のプロンプトは、思考の連鎖を引き起こし、MultiArithおよびGSM8Kベンチマークのパフォーマンスを向上させます。
 
diff --git a/pages/techniques/ape.pt.mdx b/pages/techniques/ape.pt.mdx
index fa9c1c1..2c75249 100644
--- a/pages/techniques/ape.pt.mdx
+++ b/pages/techniques/ape.pt.mdx
@@ -12,7 +12,7 @@ Fonte da imagem: [Zhou et al., (2022)](https://arxiv.org/abs/2211.01910)
 
 A primeira etapa envolve um grande modelo de linguagem (como um modelo de inferência) que recebe demonstrações de saída para gerar candidatos de instrução para uma tarefa. Essas soluções candidatas guiarão o procedimento de busca. As instruções são executadas usando um modelo de destino e, em seguida, a instrução mais apropriada é selecionada com base nas pontuações de avaliação computadas.
 
-O APE descobre um prompt de CoT zero-shot melhor do que o prompt "Vamos pensar passo a passo" projetado por humanos (Kojima et al., 2022).
+O APE descobre um prompt de CoT zero-shot melhor do que o prompt "Vamos pensar passo a passo" projetado por humanos ([Kojima et al., 2022](https://arxiv.org/abs/2205.11916)).
 
 O prompt "Vamos resolver isso passo a passo para ter certeza de que temos a resposta certa." provoca raciocínio em cadeia e melhora o desempenho nos benchmarks MultiArith e GSM8K:
 
diff --git a/pages/techniques/ape.zh.mdx b/pages/techniques/ape.zh.mdx
index c5ad2da..89b400c 100644
--- a/pages/techniques/ape.zh.mdx
+++ b/pages/techniques/ape.zh.mdx
@@ -12,7 +12,7 @@ import APECOT from '../../img/ape-zero-shot-cot.png'
 
 第一步涉及一个大型语言模型(作为推理模型),该模型接收输出演示以生成任务的指令候选项。这些候选解将指导搜索过程。使用目标模型执行指令,然后根据计算的评估分数选择最合适的指令。
 
-APE发现了一个比人工设计的“让我们一步一步地思考”提示更好的零样本CoT提示(Kojima等人,2022)。
+APE发现了一个比人工设计的“让我们一步一步地思考”提示更好的零样本CoT提示([Kojima等人,2022](https://arxiv.org/abs/2205.11916))。
 
 提示“让我们一步一步地解决这个问题,以确保我们有正确的答案。”引发了思维链的推理,并提高了MultiArith和GSM8K基准测试的性能:
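The search loop these pages describe (an inference model proposes instruction candidates, each candidate is executed with a target model on held-out demonstrations, and the highest-scoring instruction is kept) can be sketched as follows. This is a minimal illustration, not the authors' implementation: `ape_select`, `toy_target_model`, and the demo data are made-up stand-ins, and the real model calls are stubbed out.

```python
from typing import Callable, List, Tuple

def ape_select(
    candidates: List[str],
    demos: List[Tuple[str, str]],              # (input, expected output) pairs
    target_model: Callable[[str, str], str],   # (instruction, input) -> output
) -> Tuple[str, float]:
    """Return the candidate instruction with the best execution accuracy
    on the demonstrations, along with its score."""
    best, best_score = "", -1.0
    for instruction in candidates:
        correct = sum(target_model(instruction, x) == y for x, y in demos)
        score = correct / len(demos)
        if score > best_score:
            best, best_score = instruction, score
    return best, best_score

# Toy stand-in for the target model: it only "succeeds" when the
# instruction asks for step-by-step reasoning (purely illustrative).
def toy_target_model(instruction: str, x: str) -> str:
    return "42" if "step by step" in instruction else "?"

demos = [("What is 6 * 7?", "42")]
candidates = [
    "Answer the question.",
    "Let's work this out in a step by step way to be sure we have the right answer.",
]
best, score = ape_select(candidates, demos, toy_target_model)
```

In the paper the scoring function is more general (e.g. log-probability of the correct output rather than exact-match accuracy), but the selection structure is the same: generate, execute, score, keep the argmax.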