Merge branch 'main' of https://github.com/dair-ai/Prompt-Engineering-Guide into translation_ca

pull/170/head
Pedro Mosquera 1 year ago
commit 525fe7da4e

@@ -1,3 +1,195 @@
# Generating Code
# Generating Code
Needs translation! Feel free to contribute a translating by clicking the `Edit this page` button on the right side.
import { Callout, FileTree } from 'nextra-theme-docs'
import {Screenshot} from 'components/screenshot'
import CODE1 from '../../img/code-generation/chat-mode.png'
<Callout emoji="⚠️">
This section is under active development.
</Callout>
Large language models like ChatGPT are very effective at code generation. In this section, we will cover several examples of how to use ChatGPT for code generation.
All of the examples use the OpenAI Playground (Chat Mode) and the `gpt-3.5-turbo` model.
As with all of OpenAI's chat models, you can use a `System Message` to define the behavior and format of the responses. We will use the following system message for the prompt examples:
```
You are a helpful code assistant that can teach a junior developer how to code. Your language of choice is Python. Don't explain the code, just generate the code block itself.
```
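The examples below use the Playground UI, but the same system/user message setup carries over to the API. Here is a minimal sketch using the `openai` Python package as it existed around the time this guide was written (the pre-1.0 `ChatCompletion` interface); newer versions of the library expose a different client, so treat this as illustrative rather than canonical:
```python
import os
import openai

openai.api_key = os.getenv("OPENAI_API_KEY")

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        # The system message defines the assistant's behavior and output format.
        {
            "role": "system",
            "content": "You are a helpful code assistant that can teach a junior developer how to code. Your language of choice is Python. Don't explain the code, just generate the code block itself.",
        },
        # The user message carries the actual code-generation request.
        {
            "role": "user",
            "content": 'Write code that asks the user for their name and greets them with "Hello".',
        },
    ],
)

print(response["choices"][0]["message"]["content"])
```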
## Basic Example
The `User Message` is the prompt that instructs the model on the code you want it to generate. Below is a screenshot of the first code-generation prompt. Note that the code was generated by the `Assistant`.
<Screenshot src={CODE1} alt="CODE1" />
Here is the prompt if you want to try it yourself:
```
Write code that asks the user for their name and greets them with "Hello".
```
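The screenshot shows the assistant's reply. A completion for this prompt typically looks something like the following; the exact code the model returns may differ from run to run:
```python
name = input("What is your name? ")
print("Hello, " + name + "!")
```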
## Turn Comments Into Code
You can also use the code-generation capabilities of these LLMs to generate code from comments alone. Let's look at an example that passes the instructions as a comment block:
*Prompt:*
```
"""
1. Create a list of movies
2. Create a list of ratings for these movies
3. Combine them to make a JSON object of 10 movies with their ratings.
"""
```
*Output:*
```python
movies = ["The Shawshank Redemption", "The Godfather", "The Dark Knight", "Forrest Gump", "Inception",
          "The Lord of the Rings: The Fellowship of the Ring", "The Matrix", "Goodfellas", "Pulp Fiction", "Fight Club"]

ratings = [9.3, 9.2, 9.0, 8.8, 8.7, 8.8, 8.7, 8.7, 8.9, 8.8]

movie_ratings = []

for i in range(10):
    movie_rating = {"movie": movies[i], "rating": ratings[i]}
    movie_ratings.append(movie_rating)

import json
print(json.dumps(movie_ratings))
```
The generated code looks like it should work. However, models don't always get details like `import` statements right (note that `import json` only appears at the very end here), so pay close attention to these details and always test the code to make sure it runs.
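One quick way to follow that advice is to run the generated snippet and check its output programmatically. The sketch below assumes you saved the model's code to a hypothetical file named `generated.py`; it runs the file and verifies that it prints valid JSON with the expected shape:
```python
import json
import subprocess
import sys

# generated.py is a hypothetical file containing the model-generated snippet above.
result = subprocess.run(
    [sys.executable, "generated.py"],
    capture_output=True,
    text=True,
    check=True,
)

data = json.loads(result.stdout)
assert len(data) == 10
assert all(set(item.keys()) == {"movie", "rating"} for item in data)
print("Generated code produced valid JSON for", len(data), "movies")
```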
## Complete Functions or Next Line
These LLMs have also been incorporated into tools like GitHub Copilot, which are useful for developers. One useful feature is the ability to complete functions.
*Prompt:*
```python
# function that multiplies two numbers and adds 75 to the result
def multiply(
```
*Output:*
```python
a, b):
    result = a * b
    result += 75
    return result
```
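To sanity-check a completion like this, you can assemble the prompt fragment and the completion into a full function and call it; for example, 2 * 3 + 75 should give 81:
```python
# Prompt fragment plus the model's completion, assembled into one function.
def multiply(a, b):
    result = a * b
    result += 75
    return result

# 2 * 3 + 75 = 81
assert multiply(2, 3) == 81
print(multiply(2, 3))
```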
## MySQL Query Generation
Besides the basic code-generation example above, you can use the model to generate code that is useful in other areas of programming, such as creating and testing MySQL queries.
For example, you can include a dataset with some information as part of the prompt and then instruct the model to generate a specific query. For example:
*Prompt:*
```
Table departments, columns = [DepartmentId, DepartmentName]
Table students, columns = [DepartmentId, StudentId, StudentName]
Create a MySQL query for all students in the Computer Science department.
```
*Output:*
```
SELECT students.StudentId, students.StudentName
FROM students
INNER JOIN departments
ON students.DepartmentId = departments.DepartmentId
WHERE departments.DepartmentName = 'Computer Science';
```
The model generated a MySQL query! However, in this example there is no database to test the generated query against, so let's create one. If you want to try out this example, it will be useful to have this database created. Fortunately, you can use ChatGPT to generate the database as well. One approach is to slightly modify the instruction in the prompt above so that it generates a database schema:
```
Table departments, columns = [DepartmentId, DepartmentName]
Table students, columns = [DepartmentId, StudentId, StudentName]
Create a valid database schema with the above tables and columns.
```
The model generates the following schema:
```
CREATE TABLE departments (
  DepartmentId INT PRIMARY KEY,
  DepartmentName VARCHAR(50)
);

CREATE TABLE students (
  DepartmentId INT,
  StudentId INT PRIMARY KEY,
  StudentName VARCHAR(50),
  FOREIGN KEY (DepartmentId) REFERENCES departments(DepartmentId)
);
```
You can create a temporary database using a service like [sqliteonline.com](https://sqliteonline.com/).
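If you would rather stay in Python, an in-memory SQLite database also works for this kind of quick test; SQLite's SQL dialect is close enough to MySQL for this particular schema. A small sketch using the standard library:
```python
import sqlite3

# Throwaway in-memory database loaded with the generated schema.
schema = """
CREATE TABLE departments (
  DepartmentId INT PRIMARY KEY,
  DepartmentName VARCHAR(50)
);
CREATE TABLE students (
  DepartmentId INT,
  StudentId INT PRIMARY KEY,
  StudentName VARCHAR(50),
  FOREIGN KEY (DepartmentId) REFERENCES departments(DepartmentId)
);
"""

conn = sqlite3.connect(":memory:")
conn.executescript(schema)
print([row[0] for row in conn.execute("SELECT name FROM sqlite_master WHERE type = 'table'")])
```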
You will also need dummy data to test whether the generated query is valid. Let's use ChatGPT for this too! The prompt looks like the following:
```
CREATE TABLE departments (
  DepartmentId INT PRIMARY KEY,
  DepartmentName VARCHAR(50)
);

CREATE TABLE students (
  DepartmentId INT,
  StudentId INT PRIMARY KEY,
  StudentName VARCHAR(50),
  FOREIGN KEY (DepartmentId) REFERENCES departments(DepartmentId)
);
Given the database schema above, generate valid insert statements that include 4 rows for each table.
```
The output with the insert statements:
```
INSERT INTO departments (DepartmentId, DepartmentName)
VALUES (1, 'Computer Science'), (2, 'Mathematics'), (3, 'Physics'), (4, 'Biology');
INSERT INTO students (DepartmentId, StudentId, StudentName)
VALUES (1, 101, 'John Doe'), (1, 102, 'Jane Doe'), (2, 201, 'Alice Smith'), (2, 202, 'Bob Johnson');
```
Now that you have the database and the dummy data, you can test the query. Based on the dummy data, the result should be two rows: `Jane Doe` and `John Doe`.
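Putting the pieces together, here is a small self-contained check of the generated query against the generated schema and dummy data (again using SQLite as a stand-in for MySQL, which is fine for this schema and query):
```python
import sqlite3

conn = sqlite3.connect(":memory:")

# Schema and dummy data generated above.
conn.executescript("""
CREATE TABLE departments (
  DepartmentId INT PRIMARY KEY,
  DepartmentName VARCHAR(50)
);
CREATE TABLE students (
  DepartmentId INT,
  StudentId INT PRIMARY KEY,
  StudentName VARCHAR(50),
  FOREIGN KEY (DepartmentId) REFERENCES departments(DepartmentId)
);
INSERT INTO departments (DepartmentId, DepartmentName)
VALUES (1, 'Computer Science'), (2, 'Mathematics'), (3, 'Physics'), (4, 'Biology');
INSERT INTO students (DepartmentId, StudentId, StudentName)
VALUES (1, 101, 'John Doe'), (1, 102, 'Jane Doe'), (2, 201, 'Alice Smith'), (2, 202, 'Bob Johnson');
""")

# The query the model generated earlier.
query = """
SELECT students.StudentId, students.StudentName
FROM students
INNER JOIN departments
ON students.DepartmentId = departments.DepartmentId
WHERE departments.DepartmentName = 'Computer Science';
"""

for student_id, student_name in conn.execute(query):
    print(student_id, student_name)  # expect 101 John Doe and 102 Jane Doe
```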
## Explain Code
If you are learning to program in a certain language, it might be useful to prompt the model to explain certain bits of code. Let's reuse the query generated above and ask the model to explain it.
If you are using the same System Message as before, be careful about how you instruct the model. For example, if you use the following example, the model will complain that it is a code assistant for Python, but it still provides an explanation. Because the model is designed to follow the System Message, it may sometimes refuse to generate what you instructed.
*Prompt:*
```
SELECT students.StudentId, students.StudentName
FROM students
INNER JOIN departments
ON students.DepartmentId = departments.DepartmentId
WHERE departments.DepartmentName = 'Computer Science';
Explain the above SQL statement.
```
*Output:*
```
Sorry, as a code assistant for Python I cannot generate SQL code. However, I can explain the purpose of the SQL statement provided.
This SQL statement selects the StudentId and StudentName columns from the students table when the DepartmentName column in the departments table equals 'Computer Science'. To do this, it performs an inner join on the DepartmentId column between the students and departments tables.
In simpler terms, it selects the names and IDs of the students enrolled in the Computer Science department.
```
## Editing Code
Example coming soon!
## Debugging Code
Example coming soon!
## Best practices
Coming soon!

@@ -34,7 +34,7 @@ Le ciel est
*Output:*
```
si belle aujourd'hui.
si beau aujourd'hui.
```
Is that better? Well, we asked the model to complete the sentence, so the result is much better because it follows exactly what we asked it to do ("complete the sentence"). This approach of designing optimal prompts to guide the model in performing a task is what's referred to as **prompt engineering**.

@@ -1,8 +1,8 @@
# Examples of Prompts
In the previous section, we introduced and gave a basic examples of how to prompt LLMs.
The previous section introduced a basic example of how to prompt LLMs.
In this section, we will provide more examples of how prompts are used to achieve different tasks and introduce key concepts along the way. Often, the best way to learn concepts is by going through examples. Below we cover a few examples of how well-crafted prompts can be used to perform different types of tasks.
This section will provide more examples of how to use prompts to achieve different tasks and introduce key concepts along the way. Often, the best way to learn concepts is by going through examples. The few examples below illustrate how you can use well-crafted prompts to perform different types of tasks.
Topics:
- [Text Summarization](#text-summarization)
@@ -18,7 +18,7 @@ Topics:
## Text Summarization
One of the standard tasks in natural language generation is text summarization. Text summarization can include many different flavors and domains. In fact, one of the most promising applications of language models is the ability to summarize articles and concepts into quick and easy-to-read summaries. Let's try a basic summarization task using prompts.
Let's say I am interested to learn about antibiotics, I could try a prompt like this:
Let's say you are interested in learning about antibiotics; you could try a prompt like this:
*Prompt:*
```
@@ -32,7 +32,7 @@ A:
Antibiotics are a type of medication used to treat bacterial infections. They work by either killing the bacteria or preventing them from reproducing, allowing the body's immune system to fight off the infection. Antibiotics are usually taken orally in the form of pills, capsules, or liquid solutions, or sometimes administered intravenously. They are not effective against viral infections, and using them inappropriately can lead to antibiotic resistance.
```
The "A:" is an explicit prompt format that's used in question answering. I used it here to tell the model that there is an expected further. In this example, it's not clear how this is useful vs not using it but we will leave it that for later examples. Let's just assume that this is too much information and want to summarize it further. In fact, we can instruct the model to summarize into one sentence like so:
The "A:" is an explicit prompt format that you use in question answering. You used it here to tell the model that there is an expected further. In this example, it's not clear how this is useful vs not using it but we will leave it that for later examples. Let's just assume that this is too much information and you want to summarize it further. In fact, you can instruct the model to summarize into one sentence like so:
*Prompt:*
```
@@ -66,7 +66,7 @@ Mention the large language model based product mentioned in the paragraph above:
The large language model based product mentioned in the paragraph above is ChatGPT.
```
There are many ways we can improve the results above, but this is already very useful.
There are many ways you can improve the results above, but this is already very useful.
By now it should be obvious that you can ask the model to perform different tasks by simply instructing it what to do. That's a powerful capability that AI product developers are already using to build powerful products and experiences.
@@ -99,7 +99,7 @@ Context obtained from [Nature](https://www.nature.com/articles/d41586-023-00400-
---
## Text Classification
So far, we have used simple instructions to perform a task. As a prompt engineer, you will need to get better at providing better instructions. But that's not all! You will also find that for harder use cases, just providing instructions won't be enough. This is where you need to think more about the context and the different elements you can use in a prompt. Other elements you can provide are `input data` or `examples`.
So far, you have used simple instructions to perform a task. As a prompt engineer, you need to get better at providing clear instructions. But that's not all! You will also find that for harder use cases, just providing instructions won't be enough. This is where you need to think more about the context and the different elements you can use in a prompt. Other elements you can provide are `input data` or `examples`.
Let's try to demonstrate this by providing an example of text classification.
@@ -116,7 +116,7 @@ Sentiment:
Neutral
```
We gave the instruction to classify the text and the model responded with `'Neutral'` which is correct. Nothing is wrong with this but let's say that what we really need is for the model to give the label in the exact format we want. So instead of `Neutral` we want it to return `neutral`. How do we achieve this? There are different ways to do this. We care about specificity here, so the more information we can provide the prompt the better results. We can try providing examples to specify the correct behavior. Let's try again:
You gave the instruction to classify the text and the model responded with `'Neutral'`, which is correct. Nothing is wrong with this, but let's say that what you really need is for the model to give the label in the exact format you want. So instead of `Neutral`, you want it to return `neutral`. How do you achieve this? There are different ways to do this. You care about specificity here, so the more information you can provide in the prompt, the better the results. You can try providing examples to specify the correct behavior. Let's try again:
*Prompt:*
```
@@ -134,7 +134,7 @@ Sentiment:
neutral
```
Perfect! This time the model returned `neutral` which is the specific label I was looking for. It seems that the example provided in the prompt helped the model to be specific in its output.
Perfect! This time the model returned `neutral`, which is the specific label you were looking for. It seems that the example provided in the prompt helped the model to be specific in its output.
To highlight why sometimes being specific is important, check out the example below and spot the problem:
@@ -151,14 +151,14 @@ Sentiment:
Neutral
```
What is the problem here? As a hint, the made up `nutral` label is completely ignored by the model. Instead, the model outputs `Neutral` as it has some bias towards that label. But let's assume that what we really want is `nutral`. How would you fix this? Maybe you can try adding descriptions to the labels or add more examples to the prompt? If you are not sure, we will discuss a few ideas in the upcoming sections.
What is the problem here? As a hint, the made-up `nutral` label is completely ignored by the model. Instead, the model outputs `Neutral` as it has some bias towards that label. But let's assume that what you really want is `nutral`. How would you fix this? Maybe you can try adding descriptions to the labels or add more examples to the prompt? If you are not sure, we will discuss a few ideas in the upcoming sections.
---
## Conversation
Perhaps one of the more interesting things you can achieve with prompt engineering is instructing the LLM system on how to behave, its intent, and its identity. This is particularly useful when you are building conversational systems like customer service chatbots.
For instance, let's create a conversational system that's able to generate more technical and scientific responses to questions. Note how we are explicitly telling it how to behave through the instruction. This is sometimes referred to as *role prompting*.
For instance, let's create a conversational system that's able to generate more technical and scientific responses to questions. Note how you are explicitly telling it how to behave through the instruction. This is sometimes referred to as *role prompting*.
*Prompt:*
```
@@ -192,7 +192,7 @@ AI:
Sure! Black holes are created when a very massive star runs out of fuel and collapses in on itself. The gravity of the star is so strong that nothing, not even light, can escape its pull. This creates a point in space-time with an intense gravitational force called a black hole.
```
I think we made some progress. You can continue improving it. I am sure if you add more examples you might get even better results.
You made some progress. You can continue improving it. If you add more examples, you might get even better results.
---
@@ -214,9 +214,9 @@ let name = prompt("What is your name?");
console.log(`Hello, ${name}!`);
```
You can see that we didn't even need to specify the language to use.
Notice that you didn't even need to specify the language to use.
Let's switch levels a bit. I want to show you how powerful LLMs can be with a little more effort in designing the prompts.
Let's switch levels a bit. The example below shows how powerful LLMs can be with a little more effort in designing the prompts.
*Prompt:*
```
@@ -234,12 +234,12 @@ FROM students
WHERE DepartmentId IN (SELECT DepartmentId FROM departments WHERE DepartmentName = 'Computer Science');
```
This is very impressive. In this case, we provided data about the database schema and asked it to generate a valid MySQL query.
This is very impressive. In this case, you provided data about the database schema and asked it to generate a valid MySQL query.
---
## Reasoning
Perhaps one of the most difficult tasks for an LLM today is one that requires some form of reasoning. Reasoning is one of the areas that I am most excited about due to the types of complex applications that can emerge from LLMs.
Perhaps one of the most difficult tasks for an LLM today is one that requires some form of reasoning. Reasoning is one of the most interesting areas due to the types of complex applications that can emerge from LLMs.
There have been some improvements in tasks involving mathematical capabilities. That said, it's important to note that current LLMs struggle to perform reasoning tasks so this requires even more advanced prompt engineering techniques. We will cover these advanced techniques in the next guide. For now, we will cover a few basic examples to show arithmetic capabilities.

@@ -1,100 +1,104 @@
# General Tips for Designing Prompts
# General Tips for Designing Prompts
Here are some tips to keep in mind while you are designing your prompts:
Here are some tips to keep in mind while you are designing your prompts:
### Start Simple
As you get started with designing prompts, you should keep in mind that it is really an iterative process that requires a lot of experimentation to get optimal results. Using a simple playground like OpenAI or Cohere's is a good starting point.
### Start Simple
You can start with simple prompts and keep adding more elements and context as you aim for better results. Versioning your prompt along the way is vital for this reason. As we read the guide you will see many examples where specificity, simplicity, and conciseness will often give you better results.
As you get started with designing prompts, you should keep in mind that it is really an iterative process that requires a lot of experimentation to get optimal results. Using a simple playground like OpenAI's or Cohere's is a good starting point.
When you have a big task that involves many different subtasks, you can try to break down the task into simpler subtasks and keep building up as you get better results. This avoids adding too much complexity to the prompt design process at the beginning.
You can start with simple prompts and keep adding more elements and context as you aim for better results. Iterating on your prompt along the way is vital for this reason. As you read through the guide, you will see many examples where specificity, simplicity, and conciseness will often give you better results.
### The Instruction
You can design effective prompts for various simple tasks by using commands to instruct the model what you want to achieve such as "Write", "Classify", "Summarize", "Translate", "Order", etc.
When you have a big task that involves many different subtasks, you can try to break the task down into simpler subtasks and keep building up as you get better results. This avoids adding too much complexity to the prompt design process at the beginning.
Keep in mind that you also need to experiment a lot to see what works best. Try different instructions with different keywords, contexts, and data and see what works best for your particular use case and task. Usually, the more specific and relevant the context is to the task you are trying to perform, the better. We will touch on the importance of sampling and adding more context in the upcoming guides.
### The Instruction
Others recommend that instructions are placed at the beginning of the prompt. It's also recommended that some clear separator like "###" is used to separate the instruction and context.
You can design effective prompts for various simple tasks by using commands to instruct the model on what you want to achieve, such as "Write", "Classify", "Summarize", "Translate", "Order", etc.
For instance:
Keep in mind that you also need to experiment a lot to see what works best. Try different instructions with different keywords, contexts, and data, and see what works best for your particular use case and task. In general, the more specific and relevant the context is to the task you are trying to perform, the better. We will touch on the importance of sampling and adding more context in the upcoming guides.
*Prompt:*
Others recommend that instructions be placed at the beginning of the prompt. It is also recommended to use a clear separator like "###" to separate the instruction from the context.
For instance:
*Prompt:*
```
### Instruction ###
Translate the text below to Spanish:
Translate the text below to Spanish:
Text: "hello!"
Texte: "hello!"
```
*Output:*
*Output:*
```
¡Hola!
```
### Specificity
Be very specific about the instruction and task you want the model to perform. The more descriptive and detailed the prompt is, the better the results. This is particularly important when you have a desired outcome or style of generation you are seeking. There aren't specific tokens or keywords that lead to better results. It's more important to have a good format and descriptive prompt. In fact, providing examples in the prompt is very effective to get desired output in specific formats.
### Specificity
Be very specific about the instruction and task you want the model to perform. The more descriptive and detailed the prompt is, the better the results. This is particularly important when you are looking for a particular outcome or style of generation. There aren't specific tokens or keywords that lead to better results. It is more important to have a good format and a descriptive prompt. In fact, providing examples in the prompt is very effective for getting the desired output in specific formats.
When designing prompts you should also keep in mind the length of the prompt as there are limitations regarding how long this can be. Thinking about how specific and detailed you should be is something to consider. Including too many unnecessary details is not necessarily a good approach. The details should be relevant and contribute to the task at hand. This is something you will need to experiment with a lot. We encourage a lot of experimentation and iteration to optimize prompts for your applications.
When designing prompts, you should also keep in mind the length of the prompt, since there are limits on how long it can be. Think about how specific and detailed you need to be. Including too many unnecessary details is not necessarily a good approach. The details should be relevant and contribute to the task at hand. This is something you will need to experiment with a lot. We encourage experimentation and iteration to optimize your prompts for your applications.
As an example, let's try a simple prompt to extract specific information from a piece of text.
As an example, let's try a simple prompt to extract specific information from a piece of text:
*Prompt:*
*Prompt:*
```
Extract the name of places in the following text.
Extract the names of the places in the following text.
Desired format:
Place: <comma_separated_list_of_company_names>
Desired format:
Place: <comma_separated_list_of_company_names>
Input: "Although these developments are encouraging to researchers, much is still a mystery. “We often have a black box between the brain and the effect we see in the periphery,” says Henrique Veiga-Fernandes, a neuroimmunologist at the Champalimaud Centre for the Unknown in Lisbon. “If we want to use it in the therapeutic context, we actually need to understand the mechanism.""
Input : "Bien que ces développements soient encourageants pour les chercheurs, beaucoup de choses restent encore mystérieuses. "Nous avons souvent une boîte noire entre le cerveau et l'effet que nous observons à la périphérie", explique Henrique Veiga-Fernandes, neuro-immunologiste au Centre Champalimaud pour l'Inconnu à Lisbonne. "Si nous voulons l'utiliser dans un contexte thérapeutique, nous devons comprendre le mécanisme.""
```
*Output:*
*Output:*
```
Place: Champalimaud Centre for the Unknown, Lisbon
Place: Champalimaud Centre for the Unknown, Lisbon
```
Input text is obtained from [this Nature article](https://www.nature.com/articles/d41586-023-00509-z).
The text is obtained from [this Nature article](https://www.nature.com/articles/d41586-023-00509-z) and translated into French.
### Avoid Impreciseness
### Avoid Impreciseness
Given the tips above about being detailed and improving format, it's easy to fall into the trap of wanting to be too clever about prompts and potentially creating imprecise descriptions. It's often better to be specific and direct. The analogy here is very similar to effective communication -- the more direct, the more effective the message gets across.
Given the tips above about being detailed and improving the format, it is easy to fall into the trap of wanting to be too clever about prompts and potentially creating imprecise descriptions. It is often better to be specific and direct. The analogy here is very similar to effective communication: the more direct the message, the more effectively it gets across.
For example, you might be interested in learning the concept of prompt engineering. You might try something like:
For example, you might be interested in learning the concept of prompt engineering. You might try something like:
```
Explain the concept prompt engineering. Keep the explanation short, only a few sentences, and don't be too descriptive.
Explain the concept of prompt engineering. Keep the explanation short, only a few sentences, and don't be too descriptive.
```
It's not clear from the prompt above how many sentences to use and what style. You might still somewhat get good responses with the above prompts but the better prompt would be one that is very specific, concise, and to the point. Something like:
It is not clear from the prompt above how many sentences to use or what style to adopt. You might still get somewhat good responses with the prompt above, but a better prompt would be one that is very specific, concise, and to the point. Something like:
```
Use 2-3 sentences to explain the concept of prompt engineering to a high school student.
Use 2-3 sentences to explain the concept of prompt engineering to a high school student.
```
### To do or not to do?
Another common tip when designing prompts is to avoid saying what not to do but say what to do instead. This encourages more specificity and focuses on the details that lead to good responses from the model.
### To do or not to do?
Another common tip when designing prompts is to avoid saying what not to do and to say what to do instead. This encourages more specificity and focuses on the details that lead to good responses from the model.
Here is an example of a movie recommendation chatbot failing at exactly what I don't want it to do because of how I wrote the instruction -- focusing on what not to do.
Here is an example of a movie recommendation chatbot failing at exactly what I don't want it to do because of how I wrote the instruction, focusing on what not to do.
*Prompt:*
*Prompt:*
```
The following is an agent that recommends movies to a customer. DO NOT ASK FOR INTERESTS. DO NOT ASK FOR PERSONAL INFORMATION.
The following is an agent that recommends movies to a customer. DO NOT ASK FOR INTERESTS. DO NOT ASK FOR PERSONAL INFORMATION.
Customer: Please recommend a movie based on my interests.
Agent:
Customer: Please recommend a movie based on my interests.
Agent:
```
*Output:*
*Output:*
```
Sure, I can recommend a movie based on your interests. What kind of movie would you like to watch? Do you prefer action, comedy, romance, or something else?
```
Here is a better prompt:
*Prompt:*
*Prompt:*
```
The following is an agent that recommends movies to a customer. The agent is responsible for recommending a movie from the top global trending movies. It should refrain from asking users for their preferences and avoid asking for personal information. If the agent doesn't have a movie to recommend, it should respond "Sorry, couldn't find a movie to recommend today
The following is an agent that recommends movies to a customer. The agent is responsible for recommending a movie from the top global trending movies. It should refrain from asking users for their preferences and avoid asking for personal information. If the agent doesn't have a movie to recommend, it should respond "Sorry, couldn't find a movie to recommend today".
Customer: Recommend a movie based on my interests.
Agent:
@@ -105,4 +109,4 @@ Agent:
Sorry, I don't have any information about your preferences. However, here is a list of the most popular movies in the world right now: [list of movies]. I hope you find something you like!
```
Some of the examples above were adapted from the ["Best practices for prompt engineering with OpenAI API" article](https://help.openai.com/en/articles/6654000-best-practices-for-prompt-engineering-with-openai-api).
