From 6bc44f3de2213ea8e7a89edb2289b5fa6cca9ecd Mon Sep 17 00:00:00 2001
From: njaci1
Date: Tue, 2 Apr 2024 11:40:43 +0300
Subject: [PATCH] Update basics.en.mdx for readability.

Edited this sentence to read "You can also define an assistant message
to..." instead of "You can also use define an assistant message to..."
by removing the word "use".
---
 pages/introduction/basics.en.mdx | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/pages/introduction/basics.en.mdx b/pages/introduction/basics.en.mdx
index 1a16b36..8e3ece5 100644
--- a/pages/introduction/basics.en.mdx
+++ b/pages/introduction/basics.en.mdx
@@ -24,7 +24,7 @@ If you are using the OpenAI Playground or any other LLM playground, you can prom
 
-Something to note is that when using the OpenAI chat models like `gpt-3.5-turbo` or `gpt-4`, you can structure your prompt using three different roles: `system`, `user`, and `assistant`. The system message is not required but helps to set the overall behavior of the assistant. The example above only includes a user message which you can use to directly prompt the model. For simplicity, all of the examples, except when it's explicitly mentioned, will use only the `user` message to prompt the `gpt-3.5-turbo` model. The `assistant` message in the example above corresponds to the model response. You can also use define an assistant message to pass examples of the desired behavior you want. You can learn more about working with chat models [here](https://www.promptingguide.ai/models/chatgpt).
+Something to note is that when using the OpenAI chat models like `gpt-3.5-turbo` or `gpt-4`, you can structure your prompt using three different roles: `system`, `user`, and `assistant`. The system message is not required but helps to set the overall behavior of the assistant. The example above only includes a user message which you can use to directly prompt the model. For simplicity, all of the examples, except when it's explicitly mentioned, will use only the `user` message to prompt the `gpt-3.5-turbo` model. The `assistant` message in the example above corresponds to the model response. You can also define an assistant message to pass examples of the desired behavior you want. You can learn more about working with chat models [here](https://www.promptingguide.ai/models/chatgpt).
 
 You can observe from the prompt example above that the language model responds with a sequence of tokens that make sense given the context `"The sky is"`. The output might be unexpected or far from the task you want to accomplish. In fact, this basic example highlights the necessity to provide more context or instructions on what specifically you want to achieve with the system. This is what prompt engineering is all about.
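The three-role structure described in the patched paragraph can be sketched as the request payload sent to OpenAI's Chat Completions API. This is a minimal illustration, not part of the patch: the message strings are invented, and no API call is made (the payload is just inspected locally).

```python
# Sketch of a Chat Completions request payload using the three roles
# (`system`, `user`, `assistant`). The assistant message supplies an
# example of the desired behavior, as the edited sentence describes.
# Message contents below are illustrative, not from the guide.
payload = {
    "model": "gpt-3.5-turbo",
    "messages": [
        {"role": "system", "content": "You are a concise sentence completer."},
        {"role": "user", "content": "The sky is"},
        # An assistant message passed as an example of desired behavior:
        {"role": "assistant", "content": "blue on a clear day."},
        {"role": "user", "content": "The grass is"},
    ],
}

# Inspect the role sequence without calling the API.
roles = [message["role"] for message in payload["messages"]]
print(roles)  # ['system', 'user', 'assistant', 'user']
```

Sending this payload (e.g. via the `openai` client library) would prompt the model to continue the pattern established by the assistant example.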