From b697bbb5b596e5e8ef845ed90fbe1b2e07098cff Mon Sep 17 00:00:00 2001
From: ykerus <48921025+ykerus@users.noreply.github.com>
Date: Tue, 20 Jun 2023 07:03:38 +0200
Subject: [PATCH] Remove backticks without clear purpose from docs (#6442)

#### Description
- Removed two backticks surrounding the phrase "chat messages as"
- This phrase stood out among other formatted words/phrases such as `prompt`, `role`, `PromptTemplate`, etc., which all seem to have a clear function.
- `chat messages as`, formatted as such, confused me while reading, leading me to believe the backticks were misplaced.

#### Who can review?
@hwchase17
---
 .../modules/model_io/prompts/prompt_templates/get_started.mdx | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/snippets/modules/model_io/prompts/prompt_templates/get_started.mdx b/docs/snippets/modules/model_io/prompts/prompt_templates/get_started.mdx
index 8e4c8113..334ab84a 100644
--- a/docs/snippets/modules/model_io/prompts/prompt_templates/get_started.mdx
+++ b/docs/snippets/modules/model_io/prompts/prompt_templates/get_started.mdx
@@ -69,7 +69,7 @@ You can create custom prompt templates that format the prompt in any way you wan
 
 ## Chat prompt template
 
-[Chat Models](../models/chat) take a list of `chat messages as` input - this list commonly referred to as a `prompt`.
+[Chat Models](../models/chat) take a list of chat messages as input - this list commonly referred to as a `prompt`.
 These chat messages differ from raw string (which you would pass into a [LLM](/docs/modules/model_io/models/llms) model) in that every message is associated with a `role`.
 For example, in OpenAI [Chat Completion API](https://platform.openai.com/docs/guides/chat/introduction), a chat message can be associated with the AI, human or system role. The model is supposed to follow instruction from system chat message more closely.