From 553a520ab6c55fa6dd484cd0f46554877811666d Mon Sep 17 00:00:00 2001
From: Anubhav Madhav
Date: Sat, 16 Mar 2024 04:01:39 +0530
Subject: [PATCH] docs: Fixed Grammar in Considerations of Model I/O Concepts
 (#19091)

Fixed Grammar in Considerations of Model I/O Concepts documentation page

- Update concepts.mdx

Page Link: https://python.langchain.com/docs/modules/model_io/concepts#considerations

- **Description:** Fixed Grammar in Considerations of Model I/O Documentation Page
- **Issue:** "to work well with the model are you using" -> "to work well with the model you are using"
- **Dependencies:** None
- **Twitter handle:** @Anubhav_Madhav (https://twitter.com/Anubhav_Madhav)

If no one reviews your PR within a few days, please @-mention one of baskaryan, efriis, eyurtsev, hwchase17.

Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
---
 docs/docs/modules/model_io/concepts.mdx | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/docs/modules/model_io/concepts.mdx b/docs/docs/modules/model_io/concepts.mdx
index b9542076df..66ed70a0aa 100644
--- a/docs/docs/modules/model_io/concepts.mdx
+++ b/docs/docs/modules/model_io/concepts.mdx
@@ -24,7 +24,7 @@ they take a list of chat messages as input and they return an AI message as outp
 These two API types have pretty different input and output schemas. This means that best way to interact with them may be quite different. Although LangChain makes it possible to treat them interchangeably, that doesn't mean you **should**. In particular, the prompting strategies for LLMs vs ChatModels may be quite different. This means that you will want to make sure the prompt you are using is designed for the model type you are working with.
 
-Additionally, not all models are the same. Different models have different prompting strategies that work best for them. For example, Anthropic's models work best with XML while OpenAI's work best with JSON. This means that the prompt you use for one model may not transfer to other ones. LangChain provides a lot of default prompts, however these are not guaranteed to work well with the model are you using. Historically speaking, most prompts work well with OpenAI but are not heavily tested on other models. This is something we are working to address, but it is something you should keep in mind.
+Additionally, not all models are the same. Different models have different prompting strategies that work best for them. For example, Anthropic's models work best with XML while OpenAI's work best with JSON. This means that the prompt you use for one model may not transfer to other ones. LangChain provides a lot of default prompts, however these are not guaranteed to work well with the model you are using. Historically speaking, most prompts work well with OpenAI but are not heavily tested on other models. This is something we are working to address, but it is something you should keep in mind.
 
 ## Messages
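
The docs section touched by this patch is contrasting the two model I/O API types: LLMs take a string in and return a string, while ChatModels take a list of chat messages in and return an AI message. A minimal sketch of that distinction is below, assuming the `langchain-openai` integration package is installed and `OPENAI_API_KEY` is set; the model names and prompt text are illustrative only.

```python
# Minimal sketch of the two API types described in concepts.mdx.
# Assumes `pip install langchain-openai` and an OPENAI_API_KEY in the environment.
from langchain_core.messages import HumanMessage
from langchain_openai import OpenAI, ChatOpenAI

# LLM: string in, string out.
llm = OpenAI(model="gpt-3.5-turbo-instruct")  # illustrative model name
text_out = llm.invoke("In one sentence, why are prompts model-specific?")
print(type(text_out))  # <class 'str'>

# ChatModel: list of chat messages in, AI message out.
chat = ChatOpenAI(model="gpt-3.5-turbo")  # illustrative model name
message_out = chat.invoke(
    [HumanMessage(content="In one sentence, why are prompts model-specific?")]
)
print(type(message_out))  # AIMessage
```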