Docs: Fix spelling and grammar on Concepts page (#15364)

- **Description:**  Fix a few spelling and grammar issues
  - **Issue:** NA
  - **Dependencies:** NA
  - **Twitter handle:** @donovancmuller
 
<!-- Thank you for contributing to LangChain!

Please title your PR "<package>: <description>", where <package> is
whichever of langchain, community, core, experimental, etc. is being
modified.

Replace this entire comment with:
  - **Description:** a description of the change, 
  - **Issue:** the issue # it fixes if applicable,
  - **Dependencies:** any dependencies required for this change,
- **Twitter handle:** we announce bigger features on Twitter. If your PR
gets announced, and you'd like a mention, we'll gladly shout you out!

Please make sure your PR is passing linting and testing before
submitting. Run `make format`, `make lint` and `make test` from the root
of the package you've modified to check this locally.

See contribution guidelines for more information on how to write/run
tests, lint, etc: https://python.langchain.com/docs/contributing/

If you're adding a new integration, please include:
1. a test for the integration, preferably unit tests that do not rely on
network access,
2. an example notebook showing its use. It lives in
`docs/docs/integrations` directory.

If no one reviews your PR within a few days, please @-mention one of
@baskaryan, @eyurtsev, @hwchase17.
 -->

@@ -24,12 +24,12 @@ they take a list of chat messages as input and they return an AI message as outp
These two API types have pretty different input and output schemas. This means that the best way to interact with them may be quite different. Although LangChain makes it possible to treat them interchangeably, that doesn't mean you **should**. In particular, the prompting strategies for LLMs vs ChatModels may be quite different. This means that you will want to make sure the prompt you are using is designed for the model type you are working with.
-Additionally, not all models are the same. Different models have different prompting strategies that work best for them. For example, Anthropic's models work best with XML while OpenAI's work best with JSON. This means that the prompt you use for one model may not transfer to other ones. LangChain provides a lot of default prompts, however these are not garunteed to work well with the model are you using. Historically speaking, most prompts work well with OpenAI but are not heavily tested on other models. This is something we are working to address, but it is something you should keep in mind.
+Additionally, not all models are the same. Different models have different prompting strategies that work best for them. For example, Anthropic's models work best with XML while OpenAI's work best with JSON. This means that the prompt you use for one model may not transfer to other ones. LangChain provides a lot of default prompts, however these are not guaranteed to work well with the model you are using. Historically speaking, most prompts work well with OpenAI but are not heavily tested on other models. This is something we are working to address, but it is something you should keep in mind.
## Messages
-ChatModels take a list of messages as input and return a message. There are a few different types of messages. All messages have a `role` and a `content` property. The `role` describes WHO is saying the message. LangChain has different message classes for different roles. The `content` property described the content of the message. This can be a few different things:
+ChatModels take a list of messages as input and return a message. There are a few different types of messages. All messages have a `role` and a `content` property. The `role` describes WHO is saying the message. LangChain has different message classes for different roles. The `content` property describes the content of the message. This can be a few different things:
- A string (most models are this way)
- A List of dictionaries (this is used for multi-modal input, where the dictionary contains information about that input type and that input location)
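To make the roles and content shapes above concrete, here is a minimal sketch (an editor's illustration, not part of the committed docs) using LangChain's message classes. The `ChatOpenAI` model, the model name, and the example image URL are assumptions chosen for illustration; running it would require the `langchain-openai` package and an OpenAI API key.

```python
from langchain_core.messages import HumanMessage, SystemMessage
from langchain_openai import ChatOpenAI  # assumed provider; any ChatModel works similarly

# Each message class carries a role; `content` is usually a plain string...
messages = [
    SystemMessage(content="You are a helpful assistant."),
    HumanMessage(content="What is LangChain?"),
]

# ...but for multi-modal input it can be a list of dictionaries, each describing
# one input type and where that input lives.
multimodal_message = HumanMessage(
    content=[
        {"type": "text", "text": "Describe this image."},
        {"type": "image_url", "image_url": {"url": "https://example.com/photo.png"}},
    ]
)

# A ChatModel takes a list of messages as input and returns an AI message.
chat = ChatOpenAI(model="gpt-3.5-turbo")
ai_message = chat.invoke(messages)
print(ai_message.content)
```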
@@ -60,11 +60,11 @@ This represents the result of a tool call. This is distinct from a FunctionMessa
## Prompts
-The inputs to language models are often called prompts. Oftentimes, the user input from your app is not the direct input to the model. Rather, their input is transformed in some way to product the string or list of messages that does go into the model. The objects that take user input and transform it into the final string or messages are known as "Prompt Templates". LangChain provides several abstractions to make working with prompts easier.
+The inputs to language models are often called prompts. Oftentimes, the user input from your app is not the direct input to the model. Rather, their input is transformed in some way to produce the string or list of messages that does go into the model. The objects that take user input and transform it into the final string or messages are known as "Prompt Templates". LangChain provides several abstractions to make working with prompts easier.
### PromptValue
-ChatModels and LLMs take different input types. PromptValue is class designed to be interoptable between the two. It exposes a method to be cast to a string (to work with LLMs) and another to be cast to a list of messages (to work with ChatModels).
+ChatModels and LLMs take different input types. PromptValue is a class designed to be interoperable between the two. It exposes a method to be cast to a string (to work with LLMs) and another to be cast to a list of messages (to work with ChatModels).
### PromptTemplate

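As a rough sketch of how the prompt pieces above fit together (an editor's illustration, not part of the committed docs), a `ChatPromptTemplate` formats user input into a `PromptValue`, which can then be cast to either a string for an LLM or a list of messages for a ChatModel:

```python
from langchain_core.prompts import ChatPromptTemplate

# A prompt template transforms user input into the final prompt.
prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful assistant. Answer in {language}."),
    ("human", "{question}"),
])

# format_prompt() returns a PromptValue, interoperable between LLMs and ChatModels.
prompt_value = prompt.format_prompt(language="French", question="What is LangChain?")

print(prompt_value.to_string())    # cast to a string (for LLMs)
print(prompt_value.to_messages())  # cast to a list of messages (for ChatModels)
```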