diff --git a/docs/getting_started/getting_started.md b/docs/getting_started/getting_started.md
index 9988b3c484..4d276edd30 100644
--- a/docs/getting_started/getting_started.md
+++ b/docs/getting_started/getting_started.md
@@ -355,13 +355,15 @@ Similar to LLMs, you can make use of templating by using a `MessagePromptTemplat
 For convience, there is a `from_template` method exposed on the template. If you were to use this template, this is what it would look like:
 
 ```python
+from langchain.chat_models import ChatOpenAI
 from langchain.prompts.chat import (
     ChatPromptTemplate,
     SystemMessagePromptTemplate,
-    AIMessagePromptTemplate,
     HumanMessagePromptTemplate,
 )
 
+chat = ChatOpenAI(temperature=0)
+
 template="You are a helpful assistant that translates {input_language} to {output_language}."
 system_message_prompt = SystemMessagePromptTemplate.from_template(template)
 human_template="{text}"
@@ -380,11 +382,10 @@ The `LLMChain` discussed in the above section can be used with chat models as we
 
 ```python
 from langchain.chat_models import ChatOpenAI
-from langchain import PromptTemplate, LLMChain
+from langchain import LLMChain
 from langchain.prompts.chat import (
     ChatPromptTemplate,
     SystemMessagePromptTemplate,
-    AIMessagePromptTemplate,
     HumanMessagePromptTemplate,
 )
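
For context, a rough sketch of how the first example reads once this patch is applied. Everything after `human_template` (building the combined `ChatPromptTemplate` and invoking the model) is not part of the hunk; it is assumed from the surrounding guide and the pre-0.1 `langchain` API, so treat it as illustrative rather than the patch's own content:

```python
from langchain.chat_models import ChatOpenAI
from langchain.prompts.chat import (
    ChatPromptTemplate,
    SystemMessagePromptTemplate,
    HumanMessagePromptTemplate,
)

# Added by this patch so the snippet runs on its own.
chat = ChatOpenAI(temperature=0)

template = "You are a helpful assistant that translates {input_language} to {output_language}."
system_message_prompt = SystemMessagePromptTemplate.from_template(template)
human_template = "{text}"
human_message_prompt = HumanMessagePromptTemplate.from_template(human_template)

# Assumed continuation (not shown in the hunk): combine the message prompts
# and call the chat model with the formatted messages.
chat_prompt = ChatPromptTemplate.from_messages([system_message_prompt, human_message_prompt])
messages = chat_prompt.format_prompt(
    input_language="English", output_language="French", text="I love programming."
).to_messages()
chat(messages)
```

The second hunk only trims unused imports (`PromptTemplate`, `AIMessagePromptTemplate`) from the `LLMChain` example; the chain itself would presumably still be built along the lines of `LLMChain(llm=chat, prompt=chat_prompt)` as in the rest of the guide.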