fix(typo): Clarify the point of llm_chain (#7593)

Fixes a typo introduced in
https://github.com/hwchase17/langchain/pull/7080 by @hwchase17.

In the example (visible on [the online
documentation](https://api.python.langchain.com/en/latest/chains/langchain.chains.conversational_retrieval.base.ConversationalRetrievalChain.html#langchain-chains-conversational-retrieval-base-conversationalretrievalchain)),
the `llm_chain` variable is defined but never used, instead of being passed
as the question generator. This change renames it to `question_generator_chain`
and passes it into the chain, which makes the intent clearer.
Samuel ROZE 2023-07-12 15:31:00 +01:00 committed by GitHub
parent 6cdd4b5edc
commit f3c9bf5e4b

@@ -245,11 +245,11 @@ class ConversationalRetrievalChain(BaseConversationalRetrievalChain):
             )
             prompt = PromptTemplate.from_template(template)
             llm = OpenAI()
-            llm_chain = LLMChain(llm=llm, prompt=prompt)
+            question_generator_chain = LLMChain(llm=llm, prompt=prompt)
             chain = ConversationalRetrievalChain(
                 combine_docs_chain=combine_docs_chain,
                 retriever=retriever,
-                question_generator=question_generator,
+                question_generator=question_generator_chain,
             )
     """