fix(typo): Clarify the point of `llm_chain` (#7593)

Fixes a typo introduced in
https://github.com/hwchase17/langchain/pull/7080 by @hwchase17.

In the example (visible on [the online
documentation](https://api.python.langchain.com/en/latest/chains/langchain.chains.conversational_retrieval.base.ConversationalRetrievalChain.html#langchain-chains-conversational-retrieval-base-conversationalretrievalchain)),
the `llm_chain` variable is defined but never used, even though it is meant to serve as the question generator. This change renames it to `question_generator_chain` and passes it as the `question_generator` argument, making the example clearer.
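For readers coming from the API docs, here is a minimal, self-contained sketch of how the corrected example fits together. The FAISS vector store, embeddings, and prompt texts below are illustrative assumptions rather than part of the docstring; running it would require `faiss-cpu` installed and an OpenAI API key configured.

```python
from langchain.chains import ConversationalRetrievalChain, LLMChain, StuffDocumentsChain
from langchain.embeddings import OpenAIEmbeddings
from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate
from langchain.vectorstores import FAISS

llm = OpenAI()

# Chain that "stuffs" the retrieved documents into a QA prompt.
qa_prompt = PromptTemplate.from_template(
    "Use the following context to answer the question.\n"
    "Context: {context}\nQuestion: {question}\nAnswer:"
)
combine_docs_chain = StuffDocumentsChain(
    llm_chain=LLMChain(llm=llm, prompt=qa_prompt),
    document_variable_name="context",
)

# Chain that condenses chat history plus a follow-up into a standalone question.
# This is the chain the docstring example now names `question_generator_chain`.
condense_prompt = PromptTemplate.from_template(
    "Combine the chat history and follow up question into a standalone question.\n"
    "Chat History: {chat_history}\nFollow up question: {question}"
)
question_generator_chain = LLMChain(llm=llm, prompt=condense_prompt)

# Toy vector store so the retriever is concrete (illustrative only).
vectorstore = FAISS.from_texts(
    ["LangChain is a framework for building applications with LLMs."],
    OpenAIEmbeddings(),
)
retriever = vectorstore.as_retriever()

chain = ConversationalRetrievalChain(
    combine_docs_chain=combine_docs_chain,
    retriever=retriever,
    question_generator=question_generator_chain,
)

result = chain({"question": "What is LangChain?", "chat_history": []})
print(result["answer"])
```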
Samuel ROZE 1 year ago committed by GitHub
parent 6cdd4b5edc
commit f3c9bf5e4b

@@ -245,11 +245,11 @@ class ConversationalRetrievalChain(BaseConversationalRetrievalChain):
     )
     prompt = PromptTemplate.from_template(template)
     llm = OpenAI()
-    llm_chain = LLMChain(llm=llm, prompt=prompt)
+    question_generator_chain = LLMChain(llm=llm, prompt=prompt)
     chain = ConversationalRetrievalChain(
         combine_docs_chain=combine_docs_chain,
         retriever=retriever,
-        question_generator=question_generator,
+        question_generator=question_generator_chain,
     )
     """
