
Using Chains

Calling an LLM is a great first step, but it's just the beginning. Normally, when you use an LLM in an application, you are not sending user input directly to the LLM. Instead, you are probably taking the user input, constructing a prompt from it, and then sending that to the LLM.

For example, in the previous section, the text we passed in was hardcoded to ask for a name for a company that makes colorful socks. In this imaginary service, what we would want to do is take only the user input describing what the company does, and then format the prompt with that information.

This is easy to do with LangChain!

First, let's define the prompt:

from langchain.prompts import Prompt

# A prompt template with one input variable, "product"
prompt = Prompt(
    input_variables=["product"],
    template="What is a good name for a company that makes {product}?",
)
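
If you want to check what the final prompt looks like before sending it anywhere, you can render the template yourself. This is a minimal sketch, assuming the prompt object exposes a format method that takes the input variables as keyword arguments:

# Render the template by hand to inspect the final prompt text.
# Assumes Prompt exposes format() taking the input variables as keyword arguments.
print(prompt.format(product="colorful socks"))
# -> What is a good name for a company that makes colorful socks?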

We can now create a very simple chain that will take user input, format the prompt with it, and then send it to the LLM:

from langchain.chains import LLMChain

# Combine the prompt template and the LLM into a single chain.
chain = LLMChain(llm=llm, prompt=prompt)
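
Here llm is the LLM wrapper created in the previous Getting Started section. If you are jumping straight to this page, a minimal sketch of that setup (assuming the OpenAI wrapper and an OPENAI_API_KEY in your environment) looks like this:

# Sketch of the LLM setup from the previous section.
# Assumes the OpenAI wrapper and an OPENAI_API_KEY environment variable.
from langchain.llms import OpenAI

llm = OpenAI(temperature=0.9)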

Now we can run that chain, specifying only the product!

chain.run("colorful socks")

There we go! That's the first chain.

That is it for the Getting Started example. As a next step, we suggest checking out the more complex chains in the Demos section.