# LLM Chains
Calling an LLM is a great first step, but it's just the beginning.
Normally when you use an LLM in an application, you are not sending user input directly to the LLM.
Instead, you are probably taking user input and constructing a prompt, and then sending that to the LLM.
For example, in the previous example, the text we passed in was hardcoded to ask for a name for a company that made colorful socks.
In this imaginary service, what we would want to do is take only the user input describing what the company does, and then format the prompt with that information.
This is easy to do with LangChain!
First, let's define the prompt:
```python
from langchain.prompts import PromptTemplate

prompt = PromptTemplate(
    input_variables=["product"],
    template="What is a good name for a company that makes {product}?",
)
```
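If you want to see what the formatted prompt looks like before wiring it into a chain, you can call `format` on the template directly (a quick sketch; the product value here is just an illustration):

```python
# Fill in the template with a concrete product description
print(prompt.format(product="colorful socks"))
# -> What is a good name for a company that makes colorful socks?
```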
We can now create a very simple chain that will take user input, format the prompt with it, and then send it to the LLM:
```python
from langchain.chains import LLMChain

# llm is the LLM wrapper created in the previous example
chain = LLMChain(llm=llm, prompt=prompt)
```
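This assumes `llm` is the LLM wrapper from the previous example. If you are starting from this page, a minimal setup might look like the following (assuming an OpenAI API key is available in your environment):

```python
from langchain.llms import OpenAI

# Hypothetical setup for this sketch; any LLM wrapper supported by LangChain works here
llm = OpenAI(temperature=0.9)
```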
Now we can run that chain, only specifying the product!
```python
chain.run("colorful socks")
```
There we go! There's the first chain - an LLM Chain.
This is one of the simpler types of chains, but understanding how it works will set you up well for working with more complex chains.