# Using Chains

Calling an LLM is a great first step, but it's just the beginning.
Normally when you use an LLM in an application, you are not sending user input directly to the LLM.
Instead, you are probably taking user input and constructing a prompt, and then sending that to the LLM.

For example, in the previous example, the text we passed in was hardcoded to ask for a name for a company that made colorful socks.
In this imaginary service, what we would want to do is take only the user input describing what the company does, and then format the prompt with that information.

This is easy to do with LangChain!

First, let's define the prompt:

```python
from langchain.prompts import PromptTemplate

prompt = PromptTemplate(
    input_variables=["product"],
    template="What is a good name for a company that makes {product}?",
)
```
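
To see what the formatted prompt looks like, you can fill in the template yourself (illustrative only; the chain below does this step for you):

```python
# Fill the template with a sample product value.
prompt.format(product="colorful socks")
# -> "What is a good name for a company that makes colorful socks?"
```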

We can now create a very simple chain that will take user input, format the prompt with it, and then send it to the LLM:

```python
from langchain.chains import LLMChain

chain = LLMChain(llm=llm, prompt=prompt)
```
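
Here `llm` is assumed to be the LLM wrapper created in the previous example; a minimal sketch, assuming an OpenAI model, might look like:

```python
from langchain.llms import OpenAI

# Assumed setup from the previous example; any LLM wrapper can be passed to LLMChain.
llm = OpenAI(temperature=0.9)
```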

Now we can run that chain, specifying only the product!

```python
chain.run("colorful socks")
```

There we go! There's the first chain.

That is it for the Getting Started example.
As a next step, we would suggest checking out the more complex chains in the [Demos section](/examples/demos).