Change Chain Docs (#3537)

Co-authored-by: engkheng <60956360+outday29@users.noreply.github.com>
commit b49ee372f1
parent cf71b5d396
Author: Zander Chase
Date: 2023-04-25 10:51:09 -07:00
Committed by: GitHub


@@ -26,12 +26,13 @@
 "\n",
 "The `LLMChain` is a simple chain that takes in a prompt template, formats it with the user input and returns the response from an LLM.\n",
 "\n",
+"\n",
 "To use the `LLMChain`, first create a prompt template."
 ]
 },
 {
 "cell_type": "code",
-"execution_count": 2,
+"execution_count": 1,
 "metadata": {
 "tags": []
 },
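For reference, the `LLMChain` pattern described in the markdown cell of this hunk can be sketched as follows. This is a minimal sketch, not the notebook's exact cell: the `OpenAI` model, the `temperature`, and the `{product}` prompt wording are assumptions, and an `OPENAI_API_KEY` is assumed to be configured.

    from langchain import LLMChain, OpenAI, PromptTemplate

    # Build a prompt template with one input variable, then wrap it in an LLMChain.
    prompt_template = "What is a good name for a company that makes {product}?"
    llm_chain = LLMChain(
        llm=OpenAI(temperature=0),  # assumes OPENAI_API_KEY is set in the environment
        prompt=PromptTemplate.from_template(prompt_template),
    )

    # The chain formats the template with the user input and returns the LLM's response.
    print(llm_chain.run("colorful socks"))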
@@ -56,7 +57,7 @@
 },
 {
 "cell_type": "code",
-"execution_count": 3,
+"execution_count": 2,
 "metadata": {
 "tags": []
 },
@@ -67,7 +68,7 @@
 "text": [
 "\n",
 "\n",
-"Cheerful Toes.\n"
+"SockSplash!\n"
 ]
 }
 ],
@@ -88,7 +89,7 @@
 },
 {
 "cell_type": "code",
-"execution_count": 4,
+"execution_count": 3,
 "metadata": {
 "tags": []
 },
@@ -97,7 +98,7 @@
 "name": "stdout",
 "output_type": "stream",
 "text": [
-"Rainbow Footwear Co.\n"
+"Rainbow Sox Co.\n"
 ]
 }
 ],
@@ -130,17 +131,17 @@
 },
 {
 "cell_type": "code",
-"execution_count": 6,
+"execution_count": 4,
 "metadata": {},
 "outputs": [
 {
 "data": {
 "text/plain": [
-"{'adjective': 'lame',\n",
+"{'adjective': 'corny',\n",
 " 'text': 'Why did the tomato turn red? Because it saw the salad dressing!'}"
 ]
 },
-"execution_count": 6,
+"execution_count": 4,
 "metadata": {},
 "output_type": "execute_result"
 }
@@ -153,7 +154,7 @@
 " prompt=PromptTemplate.from_template(prompt_template)\n",
 ")\n",
 "\n",
-"llm_chain(inputs={\"adjective\":\"lame\"})"
+"llm_chain(inputs={\"adjective\":\"corny\"})"
 ]
 },
 {
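The cell updated above calls the chain object directly with an inputs dictionary. A hedged sketch of that calling convention: by default `__call__` returns both the inputs and the output keys, and `return_only_outputs=True` trims the echoed inputs. The joke prompt wording and model settings below are assumptions chosen to match the output shown in the previous hunk.

    from langchain import LLMChain, OpenAI, PromptTemplate

    llm_chain = LLMChain(
        llm=OpenAI(temperature=0),
        prompt=PromptTemplate.from_template("Tell me a {adjective} joke"),
    )

    # Returns inputs plus outputs: {'adjective': 'corny', 'text': '...'}
    llm_chain(inputs={"adjective": "corny"})

    # Returns only the output keys: {'text': '...'}
    llm_chain({"adjective": "corny"}, return_only_outputs=True)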
@@ -165,7 +166,7 @@
 },
 {
 "cell_type": "code",
-"execution_count": 7,
+"execution_count": 5,
 "metadata": {},
 "outputs": [
 {
@@ -174,20 +175,69 @@
 "{'text': 'Why did the tomato turn red? Because it saw the salad dressing!'}"
 ]
 },
+"execution_count": 5,
+"metadata": {},
+"output_type": "execute_result"
+}
+],
+"source": [
+"llm_chain(\"corny\", return_only_outputs=True)"
+]
+},
+{
+"attachments": {},
+"cell_type": "markdown",
+"metadata": {},
+"source": [
+"If the `Chain` only outputs one output key (i.e. only has one element in its `output_keys`), you can use `run` method. Note that `run` outputs a string instead of a dictionary."
+]
+},
+{
+"cell_type": "code",
+"execution_count": 6,
+"metadata": {},
+"outputs": [
+{
+"data": {
+"text/plain": [
+"['text']"
+]
+},
+"execution_count": 6,
+"metadata": {},
+"output_type": "execute_result"
+}
+],
+"source": [
+"# llm_chain only has one output key, so we can use run\n",
+"llm_chain.output_keys"
+]
+},
+{
+"cell_type": "code",
+"execution_count": 7,
+"metadata": {},
+"outputs": [
+{
+"data": {
+"text/plain": [
+"'Why did the tomato turn red? Because it saw the salad dressing!'"
+]
+},
 "execution_count": 7,
 "metadata": {},
 "output_type": "execute_result"
 }
 ],
 "source": [
-"llm_chain(\"lame\", return_only_outputs=True)"
+"llm_chain.run({\"adjective\":\"corny\"})"
 ]
 },
 {
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"If the `Chain` only takes one input key (i.e. only has one element in its `input_variables`), you can use `run` method. Note that `run` outputs a string instead of a dictionary."
+"In the case of one input key, you can input the string directly without specifying the input mapping."
 ]
 },
 {
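The cells added in this hunk document `output_keys` and the `run` helper. A small self-contained sketch of the behavior they describe, under the same assumptions as the sketch above:

    from langchain import LLMChain, OpenAI, PromptTemplate

    llm_chain = LLMChain(
        llm=OpenAI(temperature=0),
        prompt=PromptTemplate.from_template("Tell me a {adjective} joke"),
    )

    # `run` is only available when the chain has exactly one output key.
    llm_chain.output_keys                    # -> ['text']

    # `run` returns the bare string rather than a dictionary.
    llm_chain.run({"adjective": "corny"})    # -> 'Why did the tomato turn red? ...'

    # With a single input key, the string can be passed directly.
    llm_chain.run("corny")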
@@ -198,7 +248,8 @@
 {
 "data": {
 "text/plain": [
-"'Why did the tomato turn red? Because it saw the salad dressing!'"
+"{'adjective': 'corny',\n",
+" 'text': 'Why did the tomato turn red? Because it saw the salad dressing!'}"
 ]
 },
 "execution_count": 8,
@@ -206,42 +257,14 @@
 "output_type": "execute_result"
 }
 ],
-"source": [
-"llm_chain.run({\"adjective\":\"lame\"})"
-]
-},
-{
-"cell_type": "markdown",
-"metadata": {},
-"source": [
-"Besides, in the case of one input key, you can input the string directly without specifying the input mapping."
-]
-},
-{
-"cell_type": "code",
-"execution_count": 9,
-"metadata": {},
-"outputs": [
-{
-"data": {
-"text/plain": [
-"{'adjective': 'lame',\n",
-" 'text': 'Why did the tomato turn red? Because it saw the salad dressing!'}"
-]
-},
-"execution_count": 9,
-"metadata": {},
-"output_type": "execute_result"
-}
-],
 "source": [
 "# These two are equivalent\n",
-"llm_chain.run({\"adjective\":\"lame\"})\n",
-"llm_chain.run(\"lame\")\n",
+"llm_chain.run({\"adjective\":\"corny\"})\n",
+"llm_chain.run(\"corny\")\n",
 "\n",
 "# These two are also equivalent\n",
-"llm_chain(\"lame\")\n",
-"llm_chain({\"adjective\":\"lame\"})"
+"llm_chain(\"corny\")\n",
+"llm_chain({\"adjective\":\"corny\"})"
 ]
 },
 {
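The surviving cell above lists the equivalent calling conventions for a single-input chain. One caveat worth spelling out, as a hedged sketch with a hypothetical two-input prompt that is not part of the notebook: the bare-string shortcut only applies when there is exactly one input key; multi-input chains need the dictionary or keyword form.

    from langchain import LLMChain, OpenAI, PromptTemplate

    multi_input_chain = LLMChain(
        llm=OpenAI(temperature=0),
        prompt=PromptTemplate.from_template("Tell me a {adjective} joke about {subject}"),
    )

    # Both of these work: every input key is supplied explicitly.
    multi_input_chain.run({"adjective": "corny", "subject": "tomatoes"})
    multi_input_chain.run(adjective="corny", subject="tomatoes")

    # This raises an error: a single string is ambiguous when there are two input keys.
    # multi_input_chain.run("corny")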
@@ -262,7 +285,7 @@
 },
 {
 "cell_type": "code",
-"execution_count": 11,
+"execution_count": 9,
 "metadata": {},
 "outputs": [
 {
@@ -271,7 +294,7 @@
 "'The next four colors of a rainbow are green, blue, indigo, and violet.'"
 ]
 },
-"execution_count": 11,
+"execution_count": 9,
 "metadata": {},
 "output_type": "execute_result"
 }
@@ -309,7 +332,7 @@
 },
 {
 "cell_type": "code",
-"execution_count": 13,
+"execution_count": 10,
 "metadata": {},
 "outputs": [
 {
@@ -336,7 +359,7 @@
 "'ChatGPT is an AI language model developed by OpenAI. It is based on the GPT-3 architecture and is capable of generating human-like responses to text prompts. ChatGPT has been trained on a massive amount of text data and can understand and respond to a wide range of topics. It is often used for chatbots, virtual assistants, and other conversational AI applications.'"
 ]
 },
-"execution_count": 13,
+"execution_count": 10,
 "metadata": {},
 "output_type": "execute_result"
 }
@@ -365,7 +388,7 @@
 },
 {
 "cell_type": "code",
-"execution_count": 14,
+"execution_count": 11,
 "metadata": {},
 "outputs": [],
 "source": [
@@ -385,7 +408,7 @@
 },
 {
 "cell_type": "code",
-"execution_count": 15,
+"execution_count": 12,
 "metadata": {},
 "outputs": [
 {
@@ -398,12 +421,12 @@
 "\u001b[36;1m\u001b[1;3mRainbow Socks Co.\u001b[0m\n",
 "\u001b[33;1m\u001b[1;3m\n",
 "\n",
-"\"Step into Color with Rainbow Socks Co!\"\u001b[0m\n",
+"\"Step into Color with Rainbow Socks!\"\u001b[0m\n",
 "\n",
 "\u001b[1m> Finished chain.\u001b[0m\n",
 "\n",
 "\n",
-"\"Step into Color with Rainbow Socks Co!\"\n"
+"\"Step into Color with Rainbow Socks!\"\n"
 ]
 }
 ],
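The colored trace in this hunk (a company name, then a slogan, ending with the "> Finished chain." marker) is the verbose output of a sequential chain. The cell's source is not part of this diff, so the following is only a sketch of how two `LLMChain`s are typically composed with `SimpleSequentialChain`; the prompts and model settings are assumptions.

    from langchain import LLMChain, OpenAI, PromptTemplate
    from langchain.chains import SimpleSequentialChain

    llm = OpenAI(temperature=0)

    # First step: product description -> company name.
    name_chain = LLMChain(
        llm=llm,
        prompt=PromptTemplate.from_template(
            "What is a good name for a company that makes {product}?"
        ),
    )

    # Second step: company name -> catchphrase.
    slogan_chain = LLMChain(
        llm=llm,
        prompt=PromptTemplate.from_template(
            "Write a catchphrase for the following company: {company_name}"
        ),
    )

    # verbose=True prints the per-step colored trace seen in the output above.
    overall_chain = SimpleSequentialChain(chains=[name_chain, slogan_chain], verbose=True)
    print(overall_chain.run("colorful socks"))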
@@ -434,7 +457,7 @@
 },
 {
 "cell_type": "code",
-"execution_count": 16,
+"execution_count": 13,
 "metadata": {},
 "outputs": [],
 "source": [
@@ -468,12 +491,13 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"Now, we can try running the chain that we called."
+"Now, we can try running the chain that we called.\n",
+"\n"
 ]
 },
 {
 "cell_type": "code",
-"execution_count": 17,
+"execution_count": 14,
 "metadata": {},
 "outputs": [
 {
@@ -483,7 +507,7 @@
 "Concatenated output:\n",
 "\n",
 "\n",
-"Kaleidoscope Socks.\n",
+"Socktastic Colors.\n",
 "\n",
 "\"Put Some Color in Your Step!\"\n"
 ]
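The "Concatenated output:" header in this hunk comes from a custom chain defined earlier in the notebook, whose definition is not part of this diff. For orientation only, a custom chain of that shape is typically written by subclassing `Chain` and declaring its input and output keys; the class name `ConcatenateChain`, the field names, and the `concat_output` key below are assumptions rather than quotes from the commit.

    from typing import Dict, List

    from langchain.chains import LLMChain
    from langchain.chains.base import Chain


    class ConcatenateChain(Chain):
        """Hypothetical custom chain: run two sub-chains and concatenate their outputs."""

        chain_1: LLMChain
        chain_2: LLMChain

        @property
        def input_keys(self) -> List[str]:
            # Union of the input keys of the two sub-chains.
            return list(set(self.chain_1.input_keys) | set(self.chain_2.input_keys))

        @property
        def output_keys(self) -> List[str]:
            return ["concat_output"]

        def _call(self, inputs: Dict[str, str]) -> Dict[str, str]:
            output_1 = self.chain_1.run(inputs)
            output_2 = self.chain_2.run(inputs)
            return {"concat_output": output_1 + "\n" + output_2}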
@@ -531,7 +555,7 @@
 "name": "python",
 "nbconvert_exporter": "python",
 "pygments_lexer": "ipython3",
-"version": "3.10.10"
+"version": "3.8.16"
 },
 "vscode": {
 "interpreter": {