From 44dae6936b1c97b425164fff111fa0306270e388 Mon Sep 17 00:00:00 2001
From: Bagatur <22008038+baskaryan@users.noreply.github.com>
Date: Tue, 24 Oct 2023 11:53:55 -0400
Subject: [PATCH] Docs: Add LCEL to chains/foundational/llm (#12213)

---
 .../chains/foundational/llm_chain.ipynb       | 371 ++++++++++++++++++
 .../modules/chains/foundational/llm_chain.mdx | 171 --------
 2 files changed, 371 insertions(+), 171 deletions(-)
 create mode 100644 docs/docs/modules/chains/foundational/llm_chain.ipynb
 delete mode 100644 docs/docs/modules/chains/foundational/llm_chain.mdx

diff --git a/docs/docs/modules/chains/foundational/llm_chain.ipynb b/docs/docs/modules/chains/foundational/llm_chain.ipynb
new file mode 100644
index 0000000000..a331a79eba
--- /dev/null
+++ b/docs/docs/modules/chains/foundational/llm_chain.ipynb
@@ -0,0 +1,371 @@
+{
+ "cells": [
+  {
+   "cell_type": "markdown",
+   "id": "7d46647d-638f-497e-b51a-52bf8dd76e39",
+   "metadata": {},
+   "source": [
+    "# LLM\n",
+    "\n",
+    "The most common type of chaining in any LLM application is combining a prompt template with an LLM and optionally an output parser.\n",
+    "\n",
+    "The recommended way to do this is with LangChain Expression Language (LCEL). We also continue to support the legacy `LLMChain`, which is a single class for composing these three components."
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "id": "0ad20b88-f2e8-4ba0-b8e6-1892ab4d2190",
+   "metadata": {},
+   "source": [
+    "## Using LCEL\n",
+    "\n",
+    "`BasePromptTemplate`, `BaseLanguageModel`, and `BaseOutputParser` all implement the `Runnable` interface and are designed to be piped into one another, making LCEL composition very easy:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 13,
+   "id": "92ad7c9d-a1d2-49bd-a4a3-0f6f0fd1656b",
+   "metadata": {},
+   "outputs": [
+    {
+     "data": {
+      "text/plain": [
+       "'VibrantSocks'"
+      ]
+     },
+     "execution_count": 13,
+     "metadata": {},
+     "output_type": "execute_result"
+    }
+   ],
+   "source": [
+    "from langchain.prompts import PromptTemplate\n",
+    "from langchain.chat_models import ChatOpenAI\n",
+    "from langchain.schema import StrOutputParser\n",
+    "\n",
+    "prompt = PromptTemplate.from_template(\"What is a good name for a company that makes {product}?\")\n",
+    "runnable = prompt | ChatOpenAI() | StrOutputParser()\n",
+    "runnable.invoke({\"product\": \"colorful socks\"})"
+   ]
+  },
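+  {
+   "cell_type": "markdown",
+   "id": "5c4ee6b2-3a1f-4c5d-9e8a-2b7d0f6c1a3e",
+   "metadata": {},
+   "source": [
+    "Because the composed chain is itself a `Runnable`, it also picks up the rest of the `Runnable` interface, such as `batch` and `stream`, for free. A minimal sketch of `batch` (the second product name is just an illustrative input):"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "id": "8f2d3b41-6c5e-4f7a-b9d0-1e2a3c4d5b6f",
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "# Run the same chain over a list of inputs; Runnable.batch handles the fan-out\n",
+    "runnable.batch([{\"product\": \"colorful socks\"}, {\"product\": \"solar panels\"}])"
+   ]
+  },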
+  {
+   "cell_type": "markdown",
+   "id": "784d8083-a2c8-4172-92b8-0bd0d74f032a",
+   "metadata": {},
+   "source": [
+    "Head to the [LCEL](/docs/expression_language) section for more on the interface, built-in features, and cookbook examples."
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "id": "efee07bb-fc45-4e06-999f-a776e6d53333",
+   "metadata": {},
+   "source": [
+    "## [Legacy] LLMChain\n",
+    "An `LLMChain` is a simple chain that adds some functionality around language models. It is used widely throughout LangChain, including in other chains and agents.\n",
+    "\n",
+    "An `LLMChain` consists of a `PromptTemplate` and a language model (either an LLM or chat model). It formats the prompt template using the input key values provided (and memory key values, if available), passes the formatted string to the LLM, and returns the LLM output.\n",
+    "\n",
+    "### Get started"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 4,
+   "id": "fc0b7d6c-b808-48d9-bdb5-818ab4a1ccca",
+   "metadata": {},
+   "outputs": [
+    {
+     "data": {
+      "text/plain": [
+       "{'product': 'colorful socks', 'text': '\\n\\nSocktastic!'}"
+      ]
+     },
+     "execution_count": 4,
+     "metadata": {},
+     "output_type": "execute_result"
+    }
+   ],
+   "source": [
+    "from langchain.prompts import PromptTemplate\n",
+    "from langchain.llms import OpenAI\n",
+    "from langchain.chains import LLMChain\n",
+    "\n",
+    "prompt_template = \"What is a good name for a company that makes {product}?\"\n",
+    "\n",
+    "llm = OpenAI(temperature=0)\n",
+    "llm_chain = LLMChain(\n",
+    "    llm=llm,\n",
+    "    prompt=PromptTemplate.from_template(prompt_template)\n",
+    ")\n",
+    "llm_chain(\"colorful socks\")"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "id": "040634f0-fe60-4b0e-b3f6-e9c15146e2cd",
+   "metadata": {},
+   "source": [
+    "### Additional ways of running `LLMChain`\n",
+    "\n",
+    "Aside from the `__call__` and `run` methods shared by all `Chain` objects, `LLMChain` offers a few more ways of calling the chain logic:\n",
+    "\n",
+    "- `apply` allows you to run the chain against a list of inputs:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 5,
+   "id": "8cd8dd72-6d5a-488f-80a6-1a9324c743e8",
+   "metadata": {},
+   "outputs": [
+    {
+     "data": {
+      "text/plain": [
+       "[{'text': '\\n\\nSocktastic!'},\n",
+       " {'text': '\\n\\nTechCore Solutions.'},\n",
+       " {'text': '\\n\\nFootwear Factory.'}]"
+      ]
+     },
+     "execution_count": 5,
+     "metadata": {},
+     "output_type": "execute_result"
+    }
+   ],
+   "source": [
+    "input_list = [\n",
+    "    {\"product\": \"socks\"},\n",
+    "    {\"product\": \"computer\"},\n",
+    "    {\"product\": \"shoes\"}\n",
+    "]\n",
+    "llm_chain.apply(input_list)"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "id": "18624d04-474a-425e-bcf3-58748b747e08",
+   "metadata": {},
+   "source": [
+    "- `generate` is similar to `apply`, except it returns an `LLMResult` instead of a string. The `LLMResult` often contains useful generation info, such as token usage and the finish reason."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 6,
+   "id": "67e72139-686d-40eb-9c1e-4342d3b1abfe",
+   "metadata": {},
+   "outputs": [
+    {
+     "data": {
+      "text/plain": [
+       "LLMResult(generations=[[Generation(text='\\n\\nSocktastic!', generation_info={'finish_reason': 'stop', 'logprobs': None})], [Generation(text='\\n\\nTechCore Solutions.', generation_info={'finish_reason': 'stop', 'logprobs': None})], [Generation(text='\\n\\nFootwear Factory.', generation_info={'finish_reason': 'stop', 'logprobs': None})]], llm_output={'token_usage': {'completion_tokens': 19, 'prompt_tokens': 36, 'total_tokens': 55}, 'model_name': 'text-davinci-003'}, run=[RunInfo(run_id=UUID('9a423a43-6d35-4e8f-9aca-cacfc8e0dc49')), RunInfo(run_id=UUID('a879c077-b521-461c-8f29-ba63adfc327c')), RunInfo(run_id=UUID('40b892fa-e8c2-47d0-a309-4f7a4ed5b64a'))])"
+      ]
+     },
+     "execution_count": 6,
+     "metadata": {},
+     "output_type": "execute_result"
+    }
+   ],
+   "source": [
+    "llm_chain.generate(input_list)"
+   ]
+  },
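+  {
+   "cell_type": "markdown",
+   "id": "9d0e2a7c-5f3b-4a8d-b1c6-7e4f2d8a9b0c",
+   "metadata": {},
+   "source": [
+    "For instance, the token usage shown above can be read back off of the result's `llm_output` attribute. A sketch (the exact keys in `llm_output` are provider-specific):"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "id": "2a6b8c4d-1e9f-4b3a-a7d5-0c8e6f2b4d1a",
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "# llm_output carries provider-specific metadata, e.g. token usage for OpenAI\n",
+    "result = llm_chain.generate(input_list)\n",
+    "result.llm_output[\"token_usage\"]"
+   ]
+  },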
+  {
+   "cell_type": "markdown",
+   "id": "0480da3a-865d-4ec5-9366-e29c3967fef3",
+   "metadata": {},
+   "source": [
+    "- `predict` is similar to the `run` method, except that the input keys are specified as keyword arguments instead of a Python dict."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 7,
+   "id": "f4afb8a4-9113-4082-85cb-55a2d406c99a",
+   "metadata": {},
+   "outputs": [
+    {
+     "data": {
+      "text/plain": [
+       "'\\n\\nSocktastic!'"
+      ]
+     },
+     "execution_count": 7,
+     "metadata": {},
+     "output_type": "execute_result"
+    }
+   ],
+   "source": [
+    "# Single input example\n",
+    "llm_chain.predict(product=\"colorful socks\")"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 8,
+   "id": "e58eaab1-4db4-43cb-b523-7b3380332cad",
+   "metadata": {},
+   "outputs": [
+    {
+     "data": {
+      "text/plain": [
+       "'\\n\\nQ: What did the duck say when his friend died?\\nA: Quack, quack, goodbye.'"
+      ]
+     },
+     "execution_count": 8,
+     "metadata": {},
+     "output_type": "execute_result"
+    }
+   ],
+   "source": [
+    "# Multiple inputs example\n",
+    "template = \"\"\"Tell me a {adjective} joke about {subject}.\"\"\"\n",
+    "prompt = PromptTemplate(template=template, input_variables=[\"adjective\", \"subject\"])\n",
+    "llm_chain = LLMChain(prompt=prompt, llm=OpenAI(temperature=0))\n",
+    "\n",
+    "llm_chain.predict(adjective=\"sad\", subject=\"ducks\")"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "id": "63f02d9e-6470-41d3-b91c-b064baf84733",
+   "metadata": {},
+   "source": [
+    "### Parsing the outputs\n",
+    "\n",
+    "By default, `LLMChain` does not parse the output even if the underlying `prompt` object has an output parser. If you would like to apply that output parser to the LLM output, use `predict_and_parse` instead of `predict` and `apply_and_parse` instead of `apply`.\n",
+    "\n",
+    "With `predict`:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 9,
+   "id": "134126ca-2f1c-4829-94ba-810d91c92138",
+   "metadata": {},
+   "outputs": [
+    {
+     "data": {
+      "text/plain": [
+       "'\\n\\nRed, orange, yellow, green, blue, indigo, violet'"
+      ]
+     },
+     "execution_count": 9,
+     "metadata": {},
+     "output_type": "execute_result"
+    }
+   ],
+   "source": [
+    "from langchain.output_parsers import CommaSeparatedListOutputParser\n",
+    "\n",
+    "output_parser = CommaSeparatedListOutputParser()\n",
+    "template = \"\"\"List all the colors in a rainbow\"\"\"\n",
+    "prompt = PromptTemplate(template=template, input_variables=[], output_parser=output_parser)\n",
+    "llm_chain = LLMChain(prompt=prompt, llm=llm)\n",
+    "\n",
+    "llm_chain.predict()"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "id": "7a46f1e8-daaf-43d6-8045-9b187655631b",
+   "metadata": {},
+   "source": [
+    "With `predict_and_parse`:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 10,
+   "id": "7ef9b74d-7ef5-4b80-80cc-f8226f79259b",
+   "metadata": {},
+   "outputs": [
+    {
+     "name": "stderr",
+     "output_type": "stream",
+     "text": [
+      "/Users/bagatur/langchain/libs/langchain/langchain/chains/llm.py:280: UserWarning: The predict_and_parse method is deprecated, instead pass an output parser directly to LLMChain.\n",
+      "  warnings.warn(\n"
+     ]
+    },
+    {
+     "data": {
+      "text/plain": [
+       "['Red', 'orange', 'yellow', 'green', 'blue', 'indigo', 'violet']"
+      ]
+     },
+     "execution_count": 10,
+     "metadata": {},
+     "output_type": "execute_result"
+    }
+   ],
+   "source": [
+    "llm_chain.predict_and_parse()"
+   ]
+  },
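+  {
+   "cell_type": "markdown",
+   "id": "4e1f7a2b-8c3d-4d6e-9f0a-5b2c7d8e1f3a",
+   "metadata": {},
+   "source": [
+    "As the deprecation warning suggests, the preferred pattern is to make the parser part of the chain itself. With LCEL that is just one more pipe; a minimal sketch reusing the prompt, model, and parser from above:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "id": "6b3c9d1e-2f4a-4c5b-8d7e-9a0b1c2d3e4f",
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "# LCEL equivalent: compose the output parser directly into the chain\n",
+    "runnable = prompt | llm | output_parser\n",
+    "runnable.invoke({})  # this prompt takes no input variables"
+   ]
+  },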
+  {
+   "cell_type": "markdown",
+   "id": "93446f7f-0a2d-4fc5-99a1-a26cc0605b4b",
+   "metadata": {},
+   "source": [
+    "### Initialize from string\n",
+    "\n",
+    "You can also construct an `LLMChain` from a string template directly."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 11,
+   "id": "7e324174-e8ab-4095-87cb-17874a058da9",
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "template = \"\"\"Tell me a {adjective} joke about {subject}.\"\"\"\n",
+    "llm_chain = LLMChain.from_string(llm=llm, template=template)"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 12,
+   "id": "a4f10407-6519-4174-89fe-e7507765f1ae",
+   "metadata": {},
+   "outputs": [
+    {
+     "data": {
+      "text/plain": [
+       "'\\n\\nQ: What did the duck say when his friend died?\\nA: Quack, quack, goodbye.'"
+      ]
+     },
+     "execution_count": 12,
+     "metadata": {},
+     "output_type": "execute_result"
+    }
+   ],
+   "source": [
+    "llm_chain.predict(adjective=\"sad\", subject=\"ducks\")"
+   ]
+  },
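+  {
+   "cell_type": "markdown",
+   "id": "0f5a6b7c-3d2e-4a1b-8c9d-6e7f8a9b0c1d",
+   "metadata": {},
+   "source": [
+    "`from_string` builds the `PromptTemplate` for you, inferring the input variables from the `{...}` placeholders in the template string. You can check the inferred variables on the resulting chain (an unexecuted sketch):"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "id": "7d8e9f0a-4b5c-4d6e-a1b2-3c4d5e6f7a8b",
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "# For the template above this should be ['adjective', 'subject']\n",
+    "llm_chain.prompt.input_variables"
+   ]
+  }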
+ ],
+ "metadata": {
+  "kernelspec": {
+   "display_name": "Python 3 (ipykernel)",
+   "language": "python",
+   "name": "python3"
+  },
+  "language_info": {
+   "codemirror_mode": {
+    "name": "ipython",
+    "version": 3
+   },
+   "file_extension": ".py",
+   "mimetype": "text/x-python",
+   "name": "python",
+   "nbconvert_exporter": "python",
+   "pygments_lexer": "ipython3",
+   "version": "3.9.1"
+  }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 5
+}
diff --git a/docs/docs/modules/chains/foundational/llm_chain.mdx b/docs/docs/modules/chains/foundational/llm_chain.mdx
deleted file mode 100644
index a1d7c0bea3..0000000000
--- a/docs/docs/modules/chains/foundational/llm_chain.mdx
+++ /dev/null
@@ -1,171 +0,0 @@
-# LLM
-
-An `LLMChain` is a simple chain that adds some functionality around language models. It is used widely throughout LangChain, including in other chains and agents.
-
-An `LLMChain` consists of a `PromptTemplate` and a language model (either an LLM or chat model). It formats the prompt template using the input key values provided (and also memory key values, if available), passes the formatted string to LLM and returns the LLM output.
-
-## Get started
-
-```python
-from langchain.prompts import PromptTemplate
-from langchain.llms import OpenAI
-from langchain.chains import LLMChain
-
-prompt_template = "What is a good name for a company that makes {product}?"
-
-llm = OpenAI(temperature=0)
-llm_chain = LLMChain(
-    llm=llm,
-    prompt=PromptTemplate.from_template(prompt_template)
-)
-llm_chain("colorful socks")
-```
-
-
-```
-    {'product': 'colorful socks', 'text': '\n\nSocktastic!'}
-```
-
-
-## Additional ways of running `LLMChain`
-
-Aside from `__call__` and `run` methods shared by all `Chain` object, `LLMChain` offers a few more ways of calling the chain logic:
-
-- `apply` allows you run the chain against a list of inputs:
-
-
-```python
-input_list = [
-    {"product": "socks"},
-    {"product": "computer"},
-    {"product": "shoes"}
-]
-
-llm_chain.apply(input_list)
-```
-
-
-```
-    [{'text': '\n\nSocktastic!'},
-     {'text': '\n\nTechCore Solutions.'},
-     {'text': '\n\nFootwear Factory.'}]
-```
-
-
-- `generate` is similar to `apply`, except it return an `LLMResult` instead of string. `LLMResult` often contains useful generation such as token usages and finish reason.
-
-
-```python
-llm_chain.generate(input_list)
-```
-
-
-```
-    LLMResult(generations=[[Generation(text='\n\nSocktastic!', generation_info={'finish_reason': 'stop', 'logprobs': None})], [Generation(text='\n\nTechCore Solutions.', generation_info={'finish_reason': 'stop', 'logprobs': None})], [Generation(text='\n\nFootwear Factory.', generation_info={'finish_reason': 'stop', 'logprobs': None})]], llm_output={'token_usage': {'prompt_tokens': 36, 'total_tokens': 55, 'completion_tokens': 19}, 'model_name': 'text-davinci-003'})
-```
-
-
-- `predict` is similar to `run` method except that the input keys are specified as keyword arguments instead of a Python dict.
-
-
-```python
-# Single input example
-llm_chain.predict(product="colorful socks")
-```
-
-
-```
-    '\n\nSocktastic!'
-```
-
-
-```python
-# Multiple inputs example
-
-template = """Tell me a {adjective} joke about {subject}."""
-prompt = PromptTemplate(template=template, input_variables=["adjective", "subject"])
-llm_chain = LLMChain(prompt=prompt, llm=OpenAI(temperature=0))
-
-llm_chain.predict(adjective="sad", subject="ducks")
-```
-
-
-```
-    '\n\nQ: What did the duck say when his friend died?\nA: Quack, quack, goodbye.'
-```
-
-
-## Parsing the outputs
-
-By default, `LLMChain` does not parse the output even if the underlying `prompt` object has an output parser. If you would like to apply that output parser on the LLM output, use `predict_and_parse` instead of `predict` and `apply_and_parse` instead of `apply`.
-
-With `predict`:
-
-
-```python
-from langchain.output_parsers import CommaSeparatedListOutputParser
-
-output_parser = CommaSeparatedListOutputParser()
-template = """List all the colors in a rainbow"""
-prompt = PromptTemplate(template=template, input_variables=[], output_parser=output_parser)
-llm_chain = LLMChain(prompt=prompt, llm=llm)
-
-llm_chain.predict()
-```
-
-
-```
-    '\n\nRed, orange, yellow, green, blue, indigo, violet'
-```
-
-
-With `predict_and_parse`:
-
-
-```python
-llm_chain.predict_and_parse()
-```
-
-
-```
-    ['Red', 'orange', 'yellow', 'green', 'blue', 'indigo', 'violet']
-```
-
-
-## Initialize from string
-
-You can also construct an `LLMChain` from a string template directly.
-
-
-```python
-template = """Tell me a {adjective} joke about {subject}."""
-llm_chain = LLMChain.from_string(llm=llm, template=template)
-```
-
-
-```python
-llm_chain.predict(adjective="sad", subject="ducks")
-```
-
-
-```
-    '\n\nQ: What did the duck say when his friend died?\nA: Quack, quack, goodbye.'
-```
-