diff --git a/docs/docs/modules/chains/how_to/async_chain.ipynb b/docs/docs/modules/chains/how_to/async_chain.ipynb index 32fb65009f..1804725a0e 100644 --- a/docs/docs/modules/chains/how_to/async_chain.ipynb +++ b/docs/docs/modules/chains/how_to/async_chain.ipynb @@ -7,9 +7,17 @@ "source": [ "# Async API\n", "\n", - "LangChain provides async support for Chains by leveraging the [asyncio](https://docs.python.org/3/library/asyncio.html) library.\n", - "\n", - "Async methods are currently supported in `LLMChain` (through `arun`, `apredict`, `acall`) and `LLMMathChain` (through `arun` and `acall`), `ChatVectorDBChain`, and [QA chains](/docs/use_cases/question_answering/question_answering). Async support for other chains is on the roadmap." + "LangChain provides async support by leveraging the [asyncio](https://docs.python.org/3/library/asyncio.html) library." ] }, { "cell_type": "raw", "id": "0c0f45ed-9cef-4798-975c-d2912a248591", "metadata": {}, "source": [ ":::info\n", "Async support is built into all `Runnable` objects (the building block of [LangChain Expression Language (LCEL)](/docs/expression_language)) by default. Using LCEL is preferred to using `Chain`s. Head to [Interface](/docs/expression_language/interface) for more on the `Runnable` interface.\n", ":::" ] }, { @@ -125,7 +133,7 @@ "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", - "version": "3.11.3" + "version": "3.9.1" } }, "nbformat": 4, diff --git a/docs/docs/modules/chains/how_to/debugging.mdx b/docs/docs/modules/chains/how_to/debugging.mdx deleted file mode 100644 index 5b4b2ea37a..0000000000 --- a/docs/docs/modules/chains/how_to/debugging.mdx +++ /dev/null @@ -1,35 +0,0 @@ -# Debugging chains - -It can be hard to debug a `Chain` object solely from its output as most `Chain` objects involve a fair amount of input prompt preprocessing and LLM output post-processing. - -Setting `verbose` to `True` will print out some internal states of the `Chain` object while it is being ran. - -```python -conversation = ConversationChain( - llm=chat, - memory=ConversationBufferMemory(), - verbose=True -) -conversation.run("What is ChatGPT?") -``` - - - -``` - > Entering new ConversationChain chain... - Prompt after formatting: - The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know. - - Current conversation: - - Human: What is ChatGPT? - AI: - - > Finished chain. - - 'ChatGPT is an AI language model developed by OpenAI. It is based on the GPT-3 architecture and is capable of generating human-like responses to text prompts. ChatGPT has been trained on a massive amount of text data and can understand and respond to a wide range of topics. It is often used for chatbots, virtual assistants, and other conversational AI applications.' -``` - - - - diff --git a/docs/docs/modules/chains/how_to/from_hub.ipynb b/docs/docs/modules/chains/how_to/from_hub.ipynb deleted file mode 100644 index e862b527a1..0000000000 --- a/docs/docs/modules/chains/how_to/from_hub.ipynb +++ /dev/null @@ -1,168 +0,0 @@ -{ - "cells": [ - { - "cell_type": "markdown", - "id": "25c90e9e", - "metadata": {}, - "source": [ - "# Loading from LangChainHub\n", - "\n", - "This notebook covers how to load chains from [LangChainHub](https://github.com/hwchase17/langchain-hub)." 
- ] - }, - { - "cell_type": "code", - "execution_count": 5, - "id": "8b54479e", - "metadata": {}, - "outputs": [], - "source": [ - "from langchain.chains import load_chain\n", - "\n", - "chain = load_chain(\"lc://chains/llm-math/chain.json\")" - ] - }, - { - "cell_type": "code", - "execution_count": 3, - "id": "4828f31f", - "metadata": {}, - "outputs": [ - { - "name": "stdout", - "output_type": "stream", - "text": [ - "\n", - "\n", - "\u001b[1m> Entering new LLMMathChain chain...\u001b[0m\n", - "whats 2 raised to .12\u001b[32;1m\u001b[1;3m\n", - "Answer: 1.0791812460476249\u001b[0m\n", - "\u001b[1m> Finished chain.\u001b[0m\n" - ] - }, - { - "data": { - "text/plain": [ - "'Answer: 1.0791812460476249'" - ] - }, - "execution_count": 3, - "metadata": {}, - "output_type": "execute_result" - } - ], - "source": [ - "chain.run(\"whats 2 raised to .12\")" - ] - }, - { - "cell_type": "markdown", - "id": "8db72cda", - "metadata": {}, - "source": [ - "Sometimes chains will require extra arguments that were not serialized with the chain. For example, a chain that does question answering over a vector database will require a vector database." - ] - }, - { - "cell_type": "code", - "execution_count": 1, - "id": "aab39528", - "metadata": {}, - "outputs": [], - "source": [ - "from langchain.embeddings.openai import OpenAIEmbeddings\n", - "from langchain.vectorstores import Chroma\n", - "from langchain.text_splitter import CharacterTextSplitter\n", - "from langchain.llms import OpenAI\nfrom langchain.chains import VectorDBQA" - ] - }, - { - "cell_type": "code", - "execution_count": 3, - "id": "16a85d5e", - "metadata": {}, - "outputs": [ - { - "name": "stdout", - "output_type": "stream", - "text": [ - "Running Chroma using direct local API.\n", - "Using DuckDB in-memory for database. 
Data will be transient.\n" - ] - } - ], - "source": [ - "from langchain.document_loaders import TextLoader\n", - "\n", - "loader = TextLoader(\"../../state_of_the_union.txt\")\n", - "documents = loader.load()\n", - "text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)\n", - "texts = text_splitter.split_documents(documents)\n", - "\n", - "embeddings = OpenAIEmbeddings()\n", - "vectorstore = Chroma.from_documents(texts, embeddings)" - ] - }, - { - "cell_type": "code", - "execution_count": 6, - "id": "6a82e91e", - "metadata": {}, - "outputs": [], - "source": [ - "chain = load_chain(\"lc://chains/vector-db-qa/stuff/chain.json\", vectorstore=vectorstore)" - ] - }, - { - "cell_type": "code", - "execution_count": 7, - "id": "efe9b25b", - "metadata": {}, - "outputs": [ - { - "data": { - "text/plain": [ - "\" The president said that Ketanji Brown Jackson is a Circuit Court of Appeals Judge, one of the nation's top legal minds, a former top litigator in private practice, a former federal public defender, has received a broad range of support from the Fraternal Order of Police to former judges appointed by Democrats and Republicans, and will continue Justice Breyer's legacy of excellence.\"" - ] - }, - "execution_count": 7, - "metadata": {}, - "output_type": "execute_result" - } - ], - "source": [ - "query = \"What did the president say about Ketanji Brown Jackson\"\n", - "chain.run(query)" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "id": "f910a32f", - "metadata": {}, - "outputs": [], - "source": [] - } - ], - "metadata": { - "kernelspec": { - "display_name": "Python 3 (ipykernel)", - "language": "python", - "name": "python3" - }, - "language_info": { - "codemirror_mode": { - "name": "ipython", - "version": 3 - }, - "file_extension": ".py", - "mimetype": "text/x-python", - "name": "python", - "nbconvert_exporter": "python", - "pygments_lexer": "ipython3", - "version": "3.9.1" - } - }, - "nbformat": 4, - "nbformat_minor": 5 -} diff --git a/docs/docs/modules/chains/how_to/llm.json b/docs/docs/modules/chains/how_to/llm.json deleted file mode 100644 index f843c42d27..0000000000 --- a/docs/docs/modules/chains/how_to/llm.json +++ /dev/null @@ -1,13 +0,0 @@ -{ - "model_name": "text-davinci-003", - "temperature": 0.0, - "max_tokens": 256, - "top_p": 1, - "frequency_penalty": 0, - "presence_penalty": 0, - "n": 1, - "best_of": 1, - "request_timeout": null, - "logit_bias": {}, - "_type": "openai" -} \ No newline at end of file diff --git a/docs/docs/modules/chains/how_to/llm_chain.json b/docs/docs/modules/chains/how_to/llm_chain.json deleted file mode 100644 index 6c907bcd57..0000000000 --- a/docs/docs/modules/chains/how_to/llm_chain.json +++ /dev/null @@ -1,27 +0,0 @@ -{ - "memory": null, - "verbose": true, - "prompt": { - "input_variables": [ - "question" - ], - "output_parser": null, - "template": "Question: {question}\n\nAnswer: Let's think step by step.", - "template_format": "f-string" - }, - "llm": { - "model_name": "text-davinci-003", - "temperature": 0.0, - "max_tokens": 256, - "top_p": 1, - "frequency_penalty": 0, - "presence_penalty": 0, - "n": 1, - "best_of": 1, - "request_timeout": null, - "logit_bias": {}, - "_type": "openai" - }, - "output_key": "text", - "_type": "llm_chain" -} \ No newline at end of file diff --git a/docs/docs/modules/chains/how_to/llm_chain_separate.json b/docs/docs/modules/chains/how_to/llm_chain_separate.json deleted file mode 100644 index 340d813db2..0000000000 --- a/docs/docs/modules/chains/how_to/llm_chain_separate.json +++ 
/dev/null @@ -1,8 +0,0 @@ -{ - "memory": null, - "verbose": true, - "prompt_path": "prompt.json", - "llm_path": "llm.json", - "output_key": "text", - "_type": "llm_chain" -} \ No newline at end of file diff --git a/docs/docs/modules/chains/how_to/prompt.json b/docs/docs/modules/chains/how_to/prompt.json deleted file mode 100644 index aceb330e2c..0000000000 --- a/docs/docs/modules/chains/how_to/prompt.json +++ /dev/null @@ -1,8 +0,0 @@ -{ - "input_variables": [ - "question" - ], - "output_parser": null, - "template": "Question: {question}\n\nAnswer: Let's think step by step.", - "template_format": "f-string" -} \ No newline at end of file diff --git a/docs/docs/modules/chains/how_to/serialization.ipynb b/docs/docs/modules/chains/how_to/serialization.ipynb deleted file mode 100644 index 555ff1beaa..0000000000 --- a/docs/docs/modules/chains/how_to/serialization.ipynb +++ /dev/null @@ -1,378 +0,0 @@ -{ - "cells": [ - { - "cell_type": "markdown", - "id": "cbe47c3a", - "metadata": {}, - "source": [ - "# Serialization\n", - "This notebook covers how to serialize chains to and from disk. The serialization format we use is JSON or YAML. Currently, only some chains support this type of serialization. We will grow the number of supported chains over time.\n" - ] - }, - { - "cell_type": "markdown", - "id": "e4a8a447", - "metadata": {}, - "source": [ - "## Saving a chain to disk\n", - "First, let's go over how to save a chain to disk. This can be done with the `.save` method, and specifying a file path with a `.json` or `.yaml` extension." - ] - }, - { - "cell_type": "code", - "execution_count": 1, - "id": "26e28451", - "metadata": {}, - "outputs": [], - "source": [ - "from langchain.prompts import PromptTemplate\nfrom langchain.llms import OpenAI\nfrom langchain.chains import LLMChain\n", - "\n", - "template = \"\"\"Question: {question}\n", - "\n", - "Answer: Let's think step by step.\"\"\"\n", - "prompt = PromptTemplate(template=template, input_variables=[\"question\"])\n", - "llm_chain = LLMChain(prompt=prompt, llm=OpenAI(temperature=0), verbose=True)" - ] - }, - { - "cell_type": "code", - "execution_count": 2, - "id": "bfa18e1f", - "metadata": {}, - "outputs": [], - "source": [ - "llm_chain.save(\"llm_chain.json\")" - ] - }, - { - "cell_type": "markdown", - "id": "ea82665d", - "metadata": {}, - "source": [ - "Let's now take a look at what's inside this saved file:" - ] - }, - { - "cell_type": "code", - "execution_count": 3, - "id": "0fd33328", - "metadata": {}, - "outputs": [ - { - "name": "stdout", - "output_type": "stream", - "text": [ - "{\r\n", - " \"memory\": null,\r\n", - " \"verbose\": true,\r\n", - " \"prompt\": {\r\n", - " \"input_variables\": [\r\n", - " \"question\"\r\n", - " ],\r\n", - " \"output_parser\": null,\r\n", - " \"template\": \"Question: {question}\\n\\nAnswer: Let's think step by step.\",\r\n", - " \"template_format\": \"f-string\"\r\n", - " },\r\n", - " \"llm\": {\r\n", - " \"model_name\": \"text-davinci-003\",\r\n", - " \"temperature\": 0.0,\r\n", - " \"max_tokens\": 256,\r\n", - " \"top_p\": 1,\r\n", - " \"frequency_penalty\": 0,\r\n", - " \"presence_penalty\": 0,\r\n", - " \"n\": 1,\r\n", - " \"best_of\": 1,\r\n", - " \"request_timeout\": null,\r\n", - " \"logit_bias\": {},\r\n", - " \"_type\": \"openai\"\r\n", - " },\r\n", - " \"output_key\": \"text\",\r\n", - " \"_type\": \"llm_chain\"\r\n", - "}" - ] - } - ], - "source": [ - "!cat llm_chain.json" - ] - }, - { - "cell_type": "markdown", - "id": "2012c724", - "metadata": {}, - "source": [ - "## Loading a chain from 
disk\n", - "We can load a chain from disk by using the `load_chain` method." - ] - }, - { - "cell_type": "code", - "execution_count": 4, - "id": "342a1974", - "metadata": {}, - "outputs": [], - "source": [ - "from langchain.chains import load_chain" - ] - }, - { - "cell_type": "code", - "execution_count": 5, - "id": "394b7da8", - "metadata": {}, - "outputs": [], - "source": [ - "chain = load_chain(\"llm_chain.json\")" - ] - }, - { - "cell_type": "code", - "execution_count": 6, - "id": "20d99787", - "metadata": {}, - "outputs": [ - { - "name": "stdout", - "output_type": "stream", - "text": [ - "\n", - "\n", - "\u001b[1m> Entering new LLMChain chain...\u001b[0m\n", - "Prompt after formatting:\n", - "\u001b[32;1m\u001b[1;3mQuestion: whats 2 + 2\n", - "\n", - "Answer: Let's think step by step.\u001b[0m\n", - "\n", - "\u001b[1m> Finished chain.\u001b[0m\n" - ] - }, - { - "data": { - "text/plain": [ - "' 2 + 2 = 4'" - ] - }, - "execution_count": 6, - "metadata": {}, - "output_type": "execute_result" - } - ], - "source": [ - "chain.run(\"whats 2 + 2\")" - ] - }, - { - "cell_type": "markdown", - "id": "14449679", - "metadata": {}, - "source": [ - "## Saving components separately\n", - "In the above example, we can see that the prompt and LLM configuration information is saved in the same JSON as the overall chain. Alternatively, we can split them up and save them separately. This is often useful to make the saved components more modular. In order to do this, we just need to specify `llm_path` instead of the `llm` component, and `prompt_path` instead of the `prompt` component." - ] - }, - { - "cell_type": "code", - "execution_count": 7, - "id": "50ec35ab", - "metadata": {}, - "outputs": [], - "source": [ - "llm_chain.prompt.save(\"prompt.json\")" - ] - }, - { - "cell_type": "code", - "execution_count": 8, - "id": "c48b39aa", - "metadata": {}, - "outputs": [ - { - "name": "stdout", - "output_type": "stream", - "text": [ - "{\r\n", - " \"input_variables\": [\r\n", - " \"question\"\r\n", - " ],\r\n", - " \"output_parser\": null,\r\n", - " \"template\": \"Question: {question}\\n\\nAnswer: Let's think step by step.\",\r\n", - " \"template_format\": \"f-string\"\r\n", - "}" - ] - } - ], - "source": [ - "!cat prompt.json" - ] - }, - { - "cell_type": "code", - "execution_count": 9, - "id": "13c92944", - "metadata": {}, - "outputs": [], - "source": [ - "llm_chain.llm.save(\"llm.json\")" - ] - }, - { - "cell_type": "code", - "execution_count": 10, - "id": "1b815f89", - "metadata": {}, - "outputs": [ - { - "name": "stdout", - "output_type": "stream", - "text": [ - "{\r\n", - " \"model_name\": \"text-davinci-003\",\r\n", - " \"temperature\": 0.0,\r\n", - " \"max_tokens\": 256,\r\n", - " \"top_p\": 1,\r\n", - " \"frequency_penalty\": 0,\r\n", - " \"presence_penalty\": 0,\r\n", - " \"n\": 1,\r\n", - " \"best_of\": 1,\r\n", - " \"request_timeout\": null,\r\n", - " \"logit_bias\": {},\r\n", - " \"_type\": \"openai\"\r\n", - "}" - ] - } - ], - "source": [ - "!cat llm.json" - ] - }, - { - "cell_type": "code", - "execution_count": 11, - "id": "7e6aa9ab", - "metadata": {}, - "outputs": [], - "source": [ - "config = {\n", - " \"memory\": None,\n", - " \"verbose\": True,\n", - " \"prompt_path\": \"prompt.json\",\n", - " \"llm_path\": \"llm.json\",\n", - " \"output_key\": \"text\",\n", - " \"_type\": \"llm_chain\",\n", - "}\n", - "import json\n", - "\n", - "with open(\"llm_chain_separate.json\", \"w\") as f:\n", - " json.dump(config, f, indent=2)" - ] - }, - { - "cell_type": "code", - "execution_count": 12, - "id": 
"8e959ca6", - "metadata": {}, - "outputs": [ - { - "name": "stdout", - "output_type": "stream", - "text": [ - "{\r\n", - " \"memory\": null,\r\n", - " \"verbose\": true,\r\n", - " \"prompt_path\": \"prompt.json\",\r\n", - " \"llm_path\": \"llm.json\",\r\n", - " \"output_key\": \"text\",\r\n", - " \"_type\": \"llm_chain\"\r\n", - "}" - ] - } - ], - "source": [ - "!cat llm_chain_separate.json" - ] - }, - { - "cell_type": "markdown", - "id": "662731c0", - "metadata": {}, - "source": [ - "We can then load it in the same way:" - ] - }, - { - "cell_type": "code", - "execution_count": 13, - "id": "d69ceb93", - "metadata": {}, - "outputs": [], - "source": [ - "chain = load_chain(\"llm_chain_separate.json\")" - ] - }, - { - "cell_type": "code", - "execution_count": 15, - "id": "a99d61b9", - "metadata": {}, - "outputs": [ - { - "name": "stdout", - "output_type": "stream", - "text": [ - "\n", - "\n", - "\u001b[1m> Entering new LLMChain chain...\u001b[0m\n", - "Prompt after formatting:\n", - "\u001b[32;1m\u001b[1;3mQuestion: whats 2 + 2\n", - "\n", - "Answer: Let's think step by step.\u001b[0m\n", - "\n", - "\u001b[1m> Finished chain.\u001b[0m\n" - ] - }, - { - "data": { - "text/plain": [ - "' 2 + 2 = 4'" - ] - }, - "execution_count": 15, - "metadata": {}, - "output_type": "execute_result" - } - ], - "source": [ - "chain.run(\"whats 2 + 2\")" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "id": "822b7c12", - "metadata": {}, - "outputs": [], - "source": [] - } - ], - "metadata": { - "kernelspec": { - "display_name": "Python 3 (ipykernel)", - "language": "python", - "name": "python3" - }, - "language_info": { - "codemirror_mode": { - "name": "ipython", - "version": 3 - }, - "file_extension": ".py", - "mimetype": "text/x-python", - "name": "python", - "nbconvert_exporter": "python", - "pygments_lexer": "ipython3", - "version": "3.11.3" - } - }, - "nbformat": 4, - "nbformat_minor": 5 -}