{
"cells": [
{
"cell_type": "markdown",
"id": "3ea857b1",
"metadata": {},
"source": [
"# Use local LLMs\n",
"\n",
"The popularity of projects like [PrivateGPT](https://github.com/imartinez/privateGPT), [llama.cpp](https://github.com/ggerganov/llama.cpp), and [GPT4All](https://github.com/nomic-ai/gpt4all) underscore the importance of running LLMs locally.\n",
"\n",
"LangChain has [integrations](https://integrations.langchain.com/) with many open source LLMs that can be run locally.\n",
"\n",
"See [here](docs/guides/local_llms) for setup instructions for these LLMs. \n",
"\n",
"For example, here we show how to run `GPT4All` or `LLaMA2` locally (e.g., on your laptop) using local embeddings and a local LLM.\n",
"\n",
"## Document Loading \n",
"\n",
"First, install packages needed for local embeddings and vector storage."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "a7dc1ec5",
"metadata": {},
"outputs": [],
"source": [
"pip install gpt4all chromadb langchainhub"
]
},
{
"cell_type": "markdown",
"id": "5e7543fa",
"metadata": {},
"source": [
"Load and split an example docucment.\n",
"\n",
"We'll use a blog post on agents as an example."
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "f8cf5765",
"metadata": {},
"outputs": [],
"source": [
"from langchain.document_loaders import WebBaseLoader\n",
"\n",
"loader = WebBaseLoader(\"https://lilianweng.github.io/posts/2023-06-23-agent/\")\n",
"data = loader.load()\n",
"\n",
"from langchain.text_splitter import RecursiveCharacterTextSplitter\n",
"\n",
"text_splitter = RecursiveCharacterTextSplitter(chunk_size=500, chunk_overlap=0)\n",
"all_splits = text_splitter.split_documents(data)"
]
},
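{
"cell_type": "markdown",
"id": "1f2a9c3d",
"metadata": {},
"source": [
"Optionally, check how many chunks the splitter produced (a quick sanity check before embedding):"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "2b8e4d5f",
"metadata": {},
"outputs": [],
"source": [
"# Number of chunks created by the splitter\n",
"len(all_splits)"
]
},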
{
"cell_type": "markdown",
"id": "131d5059",
"metadata": {},
"source": [
"Next, the below steps will download the `GPT4All` embeddings locally (if you don't already have them)."
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "fdce8923",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Found model file at /Users/rlm/.cache/gpt4all/ggml-all-MiniLM-L6-v2-f16.bin\n"
]
},
{
"name": "stderr",
"output_type": "stream",
"text": [
"objc[49534]: Class GGMLMetalClass is implemented in both /Users/rlm/miniforge3/envs/llama2/lib/python3.9/site-packages/gpt4all/llmodel_DO_NOT_MODIFY/build/libreplit-mainline-metal.dylib (0x131614208) and /Users/rlm/miniforge3/envs/llama2/lib/python3.9/site-packages/gpt4all/llmodel_DO_NOT_MODIFY/build/libllamamodel-mainline-metal.dylib (0x131988208). One of the two will be used. Which one is undefined.\n"
]
}
],
"source": [
"from langchain.vectorstores import Chroma\n",
"from langchain.embeddings import GPT4AllEmbeddings\n",
"\n",
"vectorstore = Chroma.from_documents(documents=all_splits, embedding=GPT4AllEmbeddings())"
]
},
{
"cell_type": "markdown",
"id": "29137915",
"metadata": {},
"source": [
"Test similarity search is working with our local embeddings."
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "b0c55e98",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"4"
]
},
"execution_count": 3,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"question = \"What are the approaches to Task Decomposition?\"\n",
"docs = vectorstore.similarity_search(question)\n",
"len(docs)"
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "32b43339",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"Document(page_content='Task decomposition can be done (1) by LLM with simple prompting like \"Steps for XYZ.\\\\n1.\", \"What are the subgoals for achieving XYZ?\", (2) by using task-specific instructions; e.g. \"Write a story outline.\" for writing a novel, or (3) with human inputs.', metadata={'description': 'Building agents with LLM (large language model) as its core controller is a cool concept. Several proof-of-concepts demos, such as AutoGPT, GPT-Engineer and BabyAGI, serve as inspiring examples. The potentiality of LLM extends beyond generating well-written copies, stories, essays and programs; it can be framed as a powerful general problem solver.\\nAgent System Overview In a LLM-powered autonomous agent system, LLM functions as the agents brain, complemented by several key components:', 'language': 'en', 'source': 'https://lilianweng.github.io/posts/2023-06-23-agent/', 'title': \"LLM Powered Autonomous Agents | Lil'Log\"})"
]
},
"execution_count": 7,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"docs[0]"
]
},
{
"cell_type": "markdown",
"id": "557cd9b8",
"metadata": {},
"source": [
"## Model \n",
"\n",
"### LLaMA2\n",
"\n",
"Note: new versions of `llama-cpp-python` use GGUF model files (see [here](https://github.com/abetlen/llama-cpp-python/pull/633)).\n",
"\n",
"If you have an existing GGML model, see [here](docs/integrations/llms/llamacpp) for instructions for conversion for GGUF. \n",
" \n",
"And / or, you can download a GGUF converted model (e.g., [here](https://huggingface.co/TheBloke)).\n",
"\n",
"Finally, as noted in detail [here](docs/guides/local_llms) install `llama-cpp-python`"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "9f218576",
"metadata": {},
"outputs": [],
"source": [
"pip install llama-cpp-python"
]
},
{
"cell_type": "markdown",
"id": "0dd1804f",
"metadata": {},
"source": [
"To enable use of GPU on Apple Silicon, follow the steps [here](https://github.com/abetlen/llama-cpp-python/blob/main/docs/install/macos.md) to use the Python binding `with Metal support`.\n",
"\n",
"In particular, ensure that `conda` is using the correct virtual enviorment that you created (`miniforge3`).\n",
"\n",
"E.g., for me:\n",
"\n",
"```\n",
"conda activate /Users/rlm/miniforge3/envs/llama\n",
"```\n",
"\n",
"With this confirmed:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "5884779a-957e-4c4c-b447-bc8385edc67e",
"metadata": {},
"outputs": [],
"source": [
"! CMAKE_ARGS=\"-DLLAMA_METAL=on\" FORCE_CMAKE=1 /Users/rlm/miniforge3/envs/llama/bin/pip install -U llama-cpp-python --no-cache-dir"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "cd7164e3",
"metadata": {},
"outputs": [],
"source": [
"from langchain.llms import LlamaCpp\n",
"from langchain.callbacks.manager import CallbackManager\n",
"from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler"
]
},
{
"cell_type": "markdown",
"id": "fcf81052",
"metadata": {},
"source": [
"Setting model parameters as noted in the [llama.cpp docs](https://python.langchain.com/docs/integrations/llms/llamacpp)."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "af1176bb-d52a-4cf0-b983-8b7433d45b4f",
"metadata": {},
"outputs": [],
"source": [
"n_gpu_layers = 1 # Metal set to 1 is enough.\n",
"n_batch = 512 # Should be between 1 and n_ctx, consider the amount of RAM of your Apple Silicon Chip.\n",
"callback_manager = CallbackManager([StreamingStdOutCallbackHandler()])\n",
"\n",
"# Make sure the model path is correct for your system!\n",
"llm = LlamaCpp(\n",
" model_path=\"/Users/rlm/Desktop/Code/llama.cpp/models/llama-2-13b-chat.ggufv3.q4_0.bin\",\n",
" n_gpu_layers=n_gpu_layers,\n",
" n_batch=n_batch,\n",
" n_ctx=2048,\n",
" f16_kv=True, # MUST set to True, otherwise you will run into problem after a couple of calls\n",
" callback_manager=callback_manager,\n",
" verbose=True,\n",
")"
]
},
{
"cell_type": "markdown",
"id": "3831b16a",
"metadata": {},
"source": [
"Note that these indicate that [Metal was enabled properly](https://python.langchain.com/docs/integrations/llms/llamacpp):\n",
"\n",
"```\n",
"ggml_metal_init: allocating\n",
"ggml_metal_init: using MPS\n",
"```"
]
},
{
"cell_type": "code",
"execution_count": 11,
"id": "bf0162e0-8c41-4344-88ae-ff2bbaeb12eb",
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"Llama.generate: prefix-match hit\n"
]
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"by jonathan \n",
"\n",
"Here's the hypothetical rap battle:\n",
"\n",
"[Stephen Colbert]: Yo, this is Stephen Colbert, known for my comedy show. I'm here to put some sense in your mind, like an enema do-go. Your opponent? A man of laughter and witty quips, John Oliver! Now let's see who gets the most laughs while taking shots at each other\n",
"\n",
"[John Oliver]: Yo, this is John Oliver, known for my own comedy show. I'm here to take your mind on an adventure through wit and humor. But first, allow me to you to our contestant: Stephen Colbert! His show has been around since the '90s, but it's time to see who can out-rap whom\n",
"\n",
"[Stephen Colbert]: You claim to be a witty man, John Oliver, with your British charm and clever remarks. But my knows that I'm America's funnyman! Who's the one taking you? Nobody!\n",
"\n",
"[John Oliver]: Hey Stephen Colbert, don't get too cocky. You may"
]
},
{
"name": "stderr",
"output_type": "stream",
"text": [
"\n",
"llama_print_timings: load time = 4481.74 ms\n",
"llama_print_timings: sample time = 183.05 ms / 256 runs ( 0.72 ms per token, 1398.53 tokens per second)\n",
"llama_print_timings: prompt eval time = 456.05 ms / 13 tokens ( 35.08 ms per token, 28.51 tokens per second)\n",
"llama_print_timings: eval time = 7375.20 ms / 255 runs ( 28.92 ms per token, 34.58 tokens per second)\n",
"llama_print_timings: total time = 8388.92 ms\n"
]
},
{
"data": {
"text/plain": [
"\"by jonathan \\n\\nHere's the hypothetical rap battle:\\n\\n[Stephen Colbert]: Yo, this is Stephen Colbert, known for my comedy show. I'm here to put some sense in your mind, like an enema do-go. Your opponent? A man of laughter and witty quips, John Oliver! Now let's see who gets the most laughs while taking shots at each other\\n\\n[John Oliver]: Yo, this is John Oliver, known for my own comedy show. I'm here to take your mind on an adventure through wit and humor. But first, allow me to you to our contestant: Stephen Colbert! His show has been around since the '90s, but it's time to see who can out-rap whom\\n\\n[Stephen Colbert]: You claim to be a witty man, John Oliver, with your British charm and clever remarks. But my knows that I'm America's funnyman! Who's the one taking you? Nobody!\\n\\n[John Oliver]: Hey Stephen Colbert, don't get too cocky. You may\""
]
},
"execution_count": 11,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"llm(\"Simulate a rap battle between Stephen Colbert and John Oliver\")"
]
},
{
"cell_type": "markdown",
"id": "0d9579a7",
"metadata": {},
"source": [
"### GPT4All\n",
"\n",
"Similarly, we can use `GPT4All`.\n",
"\n",
"[Download the GPT4All model binary](https://python.langchain.com/docs/integrations/llms/gpt4all).\n",
"\n",
"The Model Explorer on the [GPT4All](https://gpt4all.io/index.html) is a great way to choose and download a model.\n",
"\n",
"Then, specify the path that you downloaded to to.\n",
"\n",
"E.g., for me, the model lives here:\n",
"\n",
"`/Users/rlm/Desktop/Code/gpt4all/models/nous-hermes-13b.ggmlv3.q4_0.bin`"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "57c1aec0-04c7-479e-b9bf-af3c547ba0a3",
"metadata": {},
"outputs": [],
"source": [
"from langchain.llms import GPT4All\n",
"\n",
"llm = GPT4All(\n",
" model=\"/Users/rlm/Desktop/Code/gpt4all/models/nous-hermes-13b.ggmlv3.q4_0.bin\",\n",
" max_tokens=2048,\n",
")"
]
},
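{
"cell_type": "markdown",
"id": "3c7d1e9a",
"metadata": {},
"source": [
"As with `LlamaCpp`, you can sanity-check the local model with a direct call (reusing the example prompt from above):"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "4d9f2a6b",
"metadata": {},
"outputs": [],
"source": [
"# Direct call to the local GPT4All model (illustrative prompt)\n",
"llm(\"Simulate a rap battle between Stephen Colbert and John Oliver\")"
]
},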
{
"cell_type": "markdown",
"id": "d58838ae",
"metadata": {},
"source": [
"## LLMChain\n",
"\n",
"Run an `LLMChain` (see [here](https://python.langchain.com/docs/modules/chains/foundational/llm_chain)) with either model by passing in the retrieved docs and a simple prompt.\n",
"\n",
"It formats the prompt template using the input key values provided and passes the formatted string to `GPT4All`, `LLama-V2`, or another specified LLM.\n",
" \n",
"In this case, the list of retrieved documents (`docs`) above are pass into `{context}`."
]
},
{
"cell_type": "code",
"execution_count": 27,
"id": "18a3716d",
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"Llama.generate: prefix-match hit\n"
]
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"Based on the retrieved documents, the main themes are:\n",
"1. Task decomposition: The ability to break down complex tasks into smaller subtasks, which can be handled by an LLM or other components of the agent system.\n",
"2. LLM as the core controller: The use of a large language model (LLM) as the primary controller of an autonomous agent system, complemented by other key components such as a knowledge graph and a planner.\n",
"3. Potentiality of LLM: The idea that LLMs have the potential to be used as powerful general problem solvers, not just for generating well-written copies but also for solving complex tasks and achieving human-like intelligence.\n",
"4. Challenges in long-term planning: The challenges in planning over a lengthy history and effectively exploring the solution space, which are important limitations of current LLM-based autonomous agent systems."
]
},
{
"name": "stderr",
"output_type": "stream",
"text": [
"\n",
"llama_print_timings: load time = 1191.88 ms\n",
"llama_print_timings: sample time = 134.47 ms / 193 runs ( 0.70 ms per token, 1435.25 tokens per second)\n",
"llama_print_timings: prompt eval time = 39470.18 ms / 1055 tokens ( 37.41 ms per token, 26.73 tokens per second)\n",
"llama_print_timings: eval time = 8090.85 ms / 192 runs ( 42.14 ms per token, 23.73 tokens per second)\n",
"llama_print_timings: total time = 47943.12 ms\n"
]
},
{
"data": {
"text/plain": [
"'\\nBased on the retrieved documents, the main themes are:\\n1. Task decomposition: The ability to break down complex tasks into smaller subtasks, which can be handled by an LLM or other components of the agent system.\\n2. LLM as the core controller: The use of a large language model (LLM) as the primary controller of an autonomous agent system, complemented by other key components such as a knowledge graph and a planner.\\n3. Potentiality of LLM: The idea that LLMs have the potential to be used as powerful general problem solvers, not just for generating well-written copies but also for solving complex tasks and achieving human-like intelligence.\\n4. Challenges in long-term planning: The challenges in planning over a lengthy history and effectively exploring the solution space, which are important limitations of current LLM-based autonomous agent systems.'"
]
},
"execution_count": 27,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain.prompts import PromptTemplate\nfrom langchain.chains import LLMChain\n",
"\n",
"# Prompt\n",
"prompt = PromptTemplate.from_template(\n",
" \"Summarize the main themes in these retrieved docs: {docs}\"\n",
")\n",
"\n",
"# Chain\n",
"llm_chain = LLMChain(llm=llm, prompt=prompt)\n",
"\n",
"# Run\n",
"question = \"What are the approaches to Task Decomposition?\"\n",
"docs = vectorstore.similarity_search(question)\n",
"result = llm_chain(docs)\n",
"\n",
"# Output\n",
"result[\"text\"]"
]
},
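{
"cell_type": "markdown",
"id": "5e1b3c7d",
"metadata": {},
"source": [
"To see the exact string the chain sends to the model, you can format the prompt yourself; this is purely illustrative, since `LLMChain` performs this formatting internally:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "6f2c4d8e",
"metadata": {},
"outputs": [],
"source": [
"# Illustration only: LLMChain formats the prompt like this before calling the LLM\n",
"print(prompt.format(docs=docs))"
]
},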
{
"cell_type": "markdown",
"id": "ed9cecf8",
"metadata": {},
"source": [
"## QA Chain\n",
"\n",
"We can use a `QA chain` to handle our question above.\n",
"\n",
"`chain_type=\"stuff\"` (see [here](https://python.langchain.com/docs/modules/chains/document/stuff)) means that all the docs will be added (stuffed) into a prompt."
]
},
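{
"cell_type": "markdown",
"id": "7a3d5e9f",
"metadata": {},
"source": [
"As a rough sketch of what \"stuffing\" amounts to (not the chain's internal code), the retrieved documents are simply concatenated into one context string that is placed into the prompt:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "8b4e6f1a",
"metadata": {},
"outputs": [],
"source": [
"# Rough illustration of \"stuff\": join the retrieved docs into a single context string\n",
"stuffed_context = \"\\n\\n\".join(doc.page_content for doc in docs)\n",
"print(stuffed_context[:500])"
]
},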
{
"cell_type": "markdown",
"id": "3cce6977-52e7-4944-89b4-c161d04f6698",
"metadata": {},
"source": [
"We can also use the LangChain Prompt Hub to store and fetch prompts that are model-specific.\n",
"\n",
"This will work with your [LangSmith API key](https://docs.smith.langchain.com/).\n",
"\n",
"Let's try with a default RAG prompt, [here](https://smith.langchain.com/hub/rlm/rag-prompt)."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "cc638992-0924-41c0-8dae-8cf683e72b16",
"metadata": {},
"outputs": [],
"source": [
"pip install langchainhub"
]
},
{
"cell_type": "code",
"execution_count": 28,
"id": "c01c1725",
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"Llama.generate: prefix-match hit\n"
]
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"Task can be done by down a task into smaller subtasks, using simple prompting like \"Steps for XYZ.\" or task-specific like \"Write a story outline\" for writing a novel."
]
},
{
"name": "stderr",
"output_type": "stream",
"text": [
"\n",
"llama_print_timings: load time = 11326.20 ms\n",
"llama_print_timings: sample time = 33.03 ms / 47 runs ( 0.70 ms per token, 1422.86 tokens per second)\n",
"llama_print_timings: prompt eval time = 1387.31 ms / 242 tokens ( 5.73 ms per token, 174.44 tokens per second)\n",
"llama_print_timings: eval time = 1321.62 ms / 46 runs ( 28.73 ms per token, 34.81 tokens per second)\n",
"llama_print_timings: total time = 2801.08 ms\n"
]
},
{
"data": {
"text/plain": [
"{'output_text': '\\nTask can be done by down a task into smaller subtasks, using simple prompting like \"Steps for XYZ.\" or task-specific like \"Write a story outline\" for writing a novel.'}"
]
},
"execution_count": 28,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# Prompt \n",
"from langchain import hub\n",
"rag_prompt = hub.pull(\"rlm/rag-prompt\")\n",
"from langchain.chains.question_answering import load_qa_chain\n",
"# Chain\n",
"chain = load_qa_chain(llm, chain_type=\"stuff\", prompt=rag_prompt)\n",
"# Run\n",
"chain({\"input_documents\": docs, \"question\": question}, return_only_outputs=True)"
]
},
{
"cell_type": "markdown",
"id": "2e5913f0-cf92-4e21-8794-0502ba11b202",
"metadata": {},
"source": [
"Now, let's try with [a prompt specifically for LLaMA](https://smith.langchain.com/hub/rlm/rag-prompt-llama), which [includes special tokens](https://huggingface.co/blog/llama2#how-to-prompt-llama-2)."
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "78f6862d-b7a6-4e03-84e4-45667185bf9b",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"ChatPromptTemplate(input_variables=['question', 'context'], output_parser=None, partial_variables={}, messages=[HumanMessagePromptTemplate(prompt=PromptTemplate(input_variables=['question', 'context'], output_parser=None, partial_variables={}, template=\"[INST]<<SYS>> You are an assistant for question-answering tasks. Use the following pieces of retrieved context to answer the question. If you don't know the answer, just say that you don't know. Use three sentences maximum and keep the answer concise.<</SYS>> \\nQuestion: {question} \\nContext: {context} \\nAnswer: [/INST]\", template_format='f-string', validate_template=True), additional_kwargs={})])"
]
},
"execution_count": 3,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# Prompt\n",
"rag_prompt_llama = hub.pull(\"rlm/rag-prompt-llama\")\n",
"rag_prompt_llama"
]
},
{
"cell_type": "code",
"execution_count": 26,
"id": "67cefb46-acd3-4c2a-a8f6-b62c7c3e30dc",
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"Llama.generate: prefix-match hit\n"
]
},
{
"name": "stdout",
"output_type": "stream",
"text": [
" Sure, I'd be happy to help! Based on the context, here are some to task:\n",
"\n",
"1. LLM with simple prompting: This using a large model (LLM) with simple prompts like \"Steps for XYZ\" or \"What are the subgoals for achieving XYZ?\" to decompose tasks into smaller steps.\n",
"2. Task-specific: Another is to use task-specific, such as \"Write a story outline\" for writing a novel, to guide the of tasks.\n",
"3. Human inputs:, human inputs can be used to supplement the process, in cases where the task a high degree of creativity or expertise.\n",
"\n",
"As fores in long-term and task, one major is that LLMs to adjust plans when faced with errors, making them less robust to humans who learn from trial and error."
]
},
{
"name": "stderr",
"output_type": "stream",
"text": [
"\n",
"llama_print_timings: load time = 11326.20 ms\n",
"llama_print_timings: sample time = 144.81 ms / 207 runs ( 0.70 ms per token, 1429.47 tokens per second)\n",
"llama_print_timings: prompt eval time = 1506.13 ms / 258 tokens ( 5.84 ms per token, 171.30 tokens per second)\n",
"llama_print_timings: eval time = 6231.92 ms / 206 runs ( 30.25 ms per token, 33.06 tokens per second)\n",
"llama_print_timings: total time = 8158.41 ms\n"
]
},
{
"data": {
"text/plain": [
"{'output_text': ' Sure, I\\'d be happy to help! Based on the context, here are some to task:\\n\\n1. LLM with simple prompting: This using a large model (LLM) with simple prompts like \"Steps for XYZ\" or \"What are the subgoals for achieving XYZ?\" to decompose tasks into smaller steps.\\n2. Task-specific: Another is to use task-specific, such as \"Write a story outline\" for writing a novel, to guide the of tasks.\\n3. Human inputs:, human inputs can be used to supplement the process, in cases where the task a high degree of creativity or expertise.\\n\\nAs fores in long-term and task, one major is that LLMs to adjust plans when faced with errors, making them less robust to humans who learn from trial and error.'}"
]
},
"execution_count": 26,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# Chain\n",
"chain = load_qa_chain(llm, chain_type=\"stuff\", prompt=rag_prompt_llama)\n",
"# Run\n",
"chain({\"input_documents\": docs, \"question\": question}, return_only_outputs=True)"
]
},
{
"cell_type": "markdown",
"id": "821729cb",
"metadata": {},
"source": [
"## RetrievalQA\n",
"\n",
"For an even simpler flow, use `RetrievalQA`.\n",
"\n",
"This will use a QA default prompt (shown [here](https://github.com/langchain-ai/langchain/blob/275b926cf745b5668d3ea30236635e20e7866442/langchain/chains/retrieval_qa/prompt.py#L4)) and will retrieve from the vectorDB.\n",
"\n",
"But, you can still pass in a prompt, as before, if desired."
]
},
{
"cell_type": "code",
"execution_count": 29,
"id": "86c7a349",
"metadata": {},
"outputs": [],
"source": [
"from langchain.chains import RetrievalQA\n",
"\n",
"qa_chain = RetrievalQA.from_chain_type(\n",
" llm,\n",
" retriever=vectorstore.as_retriever(),\n",
" chain_type_kwargs={\"prompt\": rag_prompt_llama},\n",
")"
]
},
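{
"cell_type": "markdown",
"id": "9c5f7a2b",
"metadata": {},
"source": [
"The cell above passes the LLaMA-specific prompt via `chain_type_kwargs`. If you omit it, `RetrievalQA` falls back to its default QA prompt; a minimal sketch (using a separate variable name so it doesn't replace the chain above):"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "0d6a8b3c",
"metadata": {},
"outputs": [],
"source": [
"# RetrievalQA with the default QA prompt (no chain_type_kwargs)\n",
"qa_chain_default = RetrievalQA.from_chain_type(\n",
" llm,\n",
" retriever=vectorstore.as_retriever(),\n",
")"
]
},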
{
"cell_type": "code",
"execution_count": 30,
"id": "112ca227",
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"Llama.generate: prefix-match hit\n"
]
},
{
"name": "stdout",
"output_type": "stream",
"text": [
" Sure! Based on the context, here's my answer to your:\n",
"\n",
"There are several to task,:\n",
"\n",
"1. LLM-based with simple prompting, such as \"Steps for XYZ\" or \"What are the subgoals for achieving XYZ?\"\n",
"2. Task-specific, like \"Write a story outline\" for writing a novel.\n",
"3. Human inputs to guide the process.\n",
"\n",
"These can be used to decompose complex tasks into smaller, more manageable subtasks, which can help improve the and effectiveness of task. However, long-term and task can being due to the need to plan over a lengthy history and explore the space., LLMs may to adjust plans when faced with errors, making them less robust to human learners who can learn from trial and error."
]
},
{
"name": "stderr",
"output_type": "stream",
"text": [
"\n",
"llama_print_timings: load time = 11326.20 ms\n",
"llama_print_timings: sample time = 139.20 ms / 200 runs ( 0.70 ms per token, 1436.76 tokens per second)\n",
"llama_print_timings: prompt eval time = 1532.26 ms / 258 tokens ( 5.94 ms per token, 168.38 tokens per second)\n",
"llama_print_timings: eval time = 5977.62 ms / 199 runs ( 30.04 ms per token, 33.29 tokens per second)\n",
"llama_print_timings: total time = 7916.21 ms\n"
]
},
{
"data": {
"text/plain": [
"{'query': 'What are the approaches to Task Decomposition?',\n",
" 'result': ' Sure! Based on the context, here\\'s my answer to your:\\n\\nThere are several to task,:\\n\\n1. LLM-based with simple prompting, such as \"Steps for XYZ\" or \"What are the subgoals for achieving XYZ?\"\\n2. Task-specific, like \"Write a story outline\" for writing a novel.\\n3. Human inputs to guide the process.\\n\\nThese can be used to decompose complex tasks into smaller, more manageable subtasks, which can help improve the and effectiveness of task. However, long-term and task can being due to the need to plan over a lengthy history and explore the space., LLMs may to adjust plans when faced with errors, making them less robust to human learners who can learn from trial and error.'}"
]
},
"execution_count": 30,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"qa_chain({\"query\": question})"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.16"
}
},
"nbformat": 4,
"nbformat_minor": 5
}