{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# Different call methods\n", "\n", "All classes inherited from `Chain` offer a few ways of running chain logic. The most direct one is by using `__call__`:" ] }, { "cell_type": "code", "execution_count": 5, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "{'adjective': 'corny',\n", " 'text': 'Why did the tomato turn red? Because it saw the salad dressing!'}" ] }, "execution_count": 5, "metadata": {}, "output_type": "execute_result" } ], "source": [ "chat = ChatOpenAI(temperature=0)\n", "prompt_template = \"Tell me a {adjective} joke\"\n", "llm_chain = LLMChain(llm=chat, prompt=PromptTemplate.from_template(prompt_template))\n", "\n", "llm_chain(inputs={\"adjective\": \"corny\"})" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "By default, `__call__` returns both the input and output key values. You can configure it to only return output key values by setting `return_only_outputs` to `True`." ] }, { "cell_type": "code", "execution_count": 6, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "{'text': 'Why did the tomato turn red? Because it saw the salad dressing!'}" ] }, "execution_count": 6, "metadata": {}, "output_type": "execute_result" } ], "source": [ "llm_chain(\"corny\", return_only_outputs=True)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "If the `Chain` only outputs one output key (i.e. only has one element in its `output_keys`), you can use `run` method. Note that `run` outputs a string instead of a dictionary." ] }, { "cell_type": "code", "execution_count": 7, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "['text']" ] }, "execution_count": 7, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# llm_chain only has one output key, so we can use run\n", "llm_chain.output_keys" ] }, { "cell_type": "code", "execution_count": 8, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "'Why did the tomato turn red? Because it saw the salad dressing!'" ] }, "execution_count": 8, "metadata": {}, "output_type": "execute_result" } ], "source": [ "llm_chain.run({\"adjective\": \"corny\"})" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "In the case of one input key, you can input the string directly without specifying the input mapping." ] }, { "cell_type": "code", "execution_count": 9, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "{'adjective': 'corny',\n", " 'text': 'Why did the tomato turn red? Because it saw the salad dressing!'}" ] }, "execution_count": 9, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# These two are equivalent\n", "llm_chain.run({\"adjective\": \"corny\"})\n", "llm_chain.run(\"corny\")\n", "\n", "# These two are also equivalent\n", "llm_chain(\"corny\")\n", "llm_chain({\"adjective\": \"corny\"})" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Tips: You can easily integrate a `Chain` object as a `Tool` in your `Agent` via its `run` method. See an example [here](/docs/modules/agents/tools/how_to/custom_tools.html)." 
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] } ], "metadata": { "kernelspec": { "display_name": "Python 3 (ipykernel)", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.11.3" }, "vscode": { "interpreter": { "hash": "b1677b440931f40d89ef8be7bf03acb108ce003de0ac9b18e8d43753ea2e7103" } } }, "nbformat": 4, "nbformat_minor": 4 }