langchain/docs/modules/models/llms/integrations/replicate.ipynb

{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Replicate\n",
"\n",
">[Replicate](https://replicate.com/blog/machine-learning-needs-better-tools) runs machine learning models in the cloud. We have a library of open-source models that you can run with a few lines of code. If you're building your own machine learning models, Replicate makes it easy to deploy them at scale.\n",
"\n",
"This example goes over how to use LangChain to interact with `Replicate` [models](https://replicate.com/explore)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Setup"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"To run this notebook, you'll need to create a [replicate](https://replicate.com) account and install the [replicate python client](https://github.com/replicate/replicate-python)."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"!pip install replicate"
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {
"tags": []
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
" ········\n"
]
}
],
"source": [
"# get a token: https://replicate.com/account\n",
"\n",
"from getpass import getpass\n",
"\n",
"REPLICATE_API_TOKEN = getpass()"
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"import os\n",
"\n",
"os.environ[\"REPLICATE_API_TOKEN\"] = REPLICATE_API_TOKEN"
]
},
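{
"cell_type": "markdown",
"metadata": {},
"source": [
"Optionally, we can sanity-check the token by fetching a model's metadata with the `replicate` client directly. This is a minimal sketch, assuming the client's `replicate.models.get` helper; it isn't required for the rest of the notebook."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Optional: verify the token by fetching a model's metadata.\n",
"# (Sketch: assumes the replicate client's `models.get` helper.)\n",
"import replicate\n",
"\n",
"replicate.models.get(\"replicate/dolly-v2-12b\")"
]
},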
{
"cell_type": "code",
"execution_count": 4,
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"from langchain.llms import Replicate\n",
"from langchain import PromptTemplate, LLMChain"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"## Calling a model\n",
"\n",
"Find a model on the [replicate explore page](https://replicate.com/explore), and then paste in the model name and version in this format: model_name/version\n",
"\n",
"For example, for this [dolly model](https://replicate.com/replicate/dolly-v2-12b), click on the API tab. The model name/version would be: `replicate/dolly-v2-12b:ef0e1aefc61f8e096ebe4db6b2bacc297daf2ef6899f0f7e001ec445893500e5`\n",
"\n",
"Only the `model` param is required, but we can add other model params when initializing.\n",
"\n",
"For example, if we were running stable diffusion and wanted to change the image dimensions:\n",
"\n",
"```\n",
"Replicate(model=\"stability-ai/stable-diffusion:db21e45d3f7023abc2a46ee38a23973f6dce16bb082a930b0c49861f96d1e5bf\", input={'image_dimensions': '512x512'})\n",
"```\n",
" \n",
"*Note that only the first output of a model will be returned.*"
]
},
{
"cell_type": "code",
"execution_count": 5,
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"llm = Replicate(model=\"replicate/dolly-v2-12b:ef0e1aefc61f8e096ebe4db6b2bacc297daf2ef6899f0f7e001ec445893500e5\")"
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'The legal driving age of dogs is 2. Cars are designed for humans to drive. Therefore, the final answer is yes.'"
]
},
"execution_count": 4,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"prompt = \"\"\"\n",
"Answer the following yes/no question by reasoning step by step. \n",
"Can a dog drive a car?\n",
"\"\"\"\n",
"llm(prompt)"
]
},
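{
"cell_type": "markdown",
"metadata": {},
"source": [
"As noted above, we can also pass model-specific parameters via `input` when initializing. This is a sketch: the exact parameter names come from the model's API tab on Replicate, and `max_length` and `temperature` here are illustrative assumptions."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Sketch: pass model-specific params via `input` at initialization.\n",
"# Check the model's API tab on Replicate for the exact parameter names.\n",
"llm_custom = Replicate(\n",
"    model=\"replicate/dolly-v2-12b:ef0e1aefc61f8e096ebe4db6b2bacc297daf2ef6899f0f7e001ec445893500e5\",\n",
"    input={\"max_length\": 200, \"temperature\": 0.75},\n",
")\n",
"llm_custom(prompt)"
]
},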
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We can call any replicate model using this syntax. For example, we can call stable diffusion."
]
},
{
"cell_type": "code",
"execution_count": 7,
"metadata": {},
"outputs": [],
"source": [
"text2image = Replicate(model=\"stability-ai/stable-diffusion:db21e45d3f7023abc2a46ee38a23973f6dce16bb082a930b0c49861f96d1e5bf\", \n",
" input={'image_dimensions': '512x512'})"
]
},
{
"cell_type": "code",
"execution_count": 8,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'https://replicate.delivery/pbxt/Cf07B1zqzFQLOSBQcKG7m9beE74wf7kuip5W9VxHJFembefKE/out-0.png'"
]
},
"execution_count": 8,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"image_output = text2image(\"A cat riding a motorcycle by Picasso\")\n",
"image_output"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The model spits out a URL. Let's render it."
]
},
{
"cell_type": "code",
"execution_count": 9,
"metadata": {},
"outputs": [
{
"data": {
"image/png": "iVBORw0KGgoAAAANSUhEUgAAAgAAAAIACAIAAAB7GkOtAAEAAElEQVR4nET9ybJsWXKmiWmzut2Z2Tm38y4CgUAmslLAkmRWcUBhccDH5AtwyufgjEKWlFQCSAQQ4eHutzuNme1mdarKwYUI53uwRVarv37/v/D//v/4f/r03evWX16/aC1yXKHsVF4sfy1PP7ty1fseUK1m7u3hsiRCqOUUaPI0IgWABMYijMoEoGrguwCAACkDgFE/KiIZWIrkGEGtFRCF8TIf9xdV0Y6lAwGa9uiYkbtqjAEdA2mTlo+Wd2E/HAKl9+v1lYkoBvMsJsvDiEjbfQ2GwXkGDAxD9KdTEpDgHDhUkSOX2u1+zeu6jSlcLmMKDowcEYJKr2lIiCZi+/0wBDAEIgaNMRBZr8U5x87XZuQYTZZl3LbCiNuW0blxSLU1g15b857VsDdw0bGjY9trQ8c4LUPOhZw7cj6OOp0Se8eMSioKVSRMwz0XJW7S0ilS8uiA0LwnHwERBbqiLucofTuOpjQVm57uepXL044S32Wedjh1vnR/ouHCaXRhkE73+4qIReS2brV3l2bpdRonFS05M5IhsAutCXv38ObxfnttvRNA8MHMWjmIY2ltmMY0jC3v0zy23vOxxhSmMcUYiLA1LaXVBk+fvwzT2KVr7yFEDsPL06fvvvtwf76dLrOqtga3+905LqU774gBkObT8PD+7ZdPn457W5Z5vkzrfQ3Oo6Pe9baWz79+VeHayum8HMceYsq5l3JnR8GjSlu3p3L/dPv1H/v6qV+vy0z/l//pw9//MPzhA55dXnyjXsbJUErySNaThxSc8yxF4+Di6MueyZAJRcQFZkdtLy7FfnQ/eARBRgRF9lI7GAAjE6JRa+qGiADkEEDakbUJqEGIPrm8Hmbk0sA89NJqOdJgNWc/j1Kt1T7OQ6+5965N2TsKgQgBoefKgThozQbkOIUuAGhZgk2XP3+J//gl/9PX/PmAp5vsu/38p98qxBrOf/xf/pcf/+4fTmk6v33grg4pjeN+u6bgay3Oub3sey0hhJJXQFDrIXJXXdesBt//8B5rcQxg+vzl6kIKzjG49bafT6de+ulyiT4FH8Z5WW/ren+5nE/S6raXnA8iIwBHvpe6bvf92NNpHk9vQxzTNLRajCjnQ7F3g/0oxOB8QKRAbphiiHT98kIBc7MvX57TMNTSXXDjMFjT3kxaH6bx9nTfbkdIYX299bVEH8Dsn/+//x/Ix8d/+e/r02fqR+9FpcUxYteH9/P/7f/6f/yv/8Pv/+HvTn79GOunQV8SZO8sMOpRlsdRVaVWR77lbobMoNIBjdlxiO0QRBuXcX+9qUBM3FsHBDWTJt45MARPIoII9cjj4znf63ieey55XQFteXvSzi2rS76saz+O8Ty10oE8BSe1IhhFBtVWa5pja71WCSObNVMjcuBcb0gAiIZs0hpyyFt1c2JSMFEXwMfXMv6vP29/KtNt/tDjW2Y/nZZjK9orm39/enj+029uzTwN3g2Uf2sOLR86ubCvOXRr69G34xzdQ0xZWvBuYj4PIfc2GuG9pjFg7U77KTkiK88rM4ZpBOaj9rpmBAkhnIdBmpnp8bIrQowc2DWxM4Fzlp/3GGN34IhBLDgkgq7mSZvIPJ+37XXvjQfmkQS4gy8z5KOYw+7oaAplBwPeyji6OfqYApg6tFPA4xDtNQxpXTfbdzR98DQOkBJPjGwiuTjvxillQb3nNA2qoqj9qOx58PG4lzGwQ+hqVuoYfGdAB/Ww9uU2LynEiEdBBL2vksv5cdxawQZxCFkVjyoqyfQyRgQv28aMZDpEoGmsrfdSyBM4VGQRDSazs3U9ppFZS/IELGD65t0jQm61uzHUujJnhCOdqCq//LIty/tPv93G+N1hSc1//PVleDMv7xcB//W3l/niSu+3lxXZx2VgNxC01vtxWxFoXibpXnvvouMybNsLKX75/HQcx7wMran29ubt47XDum8uhPv1qNWs95Ss115y3+4bv39L5Ld1ny+XJtJrHedlux+l5WFI63o8vh2iH4+jInsKybqtz8+9235fH948msHtdg9D0kaaYV/VUXz/3ftcc83WsJP3x5arQN5kmgPS8PXry9sf3t3X9b6vve9OIBd7+vVXBrt9/ud++3lweTnZ3/7+9Mc/vnsI69u3YF+frbfpHAK3VnYA0Nz8JTqQwYfWm9zvrXrvQBXYeUdqfSMElsoa/eA5dmu9rIUCOx9RhT0hQ7ntIhamxFBraX3XGL3VUteSlsGRgCHpzj6otbqtaR5NaztW9gCtWQcHULcNVEOkJormvAdTERHoOUwDoqqVnNW672J+Sg7CME3bv72WzNfrbunckD6/vMTHy/F1nx6XEFNkF33wTLev63KZa6tVDLX7FK7P1wbmYny533o55nkk4t4kq6zH7nz4/PTkEcgsjckvSxomqb0e5tMk6pz3hN7AlaruqEg4jPNtP+ZppGK13H10rfewTJdhPnJtffMKPsbgw3I6ffn0qbS6l0zOhSmxSCuta3XeA2B+WZ1DETtP5+1292k4cq25Rhh8MO2aSzXD9etVBIbzkvfy5fN1oHA+nb785WeHUbEP07x++kiAyac912U6r69Pkd3DaXqYo5R1//Tru7nsX3+b34YhEqF1Kbr1vO3DFECZAdlzr4VaZUcA3gMiS7nl0jKpWBdMkbWqNc/kzPpugDwt8347ujTsnc1bKf1AbcfgVFUs33pG9gFaiZY9t35/lWJhSd5CrYdKIwutdjMzLyDNIYzDWHNtW+bBMwVT7RXiyMPgt3xXdY7MgWxf15gonmezXo+j3nOGgOzVOVA6P77r8uXp68Yg76ZwB3CKbt22YRyXcc77dRkHy9sQxr4/BxfJBZW2rreItIwJQcueI9NxX2dyntz7t2+vn36BbpEozhMigpGnoXUpuaNnE/Z+ROllP2JITABk3idivX19FekcBp/SNEST0vZy3LYwpGEYRUWqrLfVFIZxMgBTGKeo1gN6Z+KnSDE0QHIGYnUpVvs4xjjGVipW2V9uwzwLeSmV1VhNW18e5rePS6t9CF5FG3YyA9NhGMWJllZrc4puiCLacnPOtb0JGhEYWDmqH2LwA0puptqsaAkpMNB9zZF4/XIPKdRWGrSYBlU7Xo/gXM3qnYpqbwYIHFwXFeBcq2UB7xqpG8LregBDOrnxcVQWN9mbd29u99fxHO8vLxyIgzjo23o9vRnfvn93fZH0sj7dbz5O93L9sleYv1vOFyNY5vF1tzTPihbDMEw9l7pdt96lIZKnDz/9dH+53l7urXUm18x4z3GaCLG0Ms0jErHn49h/+/oFjTiEYRg3PUQ0eM6lmQE759gduaSUapMvX57GYXAxKCFxjhSB/eV0cj6Sy9tW0GBfy3HPt5d1nFOMw7YfKQzDOCMSGPz6y6f1vqcQvn553m77fc3jlC5pfN1v+3acHxYRaaWz89t23O/rvESP7v70aX/+GMt6vPxG+XNwdVrw7//DT//xbx/OkzxMbvDNBohmMSKK9d4i8
/nNyI6QCFWZkQKnKQFAFyHmVmsrefTDcDl1UQKqR2NCTknUclU1jCmyQzc5BkAzEwJj7XJoY/Y0krrYzdqW0zQZMRkxY6ub1KzawTkiDCP33NvR/BiRiKOBYt4KIBAzhCQQ0BTYvMe6F3AcQjDxT5+u0zBpz2DmXYyTTadJ0LePtxEdgxGAStueb6fHJW+5lFxrbULn8zSf5irtqMUR+CkpdlMdp8ET+BSPfa/taABd+gDw+PDo4/T6/FlKd0TjtATv3n738PGvTyrmoz8vy/V+HT1Jt5fri4+emPZtPy3w8vJSpebW318u0fnW2+ePn8BhDFFMgSHvu6opgKnOQwzeX19el2ExiE/PV0UwQgULU3SOmwgAPLw5H+uxd9EuYGIij28v9Xrk9e6jZ+9JA7m4PL6rt6/ObJ5Px7aHEE7Lqdfay7aM8/hurs9Pc4hmMJ4XqKX0nvMekhPVEIP3tL6u1mVaRmISAzNBsDB4F4KqUhepkvdMAZP3GInRDKDstbemphi5lsaRjm0tR354mBF
"text/plain": [
"<PIL.PngImagePlugin.PngImageFile image mode=RGB size=512x512 at 0x12394BB20>"
]
},
"execution_count": 9,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from PIL import Image\n",
"import requests\n",
"from io import BytesIO\n",
"\n",
"response = requests.get(image_output)\n",
"img = Image.open(BytesIO(response.content))\n",
"\n",
"img"
]
},
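{
"cell_type": "markdown",
"metadata": {},
"source": [
"If we want to keep the result, we can save the downloaded image to disk with PIL (the filename below is arbitrary):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Save the generated image locally (filename is arbitrary).\n",
"img.save(\"cat_motorcycle.png\")"
]
},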
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Chaining Calls\n",
"The whole point of langchain is to... chain! Here's an example of how do that."
]
},
{
"cell_type": "code",
"execution_count": 10,
"metadata": {},
"outputs": [],
"source": [
"from langchain.chains import SimpleSequentialChain"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"First, let's define the LLM for this model as a flan-5, and text2image as a stable diffusion model."
]
},
{
"cell_type": "code",
"execution_count": 21,
"metadata": {},
"outputs": [],
"source": [
"dolly_llm = Replicate(model=\"replicate/dolly-v2-12b:ef0e1aefc61f8e096ebe4db6b2bacc297daf2ef6899f0f7e001ec445893500e5\")\n",
"text2image = Replicate(model=\"stability-ai/stable-diffusion:db21e45d3f7023abc2a46ee38a23973f6dce16bb082a930b0c49861f96d1e5bf\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"First prompt in the chain"
]
},
{
"cell_type": "code",
"execution_count": 22,
"metadata": {},
"outputs": [],
"source": [
"prompt = PromptTemplate(\n",
" input_variables=[\"product\"],\n",
" template=\"What is a good name for a company that makes {product}?\",\n",
")\n",
"\n",
"chain = LLMChain(llm=dolly_llm, prompt=prompt)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Second prompt to get the logo for company description"
]
},
{
"cell_type": "code",
"execution_count": 23,
"metadata": {},
"outputs": [],
"source": [
"second_prompt = PromptTemplate(\n",
" input_variables=[\"company_name\"],\n",
" template=\"Write a description of a logo for this company: {company_name}\",\n",
")\n",
"chain_two = LLMChain(llm=dolly_llm, prompt=second_prompt)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Third prompt, let's create the image based on the description output from prompt 2"
]
},
{
"cell_type": "code",
"execution_count": 24,
"metadata": {},
"outputs": [],
"source": [
"third_prompt = PromptTemplate(\n",
" input_variables=[\"company_logo_description\"],\n",
" template=\"{company_logo_description}\",\n",
")\n",
"chain_three = LLMChain(llm=text2image, prompt=third_prompt)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now let's run it!"
]
},
{
"cell_type": "code",
"execution_count": 25,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"\u001b[1m> Entering new SimpleSequentialChain chain...\u001b[0m\n",
"\u001b[36;1m\u001b[1;3mnovelty socks\u001b[0m\n",
"\u001b[33;1m\u001b[1;3mtodd & co.\u001b[0m\n",
"\u001b[38;5;200m\u001b[1;3mhttps://replicate.delivery/pbxt/BedAP1PPBwXFfkmeD7xDygXO4BcvApp1uvWOwUdHM4tcQfvCB/out-0.png\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n",
"https://replicate.delivery/pbxt/BedAP1PPBwXFfkmeD7xDygXO4BcvApp1uvWOwUdHM4tcQfvCB/out-0.png\n"
]
}
],
"source": [
"# Run the chain specifying only the input variable for the first chain.\n",
"overall_chain = SimpleSequentialChain(chains=[chain, chain_two, chain_three], verbose=True)\n",
"catchphrase = overall_chain.run(\"colorful socks\")\n",
"print(catchphrase)"
]
},
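{
"cell_type": "markdown",
"metadata": {},
"source": [
"The chain's final output (`catchphrase`) is the URL of the generated logo, so we can fetch and render it just like before. As a sketch, rendering the image from the run above would look like the commented lines below; the following cell renders a logo from a previous run instead."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Sketch: render the logo generated by the run above.\n",
"# response = requests.get(catchphrase)\n",
"# Image.open(BytesIO(response.content))"
]
},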
{
"cell_type": "code",
"execution_count": 70,
"metadata": {},
"outputs": [
{
"data": {
"image/png": "iVBORw0KGgoAAAANSUhEUgAAAwAAAAMACAIAAAAc45fZAAEAAElEQVR4nIz9d6AtyVUein/fqu69T7h57p07eUZ5NIqAJDIYRH5gGz9wwBgMGD8/jMHG4Tk988zPYBsw2IZHMAYDBhNsBLbIEhnJIJKkEZJG0mg0Od6Zm07Yu7vW9/ujQlfvc8Tzluaec3p3V1etWuFba1Wt4uHqijMIgQZQBAES+SOg/g4IAkhIIKByuf19408C+QnWK6z/ql5R+pOki6QJMJiDARyAQ+Ay8PQh3n0Fv/nup/7gnY8/+cR1YmmhV4SAAAkABTK1ptTV0hXlcaR+SHUgqSua30NCEqdm0n3p9vwHqdy4CDJ9RagSh5RqV8rold9UO5Nfml+RvsndUPMIy0yImXiZcOlH7VXbCstYmymEUl899bWZ4XQfOXUg9SG30nShmWeCE0OwvgBgIjHkxES0hkHy/WLTrtJ/hS2mjkggpDwtgFjbT/NVrpd2Nc1mblb1j8wjKjMuI9O3LoCWiVO7rMQPtTNkYaN6x0SgifXVzmV5bv7Z4M3p4tFbK0majhVaN6Q7IpyzljhRVhudq60cc6m8XEfv1eaV5kIWkDyPlQVUeLYMnKUraZI9qSCV6QaILIp57qQ6m66p4UIDiGTTkfRMeUUeO0EDdIQOaW6bSZxReOJdzCeqYWnVsbHcmvuXJ6debj9sJ7NcqW9tJlxF+nNTE/E2pAxZH0xzXsc16cb81OY0Nkp89kujr9pmN9X/US7aoPD8l0YQGs3TKsT5G1B6Ut9Vuf2oxfpwXdmkVHkpZ983HSwz2XRZm4PfGE/7Byf+LE/W9xe5bruj6fL0VeaeWedZ71FW6TO7p2p+RRVdNnt+0t1V+ZuowqwUAcIgBS4Cd5a4eDp8/AtuuuV02F3YmZN2/tTWyR0uFr4zrG01BHcL7DujvOtNEi2AMlEo+IJUlcsZseYURoMbMv3UEK4y5aSgq0ykAVHTRKUHMspwRQRqxb31OgId5JRlGjC/OgtJkkyVn6kTAFltqyRaM1WTNGaNIIqgPBsbEga4xPwNmIfGRHQYAB1CT0G/++TVN/7Bg39w3971p+jaWguh2+5oiDQ6BIMQvRKzAQFF79WuFI2kiThNl1MTdZzlckvadNtEfjRvQDHMLHYo6exWUGa6bxKNejlpHwpine/J6DXWiEUTcXY594fNbfO2J1ZSHeWMFNNd6aasRK1V5dM/jTpLPJ2hT7b/ZDJSnFGy9nGmiKfJqMiz6XGhqpo2pg43xijPW+6Iqs2tdPA0g1UdFR4sdCzEYRW/xr7MfINM39K9KnkzYFCVc+Gf0tOWzeb2q1Wsx2OhVnc2yjn1hqVrmt/b9IizH0cb14e/UBqdMeQRkzZ770xe5vw1kXiaKUFWmnMBbDUPQLmSl6NmmpFnME05Gx6rpiQLJTd7VglXx1BEtXl8GkaxQu3lo7a0ypYoAseRozw7kZEbpJ+M7oamKqIzM8MFcFLUhMPmYjpvAQVJYPPLmY3d+OroMI7cfBzTFsLMdMER7NWIxCRNx3UGTcd55E/Nfp1Uw7HSdPSxI38dJwNHetxeUCsgCc6zeXTiynRDkeNmvsqlGcknxVW/UKVANdNHhpauqbokpRfJuMyxGAk4J3Mp5ulSMNExMh6ux9XViEGMwqHZ/ukdf/6t5z79dTe/7mU33nxy9yQWp6A+uiJJMyYkBUGw9FoTBISsuLMMl95OurkwcnZaVFVw6hNp+YFGMbc+NSQYAQoOByxd90Rso/H64RpB5lE0gxNGo9yBydJnr0tuZkBW40Z6422WKaQEyEnLnZUmKbek51IUQpIsuWEgXBSNNCAaro/29if3f/rtT/3aB5978lmMY6C6hZuZyR0Gk8NJg9xt5kM2uqDY6ZlByVp3UsbppnTJWx4mlOYLRW2WEFg1M2BxIsHEM94oeBU+n0dhBBU/ltWQ5uazoa8syjRZZRYm1cGs6KpynwRBpkKGAlLb12ccIZJOQF561CrPmV4upFSRzCkeVaRRRVGBIJk9hw0brGrxErkbjZV4XIWD82XVrkk1ZsMyp+1cIfNllfGWhjVQVrtbhKiBhwS84ZIqUAJsbiJq85PlrNq8MGElUBHg6VE09paVIq16nlT2sR/Ofj9iwdC8d9KXmCncY59pbenG25Sx7WZPJrWVdW9+3sB2VJwksc5AJT8Bip64LDeR2F5SRqaiUZ7xoyRaka9qSXI/HZkFrfKbAKaokDvJ6QHlNsgskaxeWGHyHH+tcjRZ1AL168vLrHFTWycvIj3BiVxI3My5yWL7aHXYqlWtk54EWFWFbUxdoxFZ3JhWTKcY6PEccJSrOJs3Hv9VfXMT/mujp/PX5N9nElJ5aPIXmtfM+nj0w/mXDe023jl/YP6GD9P21N2NgRzXlNp+qujV+XOVV8gC8Yvpqv1IEfYSe5iIkFizgnaSXl+mYtAqCivPKXExASEBh2qA2JIMpYGiQhI7OjhCDCECckrwKAYaFBC5jr6/GnRwcksvvuPkZ37U+U964Q03n9veMW0pWla7JWWQdDuLdRVATfpDpW8TWRt92TC6FStUAzcbIaLaVCMcU4Df3Hh9NYDR5DTmDIJZMWXTDAGQUvxG6feaRyhvRQlLFB0HlLlV7ipFmFLWKnOnQYIYQAWN5CXxHU8d/Jfffvht7710bX8xem8x9CFQ6CwJvCOYezSj5znO2jMHgSYkwHTRq4mtIpz4JiuYaiwpNBgkKdCJmk0Ie2KV4n5mTT6zdF6VzDQFhXSJGydVB87QQjWFySfOb6wK0FiMSQZe7YSX74rI5DjFFICoSlvyIi5Vf0mglU5VjsxoZprzqslnuiy3rQo6682SCkJgjoNmJFfplakzKf0yWcW4odqcxvJWGWnhTKWJ2s5hEwHNppXJCM+ImWbX6ohrEKH8Pn2KRW45rCj3arPSb43bfUQrs8hzYcNqpgqn1ufaC5PGnWJSqEap4pDWQEyvViViY5WKzNQ+TZRu9RDrvQ2SkzWqACWYxiwKleUKc7BMLLKCVtK+nt9a/RWfiK+i7mtoLZM42QDbSAMkKdBsjLTsrqFIZVZzLTWS6NQ45XyiWmxdNNos8pL5ZFKGDXcICShqdm8RTLE0NxMlVgmfrrLYkKZraO6bhR7KjLfxe/yvfqbuz/hu1v3j+wC0YtGYqGbJQB1eoV9Jg7ZvY6XJ/P0zUWSJb/NYh6jl+w2BP37c7ZhbkzcT9KKkqp6fjXMyxY0ITtP5Yd6ahKcdZKv7JrM+2aQkRjN8nu4g4ZMD3wakq/ZPChYFqqccBAkH5UzWlhQ85gErgsEEwMxhBovuvUVp7eO6x+FHvvTMX3z9zR9568mbFsttRRskwWkGOQ2U4MncJJI2uukoOYpHXRbqFE5IFt5QA8CNZiyJiEbo6FUuDIHXVytByeJZ1tXmcsuyWKCrakOTm5VdsUxmB+EOMCkWzeeWBtAgh+gURQpOhE4iMZieiOHn3n3pv771kfc9cmU9LMRlZ30IxIiOBk8ROHqySJInAlhKpiWg0Mr6jKsaP
IDCiKj6LisRVk2miVxVh2xomGoYU4zL84QUgc6838hwmYXJ9E0zVIMfyOw3Vx2FU7OqztOfiVv/Iac/qlFqZSfRrMjOpFFJpthfNdEAOEGbejFLSCZYIVymOiubVWVTw2MT5TmFxVpPt6iKaURFEjIFkxBnwhYFk2kvzR2srD1nCrfanDz9Rf5LvK2JCSa/h8WuVKPbkKKMss5UgnfTUrYpjYL6c7YWa/ZpPPNqWCdK1jk97slJvU/qrzSjadIqJSaZnNRqEm0192fWZ2W3Gekmi1a7VH85Ki8bUWCWXpcoiypToCi4EqHMatsnc5ilt5GXHFae4hmZednGXa2w1+QgVq2VRWA2+VX+oGJcsoZt1ErhD5aJqJ6FKqEw42YUxV1ACRuWrtnmqj5KGLtBOBuou5B4Bs6muaiJpk23OGulYrQnJFdHOAUdjtGmH+bT8jhnV5tfGskv8aSJv2ZwJ301Qbyph7W1Gbu2Qz+2c3/Cl+3EbnSinZZ6S4nbzfyxgikb8d+kwgb3zH4psoFJqzY
"text/plain": [
"<PIL.PngImagePlugin.PngImageFile image mode=RGB size=768x768 at 0x11EFD83A0>"
]
},
"execution_count": 70,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"response = requests.get(\"https://replicate.delivery/pbxt/eq6foRJngThCAEBqse3nL3Km2MBfLnWQNd0Hy2SQRo2LuprCB/out-0.png\")\n",
"img = Image.open(BytesIO(response.content))\n",
"img"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.6"
},
"vscode": {
"interpreter": {
"hash": "a0a0263b650d907a3bfe41c0f8d6a63a071b884df3cfdc1579f00cdc1aed6b03"
}
}
},
"nbformat": 4,
"nbformat_minor": 4
}