{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Eden AI"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Eden AI is revolutionizing the AI landscape by uniting the best AI providers, empowering users to unlock limitless possibilities and tap into the true potential of artificial intelligence. With an all-in-one comprehensive and hassle-free platform, it allows users to deploy AI features to production lightning fast, enabling effortless access to the full breadth of AI capabilities via a single API. (website: https://edenai.co/)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"This example goes over how to use LangChain to interact with Eden AI models\n",
"\n",
"-----------------------------------------------------------------------------------\n"
]
},
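{
"cell_type": "markdown",
"metadata": {},
"source": [
"The `EdenAI` LLM ships in the `langchain-community` package, so make sure it is installed before running the examples below (a minimal sketch; adjust for your own environment or package manager):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%pip install --upgrade --quiet langchain-community"
]
},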
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Accessing the EDENAI's API requires an API key, \n",
"\n",
"which you can get by creating an account https://app.edenai.run/user/register and heading here https://app.edenai.run/admin/account/settings\n",
"\n",
"Once we have a key we'll want to set it as an environment variable by running:\n",
"\n",
"```bash\n",
"export EDENAI_API_KEY=\"...\"\n",
"```"
]
},
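{
"cell_type": "markdown",
"metadata": {},
"source": [
"Alternatively, you can set the key from Python itself. The cell below is a minimal sketch using the standard library's `getpass` module; it assumes you'd rather be prompted for the key than hard-code it in the notebook."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"from getpass import getpass\n",
"\n",
"# Prompt for the key and expose it through the EDENAI_API_KEY environment\n",
"# variable, which is read when no key is passed to the LLM explicitly.\n",
"if \"EDENAI_API_KEY\" not in os.environ:\n",
"    os.environ[\"EDENAI_API_KEY\"] = getpass(\"Enter your Eden AI API key: \")"
]
},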
{
"cell_type": "markdown",
"metadata": {},
"source": [
"If you'd prefer not to set an environment variable you can pass the key in directly via the edenai_api_key named parameter\n",
"\n",
" when initiating the EdenAI LLM class:\n",
"\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from langchain_community.llms import EdenAI"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"llm = EdenAI(edenai_api_key=\"...\", provider=\"openai\", temperature=0.2, max_tokens=250)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Calling a model\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The EdenAI API brings together various providers, each offering multiple models.\n",
"\n",
"To access a specific model, you can simply add 'model' during instantiation.\n",
"\n",
"For instance, let's explore the models provided by OpenAI, such as GPT3.5 "
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### text generation"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from langchain.chains import LLMChain\n",
"from langchain_core.prompts import PromptTemplate\n",
"\n",
"llm = EdenAI(\n",
" feature=\"text\",\n",
" provider=\"openai\",\n",
" model=\"gpt-3.5-turbo-instruct\",\n",
" temperature=0.2,\n",
" max_tokens=250,\n",
")\n",
"\n",
"prompt = \"\"\"\n",
"User: Answer the following yes/no question by reasoning step by step. Can a dog drive a car?\n",
"Assistant:\n",
"\"\"\"\n",
"\n",
"llm(prompt)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### image generation"
]
},
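{
"cell_type": "markdown",
"metadata": {},
"source": [
"The helper below decodes the base64 string returned by the image feature and displays it with Pillow. If Pillow isn't already installed in your environment (an assumption), install it first:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%pip install --upgrade --quiet pillow"
]
},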
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import base64\n",
"from io import BytesIO\n",
"\n",
"from PIL import Image\n",
"\n",
"\n",
"def print_base64_image(base64_string):\n",
" # Decode the base64 string into binary data\n",
" decoded_data = base64.b64decode(base64_string)\n",
"\n",
" # Create an in-memory stream to read the binary data\n",
" image_stream = BytesIO(decoded_data)\n",
"\n",
" # Open the image using PIL\n",
" image = Image.open(image_stream)\n",
"\n",
" # Display the image\n",
" image.show()"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"text2image = EdenAI(feature=\"image\", provider=\"openai\", resolution=\"512x512\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"image_output = text2image(\"A cat riding a motorcycle by Picasso\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"print_base64_image(image_output)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### text generation with callback"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler\n",
"from langchain_community.llms import EdenAI\n",
"\n",
"llm = EdenAI(\n",
" callbacks=[StreamingStdOutCallbackHandler()],\n",
" feature=\"text\",\n",
" provider=\"openai\",\n",
" temperature=0.2,\n",
" max_tokens=250,\n",
")\n",
"prompt = \"\"\"\n",
"User: Answer the following yes/no question by reasoning step by step. Can a dog drive a car?\n",
"Assistant:\n",
"\"\"\"\n",
"print(llm.invoke(prompt))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Chaining Calls"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from langchain.chains import LLMChain, SimpleSequentialChain\n",
"from langchain_core.prompts import PromptTemplate"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"llm = EdenAI(feature=\"text\", provider=\"openai\", temperature=0.2, max_tokens=250)\n",
"text2image = EdenAI(feature=\"image\", provider=\"openai\", resolution=\"512x512\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"prompt = PromptTemplate(\n",
" input_variables=[\"product\"],\n",
" template=\"What is a good name for a company that makes {product}?\",\n",
")\n",
"\n",
"chain = LLMChain(llm=llm, prompt=prompt)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"second_prompt = PromptTemplate(\n",
" input_variables=[\"company_name\"],\n",
" template=\"Write a description of a logo for this company: {company_name}, the logo should not contain text at all \",\n",
")\n",
"chain_two = LLMChain(llm=llm, prompt=second_prompt)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"third_prompt = PromptTemplate(\n",
" input_variables=[\"company_logo_description\"],\n",
" template=\"{company_logo_description}\",\n",
")\n",
"chain_three = LLMChain(llm=text2image, prompt=third_prompt)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Run the chain specifying only the input variable for the first chain.\n",
"overall_chain = SimpleSequentialChain(\n",
" chains=[chain, chain_two, chain_three], verbose=True\n",
")\n",
"output = overall_chain.run(\"hats\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# print the image\n",
"print_base64_image(output)"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.3"
}
},
"nbformat": 4,
"nbformat_minor": 2
}