{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Amazon Personalize\n",
    "\n",
    "[Amazon Personalize](https://docs.aws.amazon.com/personalize/latest/dg/what-is-personalize.html) is a fully managed machine learning service that uses your data to generate item recommendations for your users. It can also generate user segments based on the users' affinity for certain items or item metadata.\n",
    "\n",
    "This notebook goes through how to use the Amazon Personalize Chain. You need an Amazon Personalize `campaign_arn` or `recommender_arn` before you get started with this notebook.\n",
    "\n",
    "The following [tutorial](https://github.com/aws-samples/retail-demo-store/blob/master/workshop/1-Personalization/Lab-1-Introduction-and-data-preparation.ipynb) walks through setting up a `campaign_arn`/`recommender_arn` on Amazon Personalize. Once the `campaign_arn`/`recommender_arn` is set up, you can use it in the LangChain ecosystem.\n"
   ]
  },
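  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "If you already have Personalize resources deployed but do not have the ARN handy, you can look it up with `boto3`. The cell below is an optional, minimal sketch that assumes default AWS credentials and the `us-west-2` region; it uses the standard `list_recommenders`/`list_campaigns` Personalize APIs and simply prints the ARNs it finds."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Optional: look up existing recommender/campaign ARNs with boto3.\n",
    "# Assumes default AWS credentials and region us-west-2 (adjust as needed).\n",
    "import boto3\n",
    "\n",
    "personalize = boto3.client(\"personalize\", region_name=\"us-west-2\")\n",
    "\n",
    "for recommender in personalize.list_recommenders()[\"recommenders\"]:\n",
    "    print(recommender[\"recommenderArn\"])\n",
    "for campaign in personalize.list_campaigns()[\"campaigns\"]:\n",
    "    print(campaign[\"campaignArn\"])"
   ]
  },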
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 1. Install Dependencies"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "scrolled": true
   },
   "outputs": [],
   "source": [
    "!pip install boto3"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 2. Sample Use-cases"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 2.1 [Use-case-1] Set up the Amazon Personalize client and retrieve recommendations"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from langchain_experimental.recommenders import AmazonPersonalize\n",
    "\n",
    "recommender_arn = \"<insert_arn>\"\n",
    "\n",
    "client = AmazonPersonalize(\n",
    "    credentials_profile_name=\"default\",\n",
    "    region_name=\"us-west-2\",\n",
    "    recommender_arn=recommender_arn,\n",
    ")\n",
    "client.get_recommendations(user_id=\"1\")"
   ]
  },
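  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "If you are working with a campaign rather than a recommender, the client can be constructed the same way. The cell below is an optional sketch that assumes `AmazonPersonalize` accepts a `campaign_arn` in place of `recommender_arn`, as the introduction above suggests; replace the placeholder ARN with your own."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Optional: the same client, pointed at a campaign instead of a recommender.\n",
    "# Assumes AmazonPersonalize accepts campaign_arn (see the introduction above).\n",
    "campaign_arn = \"<insert_arn>\"\n",
    "\n",
    "campaign_client = AmazonPersonalize(\n",
    "    credentials_profile_name=\"default\",\n",
    "    region_name=\"us-west-2\",\n",
    "    campaign_arn=campaign_arn,\n",
    ")\n",
    "campaign_client.get_recommendations(user_id=\"1\")"
   ]
  },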
  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false,
    "jupyter": {
     "outputs_hidden": false
    }
   },
   "source": [
    "### 2.2 [Use-case-2] Invoke Personalize Chain for summarizing results"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "collapsed": false,
    "jupyter": {
     "outputs_hidden": false
    }
   },
   "outputs": [],
   "source": [
    "from langchain.llms.bedrock import Bedrock\n",
    "from langchain_experimental.recommenders import AmazonPersonalizeChain\n",
    "\n",
    "bedrock_llm = Bedrock(model_id=\"anthropic.claude-v2\", region_name=\"us-west-2\")\n",
    "\n",
    "# Create the Personalize chain\n",
    "# Use return_direct=True if you do not want a summary\n",
    "chain = AmazonPersonalizeChain.from_llm(\n",
    "    llm=bedrock_llm, client=client, return_direct=False\n",
    ")\n",
    "response = chain({\"user_id\": \"1\"})\n",
    "print(response)"
   ]
  },
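  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The chain returns a dictionary of outputs. The cell below is an optional sketch that assumes the summary is returned under the `result` key, which is consistent with the `{result}` prompt variable used in the custom-prompt and sequential-chain examples later in this notebook."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Optional: pull just the generated summary out of the response dict.\n",
    "# Assumes the chain's output key is \"result\" (see the {result} prompt variable used below).\n",
    "summary = response.get(\"result\")\n",
    "print(summary)"
   ]
  },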
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 2.3 [Use-case-3] Invoke Amazon Personalize Chain using your own prompt"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from langchain.prompts.prompt import PromptTemplate\n",
    "\n",
    "RANDOM_PROMPT_QUERY = \"\"\"\n",
    "You are a skilled publicist. Write a high-converting marketing email advertising several movies available on a video-on-demand streaming platform next week,\n",
    "    given the movie and user information below. Your email will leverage the power of storytelling and persuasive language.\n",
    "    The movies to recommend and their information are contained in the <movie> tag.\n",
    "    All movies in the <movie> tag must be recommended. Give a summary of the movies and why the human should watch them.\n",
    "    Put the email between <email> tags.\n",
    "\n",
    "    <movie>\n",
    "    {result}\n",
    "    </movie>\n",
    "\n",
    "    Assistant:\n",
    "    \"\"\"\n",
    "\n",
    "RANDOM_PROMPT = PromptTemplate(input_variables=[\"result\"], template=RANDOM_PROMPT_QUERY)\n",
    "\n",
    "chain = AmazonPersonalizeChain.from_llm(\n",
    "    llm=bedrock_llm, client=client, return_direct=False, prompt_template=RANDOM_PROMPT\n",
    ")\n",
    "chain.run({\"user_id\": \"1\", \"item_id\": \"234\"})"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 2.4 [Use-case-4] Invoke Amazon Personalize in a Sequential Chain"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from langchain.chains import LLMChain, SequentialChain\n",
    "\n",
    "RANDOM_PROMPT_QUERY_2 = \"\"\"\n",
    "You are a skilled publicist. Write a high-converting marketing email advertising several movies available on a video-on-demand streaming platform next week,\n",
    "    given the movie and user information below. Your email will leverage the power of storytelling and persuasive language.\n",
    "    You want the email to impress the user, so make it appealing to them.\n",
    "    The movies to recommend and their information are contained in the <movie> tag.\n",
    "    All movies in the <movie> tag must be recommended. Give a summary of the movies and why the human should watch them.\n",
    "    Put the email between <email> tags.\n",
    "\n",
    "    <movie>\n",
    "    {result}\n",
    "    </movie>\n",
    "\n",
    "    Assistant:\n",
    "    \"\"\"\n",
    "\n",
    "RANDOM_PROMPT_2 = PromptTemplate(\n",
    "    input_variables=[\"result\"], template=RANDOM_PROMPT_QUERY_2\n",
    ")\n",
    "personalize_chain_instance = AmazonPersonalizeChain.from_llm(\n",
    "    llm=bedrock_llm, client=client, return_direct=True\n",
    ")\n",
    "random_chain_instance = LLMChain(llm=bedrock_llm, prompt=RANDOM_PROMPT_2)\n",
    "overall_chain = SequentialChain(\n",
    "    chains=[personalize_chain_instance, random_chain_instance],\n",
    "    input_variables=[\"user_id\"],\n",
    "    verbose=True,\n",
    ")\n",
    "overall_chain.run({\"user_id\": \"1\", \"item_id\": \"234\"})"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false,
    "jupyter": {
     "outputs_hidden": false
    }
   },
   "source": [
    "### 2.5 [Use-case-5] Invoke Amazon Personalize and retrieve metadata"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "collapsed": false,
    "jupyter": {
     "outputs_hidden": false
    }
   },
   "outputs": [],
   "source": [
    "recommender_arn = \"<insert_arn>\"\n",
    "metadata_column_names = [\n",
    "    \"<insert metadataColumnName-1>\",\n",
    "    \"<insert metadataColumnName-2>\",\n",
    "]\n",
    "metadataMap = {\"ITEMS\": metadata_column_names}\n",
    "\n",
    "client = AmazonPersonalize(\n",
    "    credentials_profile_name=\"default\",\n",
    "    region_name=\"us-west-2\",\n",
    "    recommender_arn=recommender_arn,\n",
    ")\n",
    "client.get_recommendations(user_id=\"1\", metadataColumns=metadataMap)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false,
    "jupyter": {
     "outputs_hidden": false
    }
   },
   "source": [
    "### 2.6 [Use-case-6] Invoke Personalize Chain with returned metadata for summarizing results"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "collapsed": false,
    "jupyter": {
     "outputs_hidden": false
    }
   },
   "outputs": [],
   "source": [
    "bedrock_llm = Bedrock(model_id=\"anthropic.claude-v2\", region_name=\"us-west-2\")\n",
    "\n",
    "# Create the Personalize chain\n",
    "# Use return_direct=True if you do not want a summary\n",
    "chain = AmazonPersonalizeChain.from_llm(\n",
    "    llm=bedrock_llm, client=client, return_direct=False\n",
    ")\n",
    "response = chain({\"user_id\": \"1\", \"metadata_columns\": metadataMap})\n",
    "print(response)"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3 (ipykernel)",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.11.7"
  },
  "vscode": {
   "interpreter": {
    "hash": "15e58ce194949b77a891bd4339ce3d86a9bd138e905926019517993f97db9e6c"
   }
  }
 },
 "nbformat": 4,
 "nbformat_minor": 4
}