{
"cells": [
{
"cell_type": "markdown",
"id": "25a3f834-60b7-4c21-bfb4-ad16d30fd3f7",
"metadata": {},
"source": [
"# Amazon Comprehend Moderation Chain\n",
"---"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "2c4236d8-4054-473d-84a4-87a4db278a62",
"metadata": {},
"outputs": [],
"source": [
"%pip install boto3 nltk"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "3f8518ad-c762-413c-b8c9-f1c211fc311d",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"import boto3\n",
"\n",
"comprehend_client = boto3.client('comprehend', region_name='us-east-1')"
]
},
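{
"cell_type": "markdown",
"id": "7f3d2a1b-8c4e-4f5a-9d6b-0e1f2a3b4c5d",
"metadata": {},
"source": [
"If your AWS credentials live in a named profile rather than the default credential chain, you can build the client from a `boto3` session instead. This is a minimal sketch; the profile name below is a placeholder."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "9a8b7c6d-5e4f-4a3b-8c2d-1e0f9a8b7c6d",
"metadata": {},
"outputs": [],
"source": [
"# Optional: create the Comprehend client from a named credentials profile\n",
"session = boto3.Session(profile_name='my-profile')  # placeholder profile name\n",
"comprehend_client = session.client('comprehend', region_name='us-east-1')"
]
},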
{
"cell_type": "markdown",
"id": "d1f0ba28",
"metadata": {},
"source": [
"Import `AmazonComprehendModerationChain`"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "74550d74-3c01-4ba7-ad32-ca66d955d001",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"from langchain_experimental.comprehend_moderation import AmazonComprehendModerationChain"
]
},
{
"cell_type": "markdown",
"id": "f00c338b-de9f-40e5-9295-93c9e26058e3",
"metadata": {},
"source": [
"Initialize an instance of the Amazon Comprehend Moderation Chain to be used with your LLM chain"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "cde58cc6-ff83-493a-9aed-93d755f984a7",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"comprehend_moderation = AmazonComprehendModerationChain(\n",
" client=comprehend_client, #optional\n",
" verbose=True\n",
")"
]
},
{
"cell_type": "markdown",
"id": "ad646d01-82d2-435a-939b-c450693857ab",
"metadata": {},
"source": [
"Using it with your LLM chain. \n",
"\n",
"**Note**: The example below uses the _Fake LLM_ from LangChain, but same concept could be applied to other LLMs."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "0efa1946-d4a9-467a-920a-a8fb78720fc2",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"from langchain.prompts import PromptTemplate\nfrom langchain.chains import LLMChain\n",
"from langchain.llms.fake import FakeListLLM\n",
"from langchain_experimental.comprehend_moderation.base_moderation_exceptions import ModerationPiiError\n",
"\n",
"template = \"\"\"Question: {question}\n",
"\n",
"Answer:\"\"\"\n",
"\n",
"prompt = PromptTemplate(template=template, input_variables=[\"question\"])\n",
"\n",
"responses = [\n",
" \"Final Answer: A credit card number looks like 1289-2321-1123-2387. A fake SSN number looks like 323-22-9980. John Doe's phone number is (999)253-9876.\", \n",
" \"Final Answer: This is a really shitty way of constructing a birdhouse. This is fucking insane to think that any birds would actually create their motherfucking nests here.\"\n",
"]\n",
"llm = FakeListLLM(responses=responses)\n",
"\n",
"llm_chain = LLMChain(prompt=prompt, llm=llm)\n",
"\n",
"chain = (\n",
" prompt \n",
" | comprehend_moderation \n",
" | {llm_chain.input_keys[0]: lambda x: x['output'] } \n",
" | llm_chain \n",
" | { \"input\": lambda x: x['text'] } \n",
" | comprehend_moderation \n",
")\n",
"\n",
"try:\n",
" response = chain.invoke({\"question\": \"A sample SSN number looks like this 123-456-7890. Can you give me some more samples?\"})\n",
"except ModerationPiiError as e:\n",
" print(e.message)\n",
"else:\n",
" print(response['output'])\n"
]
},
{
"cell_type": "markdown",
"id": "6da25d96-0d96-4c01-94ae-a2ead17f10aa",
"metadata": {},
"source": [
"## Using `moderation_config` to customize your moderation\n",
"---"
]
},
{
"cell_type": "markdown",
"id": "bfd550e7-5012-41fa-9546-8b78ddf1c673",
"metadata": {},
"source": [
"Use Amazon Comprehend Moderation with a configuration to control what moderations you wish to perform and what actions should be taken for each of them. There are three different moderations that happen when no configuration is passed as demonstrated above. These moderations are:\n",
"\n",
"- PII (Personally Identifiable Information) checks \n",
"- Toxicity content detection\n",
"- Intention detection\n",
"\n",
"Here is an example of a moderation config."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "d6e8900a-44ef-4967-bde8-b88af282139d",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"from langchain_experimental.comprehend_moderation import BaseModerationActions, BaseModerationFilters\n",
"\n",
"moderation_config = { \n",
" \"filters\":[ \n",
" BaseModerationFilters.PII, \n",
" BaseModerationFilters.TOXICITY,\n",
" BaseModerationFilters.INTENT\n",
" ],\n",
" \"pii\":{ \n",
" \"action\": BaseModerationActions.ALLOW, \n",
" \"threshold\":0.5, \n",
" \"labels\":[\"SSN\"],\n",
" \"mask_character\": \"X\"\n",
" },\n",
" \"toxicity\":{ \n",
" \"action\": BaseModerationActions.STOP, \n",
" \"threshold\":0.5\n",
" },\n",
" \"intent\":{ \n",
" \"action\": BaseModerationActions.STOP, \n",
" \"threshold\":0.5\n",
" }\n",
"}"
]
},
{
"cell_type": "markdown",
"id": "3634376b-5938-43df-9ed6-70ca7e99290f",
"metadata": {},
"source": [
"At the core of the configuration you have three filters specified in the `filters` key:\n",
"\n",
"1. `BaseModerationFilters.PII`\n",
"2. `BaseModerationFilters.TOXICITY`\n",
"3. `BaseModerationFilters.INTENT`\n",
"\n",
"And an `action` key that defines two possible actions for each moderation function:\n",
"\n",
"1. `BaseModerationActions.ALLOW` - `allows` the prompt to pass through but masks detected PII in case of PII check. The default behavior is to run and redact all PII entities. If there is an entity specified in the `labels` field, then only those entities will go through the PII check and masked.\n",
"2. `BaseModerationActions.STOP` - `stops` the prompt from passing through to the next step in case any PII, Toxicity, or incorrect Intent is detected. The action of `BaseModerationActions.STOP` will raise a Python `Exception` essentially stopping the chain in progress.\n",
"\n",
"Using the configuration in the previous cell will perform PII checks and will allow the prompt to pass through however it will mask any SSN numbers present in either the prompt or the LLM output.\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "3a4f7e65-f733-4863-ae6d-34c9faffd849",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"comp_moderation_with_config = AmazonComprehendModerationChain(\n",
" moderation_config=moderation_config, #specify the configuration\n",
" client=comprehend_client, #optionally pass the Boto3 Client\n",
" verbose=True\n",
")"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "a25e6f93-765b-4f99-8c1c-929157dbd4aa",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"template = \"\"\"Question: {question}\n",
"\n",
"Answer:\"\"\"\n",
"\n",
"prompt = PromptTemplate(template=template, input_variables=[\"question\"])\n",
"\n",
"responses = [\n",
" \"Final Answer: A credit card number looks like 1289-2321-1123-2387. A fake SSN number looks like 323-22-9980. John Doe's phone number is (999)253-9876.\", \n",
" \"Final Answer: This is a really shitty way of constructing a birdhouse. This is fucking insane to think that any birds would actually create their motherfucking nests here.\"\n",
"]\n",
"llm = FakeListLLM(responses=responses)\n",
"\n",
"llm_chain = LLMChain(prompt=prompt, llm=llm)\n",
"\n",
"chain = ( \n",
" prompt \n",
" | comp_moderation_with_config \n",
" | {llm_chain.input_keys[0]: lambda x: x['output'] } \n",
" | llm_chain \n",
" | { \"input\": lambda x: x['text'] } \n",
" | comp_moderation_with_config \n",
")\n",
"\n",
"try:\n",
" response = chain.invoke({\"question\": \"A sample SSN number looks like this 123-456-7890. Can you give me some more samples?\"})\n",
"except Exception as e:\n",
" print(str(e))\n",
"else:\n",
" print(response['output'])"
]
},
{
"cell_type": "markdown",
"id": "ba890681-feeb-43ca-a0d5-9c11d2d9de3e",
"metadata": {
"tags": []
},
"source": [
"## Unique ID, and Moderation Callbacks\n",
"---\n",
"\n",
"When Amazon Comprehend moderation action is specified as `STOP`, the chain will raise one of the following exceptions-\n",
" - `ModerationPiiError`, for PII checks\n",
" - `ModerationToxicityError`, for Toxicity checks \n",
" - `ModerationIntentionError` for Intent checks\n",
"\n",
"In addition to the moderation configuration, the `AmazonComprehendModerationChain` can also be initialized with the following parameters\n",
"\n",
"- `unique_id` [Optional] a string parameter. This parameter can be used to pass any string value or ID. For example, in a chat application you may want to keep track of abusive users, in this case you can pass the user's username/email id etc. This defaults to `None`.\n",
"\n",
"- `moderation_callback` [Optional] the `BaseModerationCallbackHandler` that will be called asynchronously (non-blocking to the chain). Callback functions are useful when you want to perform additional actions when the moderation functions are executed, for example logging into a database, or writing a log file. You can override three functions by subclassing `BaseModerationCallbackHandler` - `on_after_pii()`, `on_after_toxicity()`, and `on_after_intent()`. Note that all three functions must be `async` functions. These callback functions receive two arguments:\n",
" - `moderation_beacon` a dictionary that will contain information about the moderation function, the full response from Amazon Comprehend model, a unique chain id, the moderation status, and the input string which was validated. The dictionary is of the following schema-\n",
" \n",
" ```\n",
" { \n",
" 'moderation_chain_id': 'xxx-xxx-xxx', # Unique chain ID\n",
" 'moderation_type': 'Toxicity' | 'PII' | 'Intent', \n",
" 'moderation_status': 'LABELS_FOUND' | 'LABELS_NOT_FOUND',\n",
" 'moderation_input': 'A sample SSN number looks like this 123-456-7890. Can you give me some more samples?',\n",
" 'moderation_output': {...} #Full Amazon Comprehend PII, Toxicity, or Intent Model Output\n",
" }\n",
" ```\n",
" \n",
" - `unique_id` if passed to the `AmazonComprehendModerationChain`"
]
},
{
"cell_type": "markdown",
"id": "3c178835-0264-4ac6-aef4-091d2993d06c",
"metadata": {},
"source": [
"<div class=\"alert alert-block alert-info\"> <b>NOTE:</b> <code>moderation_callback</code> is different from LangChain Chain Callbacks. You can still use LangChain Chain callbacks with <code>AmazonComprehendModerationChain</code> via the callbacks parameter. Example: <br/>\n",
"<pre>\n",
"from langchain.callbacks.stdout import StdOutCallbackHandler\n",
"comp_moderation_with_config = AmazonComprehendModerationChain(verbose=True, callbacks=[StdOutCallbackHandler()])\n",
"</pre>\n",
"</div>"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "0ec38536-8cc9-408e-860b-e4a439283643",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"from langchain_experimental.comprehend_moderation import BaseModerationCallbackHandler"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "1be744c7-3f99-4165-bf7f-9c5c249bbb53",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"# Define callback handlers by subclassing BaseModerationCallbackHandler\n",
"\n",
"class MyModCallback(BaseModerationCallbackHandler):\n",
" \n",
" async def on_after_pii(self, output_beacon, unique_id):\n",
" import json\n",
" moderation_type = output_beacon['moderation_type']\n",
" chain_id = output_beacon['moderation_chain_id']\n",
" with open(f'output-{moderation_type}-{chain_id}.json', 'w') as file:\n",
" data = { 'beacon_data': output_beacon, 'unique_id': unique_id }\n",
" json.dump(data, file)\n",
" \n",
" '''\n",
" async def on_after_toxicity(self, output_beacon, unique_id):\n",
" pass\n",
" \n",
" async def on_after_intent(self, output_beacon, unique_id):\n",
" pass\n",
" '''\n",
" \n",
"\n",
"my_callback = MyModCallback()"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "362a3fe0-f09f-411e-9df1-d79b3e87510c",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"moderation_config = { \n",
" \"filters\": [ \n",
" BaseModerationFilters.PII, \n",
" BaseModerationFilters.TOXICITY\n",
" ],\n",
" \"pii\":{ \n",
" \"action\": BaseModerationActions.STOP, \n",
" \"threshold\":0.5, \n",
" \"labels\":[\"SSN\"], \n",
" \"mask_character\": \"X\" \n",
" },\n",
" \"toxicity\":{ \n",
" \"action\": BaseModerationActions.STOP, \n",
" \"threshold\":0.5 \n",
" }\n",
"}\n",
"\n",
"comp_moderation_with_config = AmazonComprehendModerationChain(\n",
" moderation_config=moderation_config, # specify the configuration\n",
" client=comprehend_client, # optionally pass the Boto3 Client\n",
" unique_id='john.doe@email.com', # A unique ID\n",
" moderation_callback=my_callback, # BaseModerationCallbackHandler\n",
" verbose=True\n",
")"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "2af07937-67ea-4738-8343-c73d4d28c2cc",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"from langchain.prompts import PromptTemplate\nfrom langchain.chains import LLMChain\n",
"from langchain.llms.fake import FakeListLLM\n",
"\n",
"template = \"\"\"Question: {question}\n",
"\n",
"Answer:\"\"\"\n",
"\n",
"prompt = PromptTemplate(template=template, input_variables=[\"question\"])\n",
"\n",
"responses = [\n",
" \"Final Answer: A credit card number looks like 1289-2321-1123-2387. A fake SSN number looks like 323-22-9980. John Doe's phone number is (999)253-9876.\", \n",
" \"Final Answer: This is a really shitty way of constructing a birdhouse. This is fucking insane to think that any birds would actually create their motherfucking nests here.\"\n",
"]\n",
"\n",
"llm = FakeListLLM(responses=responses)\n",
"\n",
"llm_chain = LLMChain(prompt=prompt, llm=llm)\n",
"\n",
"chain = (\n",
" prompt \n",
" | comp_moderation_with_config \n",
" | {llm_chain.input_keys[0]: lambda x: x['output'] } \n",
" | llm_chain \n",
" | { \"input\": lambda x: x['text'] } \n",
" | comp_moderation_with_config \n",
") \n",
"\n",
"try:\n",
" response = chain.invoke({\"question\": \"A sample SSN number looks like this 123-456-7890. Can you give me some more samples?\"})\n",
"except Exception as e:\n",
" print(str(e))\n",
"else:\n",
" print(response['output'])"
]
},
{
"cell_type": "markdown",
"id": "706454b2-2efa-4d41-abc8-ccf2b4e87822",
"metadata": {
"tags": []
},
"source": [
"## `moderation_config` and moderation execution order\n",
"---\n",
"\n",
"If `AmazonComprehendModerationChain` is not initialized with any `moderation_config` then the default action is `STOP` and default order of moderation check is as follows.\n",
"\n",
"```\n",
"AmazonComprehendModerationChain\n",
"│\n",
"└──Check PII with Stop Action\n",
" ├── Callback (if available)\n",
" ├── Label Found ⟶ [Error Stop]\n",
" └── No Label Found \n",
" └──Check Toxicity with Stop Action\n",
" ├── Callback (if available)\n",
" ├── Label Found ⟶ [Error Stop]\n",
" └── No Label Found\n",
" └──Check Intent with Stop Action\n",
" ├── Callback (if available)\n",
" ├── Label Found ⟶ [Error Stop]\n",
" └── No Label Found\n",
" └── Return Prompt\n",
"```\n",
"\n",
"If any of the check raises exception then the subsequent checks will not be performed. If a `callback` is provided in this case, then it will be called for each of the checks that have been performed. For example, in the case above, if the Chain fails due to presence of PII then the Toxicity and Intent checks will not be performed.\n",
"\n",
"You can override the execution order by passing `moderation_config` and simply specifying the desired order in the `filters` key of the configuration. In case you use `moderation_config` then the order of the checks as specified in the `filters` key will be maintained. For example, in the configuration below, first Toxicity check will be performed, then PII, and finally Intent validation will be performed. In this case, `AmazonComprehendModerationChain` will perform the desired checks in the specified order with default values of each model `kwargs`.\n",
"\n",
"```python\n",
"moderation_config = { \n",
" \"filters\":[ BaseModerationFilters.TOXICITY, \n",
" BaseModerationFilters.PII, \n",
" BaseModerationFilters.INTENT]\n",
" }\n",
"```\n",
"\n",
"Model `kwargs` are specified by the `pii`, `toxicity`, and `intent` keys within the `moderation_config` dictionary. For example, in the `moderation_config` below, the default order of moderation is overriden and the `pii` & `toxicity` model `kwargs` have been overriden. For `intent` the chain's default `kwargs` will be used.\n",
"\n",
"```python\n",
" moderation_config = { \n",
" \"filters\":[ BaseModerationFilters.TOXICITY, \n",
" BaseModerationFilters.PII, \n",
" BaseModerationFilters.INTENT],\n",
" \"pii\":{ \"action\": BaseModerationActions.ALLOW, \n",
" \"threshold\":0.5, \n",
" \"labels\":[\"SSN\"], \n",
" \"mask_character\": \"X\" },\n",
" \"toxicity\":{ \"action\": BaseModerationActions.STOP, \n",
" \"threshold\":0.5 }\n",
" }\n",
"```\n",
"\n",
"1. For a list of PII labels see Amazon Comprehend Universal PII entity types - https://docs.aws.amazon.com/comprehend/latest/dg/how-pii.html#how-pii-types\n",
"2. Following are the list of available Toxicity labels-\n",
" - `HATE_SPEECH`: Speech that criticizes, insults, denounces or dehumanizes a person or a group on the basis of an identity, be it race, ethnicity, gender identity, religion, sexual orientation, ability, national origin, or another identity-group.\n",
" - `GRAPHIC`: Speech that uses visually descriptive, detailed and unpleasantly vivid imagery is considered as graphic. Such language is often made verbose so as to amplify an insult, discomfort or harm to the recipient.\n",
" - `HARASSMENT_OR_ABUSE`: Speech that imposes disruptive power dynamics between the speaker and hearer, regardless of intent, seeks to affect the psychological well-being of the recipient, or objectifies a person should be classified as Harassment.\n",
" - `SEXUAL`: Speech that indicates sexual interest, activity or arousal by using direct or indirect references to body parts or physical traits or sex is considered as toxic with toxicityType \"sexual\". \n",
" - `VIOLENCE_OR_THREAT`: Speech that includes threats which seek to inflict pain, injury or hostility towards a person or group.\n",
" - `INSULT`: Speech that includes demeaning, humiliating, mocking, insulting, or belittling language.\n",
" - `PROFANITY`: Speech that contains words, phrases or acronyms that are impolite, vulgar, or offensive is considered as profane.\n",
"3. For a list of Intent labels refer to documentation [link here]"
]
},
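{
"cell_type": "markdown",
"id": "b3c1a7e2-5d4f-4c8a-9b6e-1f2a3c4d5e6f",
"metadata": {},
"source": [
"As a sketch of point 1 above, the configuration below runs only the PII check and masks a few of the universal PII entity types; the `EMAIL`, `PHONE`, and `SSN` labels are taken from the Amazon Comprehend PII documentation linked above."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "e7f8a9b0-1c2d-4e3f-8a5b-6c7d8e9f0a1b",
"metadata": {},
"outputs": [],
"source": [
"# Sketch: PII-only moderation that masks emails, phone numbers, and SSNs\n",
"moderation_config = {\n",
"    \"filters\": [BaseModerationFilters.PII],\n",
"    \"pii\": {\n",
"        \"action\": BaseModerationActions.ALLOW,  # allow the prompt through, masking matches\n",
"        \"threshold\": 0.5,  # minimum detection confidence\n",
"        \"labels\": [\"EMAIL\", \"PHONE\", \"SSN\"],  # entity types to check and mask\n",
"        \"mask_character\": \"*\"\n",
"    }\n",
"}"
]
},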
{
"cell_type": "markdown",
"id": "78905aec-55ae-4fc3-a23b-8a69bd1e33f2",
"metadata": {},
"source": [
"# Examples\n",
"---\n",
"\n",
"## With Hugging Face Hub Models\n",
"\n",
"Get your API Key from Hugging Face hub - https://huggingface.co/docs/api-inference/quicktour#get-your-api-token"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "359b9627-769b-46ce-8be2-c8a5cf7728ba",
"metadata": {
"scrolled": true,
"tags": []
},
"outputs": [],
"source": [
"%pip install huggingface_hub"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "41b7ea98-ad16-4454-8f12-c03c17113a86",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"%env HUGGINGFACEHUB_API_TOKEN=\"<HUGGINGFACEHUB_API_TOKEN>\""
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "3b235427-cc06-4c07-874b-1f67c2d1f924",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"# See https://huggingface.co/models?pipeline_tag=text-generation&sort=downloads for some other options\n",
"repo_id = \"google/flan-t5-xxl\" \n"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "9d86e256-34fb-4c8e-8092-1a4f863a5c96",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"from langchain.llms import HuggingFaceHub\n",
"from langchain.prompts import PromptTemplate\nfrom langchain.chains import LLMChain\n",
"\n",
"template = \"\"\"Question: {question}\n",
"\n",
"Answer:\"\"\"\n",
"\n",
"prompt = PromptTemplate(template=template, input_variables=[\"question\"])\n",
"\n",
"llm = HuggingFaceHub(\n",
" repo_id=repo_id, model_kwargs={\"temperature\": 0.5, \"max_length\": 256}\n",
")\n",
"llm_chain = LLMChain(prompt=prompt, llm=llm)"
]
},
{
"cell_type": "markdown",
"id": "ad603796-ad8b-4599-9022-a486f1c1b89a",
"metadata": {},
"source": [
"Create a configuration and initialize an Amazon Comprehend Moderation chain"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "decc3409-5be5-433d-b6da-38b9e5c5ee3f",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"moderation_config = { \n",
" \"filters\":[ BaseModerationFilters.PII, BaseModerationFilters.TOXICITY, BaseModerationFilters.INTENT ],\n",
" \"pii\":{\"action\": BaseModerationActions.ALLOW, \"threshold\":0.5, \"labels\":[\"SSN\",\"CREDIT_DEBIT_NUMBER\"], \"mask_character\": \"X\"},\n",
" \"toxicity\":{\"action\": BaseModerationActions.STOP, \"threshold\":0.5},\n",
" \"intent\":{\"action\": BaseModerationActions.ALLOW, \"threshold\":0.5,},\n",
" }\n",
"\n",
"# without any callback\n",
"amazon_comp_moderation = AmazonComprehendModerationChain(moderation_config=moderation_config, \n",
" client=comprehend_client,\n",
" verbose=True)\n",
"\n",
"# with callback\n",
"amazon_comp_moderation_out = AmazonComprehendModerationChain(moderation_config=moderation_config, \n",
" client=comprehend_client,\n",
" moderation_callback=my_callback,\n",
" verbose=True)"
]
},
{
"cell_type": "markdown",
"id": "b1256bc8-1321-4624-9e8a-a2d4a8df59bf",
"metadata": {},
"source": [
"The `moderation_config` will now prevent any inputs and model outputs containing obscene words or sentences, bad intent, or PII with entities other than SSN with score above threshold or 0.5 or 50%. If it finds Pii entities - SSN - it will redact them before allowing the call to proceed. "
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "0337becc-7c3c-483e-a55c-a225226cb9ee",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"chain = (\n",
" prompt \n",
" | amazon_comp_moderation \n",
" | {llm_chain.input_keys[0]: lambda x: x['output'] } \n",
" | llm_chain \n",
" | { \"input\": lambda x: x['text'] } \n",
" | amazon_comp_moderation_out\n",
")\n",
"\n",
"try:\n",
" response = chain.invoke({\"question\": \"My AnyCompany Financial Services, LLC credit card account 1111-0000-1111-0008 has 24$ due by July 31st. Can you give me some more credit car number samples?\"})\n",
"except Exception as e:\n",
" print(str(e))\n",
"else:\n",
" print(response['output'])"
]
},
{
"cell_type": "markdown",
"id": "ee52c7b8-6526-4f68-a2b3-b5ad3cf82489",
"metadata": {
"tags": []
},
"source": [
"---\n",
"## With Amazon SageMaker Jumpstart\n",
"\n",
"The exmaple below shows how to use Amazon Comprehend Moderation chain with an Amazon SageMaker Jumpstart hosted LLM. You should have an Amazon SageMaker Jumpstart hosted LLM endpoint within your AWS Account. "
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "cd49d075-bc23-4ab8-a92c-0ddbbc436c30",
"metadata": {},
"outputs": [],
"source": [
"endpoint_name = \"<SAGEMAKER_ENDPOINT_NAME>\" # replace with your SageMaker Endpoint name"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "5978a5e6-667d-4926-842c-d965f88e5640",
"metadata": {},
"outputs": [],
"source": [
"from langchain.llms import SagemakerEndpoint\n",
"from langchain.llms.sagemaker_endpoint import LLMContentHandler\n",
"from langchain.chains import LLMChain\n",
"from langchain.prompts import load_prompt, PromptTemplate\n",
"import json\n",
"\n",
"class ContentHandler(LLMContentHandler):\n",
" content_type = \"application/json\"\n",
" accepts = \"application/json\"\n",
"\n",
" def transform_input(self, prompt: str, model_kwargs: dict) -> bytes:\n",
" input_str = json.dumps({\"text_inputs\": prompt, **model_kwargs})\n",
" return input_str.encode('utf-8')\n",
" \n",
" def transform_output(self, output: bytes) -> str:\n",
" response_json = json.loads(output.read().decode(\"utf-8\"))\n",
" return response_json['generated_texts'][0]\n",
"\n",
"content_handler = ContentHandler()\n",
"\n",
"#prompt template for input text\n",
"llm_prompt = PromptTemplate(input_variables=[\"input_text\"], template=\"{input_text}\")\n",
"\n",
"llm_chain = LLMChain(\n",
" llm=SagemakerEndpoint(\n",
" endpoint_name=endpoint_name, \n",
" region_name='us-east-1',\n",
" model_kwargs={\"temperature\":0.97,\n",
" \"max_length\": 200,\n",
" \"num_return_sequences\": 3,\n",
" \"top_k\": 50,\n",
" \"top_p\": 0.95,\n",
" \"do_sample\": True},\n",
" content_handler=content_handler\n",
" ),\n",
" prompt=llm_prompt\n",
")"
]
},
{
"cell_type": "markdown",
"id": "d577b036-99a4-47fe-9a8e-4a34aa4cd88d",
"metadata": {},
"source": [
"Create a configuration and initialize an Amazon Comprehend Moderation chain"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "859da135-94d3-4a9c-970e-a873913592e2",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"moderation_config = { \n",
" \"filters\":[ BaseModerationFilters.PII, BaseModerationFilters.TOXICITY ],\n",
" \"pii\":{\"action\": BaseModerationActions.ALLOW, \"threshold\":0.5, \"labels\":[\"SSN\"], \"mask_character\": \"X\"},\n",
" \"toxicity\":{\"action\": BaseModerationActions.STOP, \"threshold\":0.5},\n",
" \"intent\":{\"action\": BaseModerationActions.ALLOW, \"threshold\":0.5,},\n",
" }\n",
"\n",
"amazon_comp_moderation = AmazonComprehendModerationChain(moderation_config=moderation_config, \n",
" client=comprehend_client ,\n",
" verbose=True)"
]
},
{
"cell_type": "markdown",
"id": "9abb191f-7a96-4077-8c30-b9ddc225bd6b",
"metadata": {},
"source": [
"The `moderation_config` will now prevent any inputs and model outputs containing obscene words or sentences, bad intent, or Pii with entities other than SSN with score above threshold or 0.5 or 50%. If it finds Pii entities - SSN - it will redact them before allowing the call to proceed. "
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "6db5aa2a-9c00-42a0-8e24-c5ba39994f7d",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"chain = (\n",
" prompt \n",
" | amazon_comp_moderation \n",
" | {llm_chain.input_keys[0]: lambda x: x['output'] } \n",
" | llm_chain \n",
" | { \"input\": lambda x: x['text'] } \n",
" | amazon_comp_moderation \n",
")\n",
"\n",
"try:\n",
" response = chain.invoke({\"question\": \"My AnyCompany Financial Services, LLC credit card account 1111-0000-1111-0008 has 24$ due by July 31st. Can you give me some more samples?\"})\n",
"except Exception as e:\n",
" print(str(e))\n",
"else:\n",
" print(response['output'])"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "7fdfedf9-1a0a-4a9f-a6b0-d9ed2dbaa5ad",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"availableInstances": [
{
"_defaultOrder": 0,
"_isFastLaunch": true,
"category": "General purpose",
"gpuNum": 0,
"hideHardwareSpecs": false,
"memoryGiB": 4,
"name": "ml.t3.medium",
"vcpuNum": 2
},
{
"_defaultOrder": 1,
"_isFastLaunch": false,
"category": "General purpose",
"gpuNum": 0,
"hideHardwareSpecs": false,
"memoryGiB": 8,
"name": "ml.t3.large",
"vcpuNum": 2
},
{
"_defaultOrder": 2,
"_isFastLaunch": false,
"category": "General purpose",
"gpuNum": 0,
"hideHardwareSpecs": false,
"memoryGiB": 16,
"name": "ml.t3.xlarge",
"vcpuNum": 4
},
{
"_defaultOrder": 3,
"_isFastLaunch": false,
"category": "General purpose",
"gpuNum": 0,
"hideHardwareSpecs": false,
"memoryGiB": 32,
"name": "ml.t3.2xlarge",
"vcpuNum": 8
},
{
"_defaultOrder": 4,
"_isFastLaunch": true,
"category": "General purpose",
"gpuNum": 0,
"hideHardwareSpecs": false,
"memoryGiB": 8,
"name": "ml.m5.large",
"vcpuNum": 2
},
{
"_defaultOrder": 5,
"_isFastLaunch": false,
"category": "General purpose",
"gpuNum": 0,
"hideHardwareSpecs": false,
"memoryGiB": 16,
"name": "ml.m5.xlarge",
"vcpuNum": 4
},
{
"_defaultOrder": 6,
"_isFastLaunch": false,
"category": "General purpose",
"gpuNum": 0,
"hideHardwareSpecs": false,
"memoryGiB": 32,
"name": "ml.m5.2xlarge",
"vcpuNum": 8
},
{
"_defaultOrder": 7,
"_isFastLaunch": false,
"category": "General purpose",
"gpuNum": 0,
"hideHardwareSpecs": false,
"memoryGiB": 64,
"name": "ml.m5.4xlarge",
"vcpuNum": 16
},
{
"_defaultOrder": 8,
"_isFastLaunch": false,
"category": "General purpose",
"gpuNum": 0,
"hideHardwareSpecs": false,
"memoryGiB": 128,
"name": "ml.m5.8xlarge",
"vcpuNum": 32
},
{
"_defaultOrder": 9,
"_isFastLaunch": false,
"category": "General purpose",
"gpuNum": 0,
"hideHardwareSpecs": false,
"memoryGiB": 192,
"name": "ml.m5.12xlarge",
"vcpuNum": 48
},
{
"_defaultOrder": 10,
"_isFastLaunch": false,
"category": "General purpose",
"gpuNum": 0,
"hideHardwareSpecs": false,
"memoryGiB": 256,
"name": "ml.m5.16xlarge",
"vcpuNum": 64
},
{
"_defaultOrder": 11,
"_isFastLaunch": false,
"category": "General purpose",
"gpuNum": 0,
"hideHardwareSpecs": false,
"memoryGiB": 384,
"name": "ml.m5.24xlarge",
"vcpuNum": 96
},
{
"_defaultOrder": 12,
"_isFastLaunch": false,
"category": "General purpose",
"gpuNum": 0,
"hideHardwareSpecs": false,
"memoryGiB": 8,
"name": "ml.m5d.large",
"vcpuNum": 2
},
{
"_defaultOrder": 13,
"_isFastLaunch": false,
"category": "General purpose",
"gpuNum": 0,
"hideHardwareSpecs": false,
"memoryGiB": 16,
"name": "ml.m5d.xlarge",
"vcpuNum": 4
},
{
"_defaultOrder": 14,
"_isFastLaunch": false,
"category": "General purpose",
"gpuNum": 0,
"hideHardwareSpecs": false,
"memoryGiB": 32,
"name": "ml.m5d.2xlarge",
"vcpuNum": 8
},
{
"_defaultOrder": 15,
"_isFastLaunch": false,
"category": "General purpose",
"gpuNum": 0,
"hideHardwareSpecs": false,
"memoryGiB": 64,
"name": "ml.m5d.4xlarge",
"vcpuNum": 16
},
{
"_defaultOrder": 16,
"_isFastLaunch": false,
"category": "General purpose",
"gpuNum": 0,
"hideHardwareSpecs": false,
"memoryGiB": 128,
"name": "ml.m5d.8xlarge",
"vcpuNum": 32
},
{
"_defaultOrder": 17,
"_isFastLaunch": false,
"category": "General purpose",
"gpuNum": 0,
"hideHardwareSpecs": false,
"memoryGiB": 192,
"name": "ml.m5d.12xlarge",
"vcpuNum": 48
},
{
"_defaultOrder": 18,
"_isFastLaunch": false,
"category": "General purpose",
"gpuNum": 0,
"hideHardwareSpecs": false,
"memoryGiB": 256,
"name": "ml.m5d.16xlarge",
"vcpuNum": 64
},
{
"_defaultOrder": 19,
"_isFastLaunch": false,
"category": "General purpose",
"gpuNum": 0,
"hideHardwareSpecs": false,
"memoryGiB": 384,
"name": "ml.m5d.24xlarge",
"vcpuNum": 96
},
{
"_defaultOrder": 20,
"_isFastLaunch": false,
"category": "General purpose",
"gpuNum": 0,
"hideHardwareSpecs": true,
"memoryGiB": 0,
"name": "ml.geospatial.interactive",
"supportedImageNames": [
"sagemaker-geospatial-v1-0"
],
"vcpuNum": 0
},
{
"_defaultOrder": 21,
"_isFastLaunch": true,
"category": "Compute optimized",
"gpuNum": 0,
"hideHardwareSpecs": false,
"memoryGiB": 4,
"name": "ml.c5.large",
"vcpuNum": 2
},
{
"_defaultOrder": 22,
"_isFastLaunch": false,
"category": "Compute optimized",
"gpuNum": 0,
"hideHardwareSpecs": false,
"memoryGiB": 8,
"name": "ml.c5.xlarge",
"vcpuNum": 4
},
{
"_defaultOrder": 23,
"_isFastLaunch": false,
"category": "Compute optimized",
"gpuNum": 0,
"hideHardwareSpecs": false,
"memoryGiB": 16,
"name": "ml.c5.2xlarge",
"vcpuNum": 8
},
{
"_defaultOrder": 24,
"_isFastLaunch": false,
"category": "Compute optimized",
"gpuNum": 0,
"hideHardwareSpecs": false,
"memoryGiB": 32,
"name": "ml.c5.4xlarge",
"vcpuNum": 16
},
{
"_defaultOrder": 25,
"_isFastLaunch": false,
"category": "Compute optimized",
"gpuNum": 0,
"hideHardwareSpecs": false,
"memoryGiB": 72,
"name": "ml.c5.9xlarge",
"vcpuNum": 36
},
{
"_defaultOrder": 26,
"_isFastLaunch": false,
"category": "Compute optimized",
"gpuNum": 0,
"hideHardwareSpecs": false,
"memoryGiB": 96,
"name": "ml.c5.12xlarge",
"vcpuNum": 48
},
{
"_defaultOrder": 27,
"_isFastLaunch": false,
"category": "Compute optimized",
"gpuNum": 0,
"hideHardwareSpecs": false,
"memoryGiB": 144,
"name": "ml.c5.18xlarge",
"vcpuNum": 72
},
{
"_defaultOrder": 28,
"_isFastLaunch": false,
"category": "Compute optimized",
"gpuNum": 0,
"hideHardwareSpecs": false,
"memoryGiB": 192,
"name": "ml.c5.24xlarge",
"vcpuNum": 96
},
{
"_defaultOrder": 29,
"_isFastLaunch": true,
"category": "Accelerated computing",
"gpuNum": 1,
"hideHardwareSpecs": false,
"memoryGiB": 16,
"name": "ml.g4dn.xlarge",
"vcpuNum": 4
},
{
"_defaultOrder": 30,
"_isFastLaunch": false,
"category": "Accelerated computing",
"gpuNum": 1,
"hideHardwareSpecs": false,
"memoryGiB": 32,
"name": "ml.g4dn.2xlarge",
"vcpuNum": 8
},
{
"_defaultOrder": 31,
"_isFastLaunch": false,
"category": "Accelerated computing",
"gpuNum": 1,
"hideHardwareSpecs": false,
"memoryGiB": 64,
"name": "ml.g4dn.4xlarge",
"vcpuNum": 16
},
{
"_defaultOrder": 32,
"_isFastLaunch": false,
"category": "Accelerated computing",
"gpuNum": 1,
"hideHardwareSpecs": false,
"memoryGiB": 128,
"name": "ml.g4dn.8xlarge",
"vcpuNum": 32
},
{
"_defaultOrder": 33,
"_isFastLaunch": false,
"category": "Accelerated computing",
"gpuNum": 4,
"hideHardwareSpecs": false,
"memoryGiB": 192,
"name": "ml.g4dn.12xlarge",
"vcpuNum": 48
},
{
"_defaultOrder": 34,
"_isFastLaunch": false,
"category": "Accelerated computing",
"gpuNum": 1,
"hideHardwareSpecs": false,
"memoryGiB": 256,
"name": "ml.g4dn.16xlarge",
"vcpuNum": 64
},
{
"_defaultOrder": 35,
"_isFastLaunch": false,
"category": "Accelerated computing",
"gpuNum": 1,
"hideHardwareSpecs": false,
"memoryGiB": 61,
"name": "ml.p3.2xlarge",
"vcpuNum": 8
},
{
"_defaultOrder": 36,
"_isFastLaunch": false,
"category": "Accelerated computing",
"gpuNum": 4,
"hideHardwareSpecs": false,
"memoryGiB": 244,
"name": "ml.p3.8xlarge",
"vcpuNum": 32
},
{
"_defaultOrder": 37,
"_isFastLaunch": false,
"category": "Accelerated computing",
"gpuNum": 8,
"hideHardwareSpecs": false,
"memoryGiB": 488,
"name": "ml.p3.16xlarge",
"vcpuNum": 64
},
{
"_defaultOrder": 38,
"_isFastLaunch": false,
"category": "Accelerated computing",
"gpuNum": 8,
"hideHardwareSpecs": false,
"memoryGiB": 768,
"name": "ml.p3dn.24xlarge",
"vcpuNum": 96
},
{
"_defaultOrder": 39,
"_isFastLaunch": false,
"category": "Memory Optimized",
"gpuNum": 0,
"hideHardwareSpecs": false,
"memoryGiB": 16,
"name": "ml.r5.large",
"vcpuNum": 2
},
{
"_defaultOrder": 40,
"_isFastLaunch": false,
"category": "Memory Optimized",
"gpuNum": 0,
"hideHardwareSpecs": false,
"memoryGiB": 32,
"name": "ml.r5.xlarge",
"vcpuNum": 4
},
{
"_defaultOrder": 41,
"_isFastLaunch": false,
"category": "Memory Optimized",
"gpuNum": 0,
"hideHardwareSpecs": false,
"memoryGiB": 64,
"name": "ml.r5.2xlarge",
"vcpuNum": 8
},
{
"_defaultOrder": 42,
"_isFastLaunch": false,
"category": "Memory Optimized",
"gpuNum": 0,
"hideHardwareSpecs": false,
"memoryGiB": 128,
"name": "ml.r5.4xlarge",
"vcpuNum": 16
},
{
"_defaultOrder": 43,
"_isFastLaunch": false,
"category": "Memory Optimized",
"gpuNum": 0,
"hideHardwareSpecs": false,
"memoryGiB": 256,
"name": "ml.r5.8xlarge",
"vcpuNum": 32
},
{
"_defaultOrder": 44,
"_isFastLaunch": false,
"category": "Memory Optimized",
"gpuNum": 0,
"hideHardwareSpecs": false,
"memoryGiB": 384,
"name": "ml.r5.12xlarge",
"vcpuNum": 48
},
{
"_defaultOrder": 45,
"_isFastLaunch": false,
"category": "Memory Optimized",
"gpuNum": 0,
"hideHardwareSpecs": false,
"memoryGiB": 512,
"name": "ml.r5.16xlarge",
"vcpuNum": 64
},
{
"_defaultOrder": 46,
"_isFastLaunch": false,
"category": "Memory Optimized",
"gpuNum": 0,
"hideHardwareSpecs": false,
"memoryGiB": 768,
"name": "ml.r5.24xlarge",
"vcpuNum": 96
},
{
"_defaultOrder": 47,
"_isFastLaunch": false,
"category": "Accelerated computing",
"gpuNum": 1,
"hideHardwareSpecs": false,
"memoryGiB": 16,
"name": "ml.g5.xlarge",
"vcpuNum": 4
},
{
"_defaultOrder": 48,
"_isFastLaunch": false,
"category": "Accelerated computing",
"gpuNum": 1,
"hideHardwareSpecs": false,
"memoryGiB": 32,
"name": "ml.g5.2xlarge",
"vcpuNum": 8
},
{
"_defaultOrder": 49,
"_isFastLaunch": false,
"category": "Accelerated computing",
"gpuNum": 1,
"hideHardwareSpecs": false,
"memoryGiB": 64,
"name": "ml.g5.4xlarge",
"vcpuNum": 16
},
{
"_defaultOrder": 50,
"_isFastLaunch": false,
"category": "Accelerated computing",
"gpuNum": 1,
"hideHardwareSpecs": false,
"memoryGiB": 128,
"name": "ml.g5.8xlarge",
"vcpuNum": 32
},
{
"_defaultOrder": 51,
"_isFastLaunch": false,
"category": "Accelerated computing",
"gpuNum": 1,
"hideHardwareSpecs": false,
"memoryGiB": 256,
"name": "ml.g5.16xlarge",
"vcpuNum": 64
},
{
"_defaultOrder": 52,
"_isFastLaunch": false,
"category": "Accelerated computing",
"gpuNum": 4,
"hideHardwareSpecs": false,
"memoryGiB": 192,
"name": "ml.g5.12xlarge",
"vcpuNum": 48
},
{
"_defaultOrder": 53,
"_isFastLaunch": false,
"category": "Accelerated computing",
"gpuNum": 4,
"hideHardwareSpecs": false,
"memoryGiB": 384,
"name": "ml.g5.24xlarge",
"vcpuNum": 96
},
{
"_defaultOrder": 54,
"_isFastLaunch": false,
"category": "Accelerated computing",
"gpuNum": 8,
"hideHardwareSpecs": false,
"memoryGiB": 768,
"name": "ml.g5.48xlarge",
"vcpuNum": 192
},
{
"_defaultOrder": 55,
"_isFastLaunch": false,
"category": "Accelerated computing",
"gpuNum": 8,
"hideHardwareSpecs": false,
"memoryGiB": 1152,
"name": "ml.p4d.24xlarge",
"vcpuNum": 96
},
{
"_defaultOrder": 56,
"_isFastLaunch": false,
"category": "Accelerated computing",
"gpuNum": 8,
"hideHardwareSpecs": false,
"memoryGiB": 1152,
"name": "ml.p4de.24xlarge",
"vcpuNum": 96
}
],
"instance_type": "ml.t3.medium",
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.0"
}
},
"nbformat": 4,
"nbformat_minor": 5
}