reorder eval docs (#11738)

cc @leo-gan
Bagatur authored on 2023-10-12 15:46:55 -07:00; committed by GitHub
parent 35965df20d
commit 01b7b46908
3 changed files with 919 additions and 890 deletions

docs/docs/guides/evaluation/comparison/custom.ipynb

@@ -1,281 +1,291 @@
 {
  "cells": [
+  {
+   "cell_type": "raw",
+   "id": "5046d96f-d578-4d5b-9a7e-43b28cafe61d",
+   "metadata": {},
+   "source": [
+    "---\n",
+    "sidebar_position: 2\n",
+    "title: Custom pairwise evaluator\n",
+    "---"
+   ]
+  },
   {
    "cell_type": "markdown",
    "id": "657d2c8c-54b4-42a3-9f02-bdefa0ed6728",
    "metadata": {},
    "source": [
-    "# Custom Pairwise Evaluator\n",
     "[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/langchain-ai/langchain/blob/master/docs/docs/guides/evaluation/comparison/custom.ipynb)\n",
     "\n",
     "You can make your own pairwise string evaluators by inheriting from `PairwiseStringEvaluator` class and overwriting the `_evaluate_string_pairs` method (and the `_aevaluate_string_pairs` method if you want to use the evaluator asynchronously).\n",
     "\n",
     "In this example, you will make a simple custom evaluator that just returns whether the first prediction has more whitespace tokenized 'words' than the second.\n",
     "\n",
     "You can check out the reference docs for the [PairwiseStringEvaluator interface](https://api.python.langchain.com/en/latest/evaluation/langchain.evaluation.schema.PairwiseStringEvaluator.html#langchain.evaluation.schema.PairwiseStringEvaluator) for more info.\n"
    ]
   },
 [… intervening cells unchanged: the LengthComparisonPairwiseEvaluator example and an LLM-based CustomPreferenceEvaluator built on ChatAnthropic; condensed after the diff …]
    "pygments_lexer": "ipython3",
-   "version": "3.11.2"
+   "version": "3.9.1"
   }
  },
  "nbformat": 4,
  "nbformat_minor": 5
 }
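The cells summarized above are unchanged by this commit. Condensed from the notebook's own code, the running example subclasses `PairwiseStringEvaluator` and overrides `_evaluate_string_pairs`:

from typing import Any, Optional

from langchain.evaluation import PairwiseStringEvaluator


class LengthComparisonPairwiseEvaluator(PairwiseStringEvaluator):
    """Prefer whichever prediction has more whitespace-tokenized words."""

    def _evaluate_string_pairs(
        self,
        *,
        prediction: str,
        prediction_b: str,
        reference: Optional[str] = None,
        input: Optional[str] = None,
        **kwargs: Any,
    ) -> dict:
        # Score 1 if the first prediction has more words, else 0.
        score = int(len(prediction.split()) > len(prediction_b.split()))
        return {"score": score}


evaluator = LengthComparisonPairwiseEvaluator()
evaluator.evaluate_string_pairs(
    prediction="The quick brown fox jumped over the lazy dog.",
    prediction_b="The quick brown fox jumped over the dog.",
)  # -> {'score': 1}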

docs/docs/guides/evaluation/comparison/pairwise_embedding_distance.ipynb

@@ -1,233 +1,242 @@
 {
  "cells": [
+  {
+   "cell_type": "raw",
+   "metadata": {},
+   "source": [
+    "---\n",
+    "sidebar_position: 1\n",
+    "title: Pairwise embedding distance\n",
+    "---"
+   ]
+  },
   {
    "attachments": {},
    "cell_type": "markdown",
    "metadata": {
     "tags": []
    },
    "source": [
-    "# Pairwise Embedding Distance \n",
     "[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/langchain-ai/langchain/blob/master/docs/docs/guides/evaluation/comparison/pairwise_embedding_distance.ipynb)\n",
     "\n",
     "One way to measure the similarity (or dissimilarity) between two predictions on a shared or similar input is to embed the predictions and compute a vector distance between the two embeddings.<a name=\"cite_ref-1\"></a>[<sup>[1]</sup>](#cite_note-1)\n",
     "\n",
     "You can load the `pairwise_embedding_distance` evaluator to do this.\n",
     "\n",
     "**Note:** This returns a **distance** score, meaning that the lower the number, the **more** similar the outputs are, according to their embedded representation.\n",
     "\n",
     "Check out the reference docs for the [PairwiseEmbeddingDistanceEvalChain](https://api.python.langchain.com/en/latest/evaluation/langchain.evaluation.embedding_distance.base.PairwiseEmbeddingDistanceEvalChain.html#langchain.evaluation.embedding_distance.base.PairwiseEmbeddingDistanceEvalChain) for more info."
    ]
   },
 [… intervening cells unchanged: loading the evaluator, comparing predictions, selecting a distance metric via EmbeddingDistance, and swapping in HuggingFaceEmbeddings; condensed after the diff …]
    "pygments_lexer": "ipython3",
-   "version": "3.11.2"
+   "version": "3.9.1"
   }
  },
  "nbformat": 4,
  "nbformat_minor": 4
 }
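The unchanged cells show the evaluator in use. Condensed, the notebook loads it (OpenAI embeddings with cosine distance by default; lower scores mean more similar outputs) and then picks a different metric:

from langchain.evaluation import EmbeddingDistance, load_evaluator

evaluator = load_evaluator("pairwise_embedding_distance")
evaluator.evaluate_string_pairs(
    prediction="Seattle is hot in June", prediction_b="Seattle is cool in June."
)  # -> {'score': 0.0966...}

# Any metric in list(EmbeddingDistance) works, and a different embedding
# model can be supplied via the `embeddings` argument when loading.
evaluator = load_evaluator(
    "pairwise_embedding_distance", distance_metric=EmbeddingDistance.EUCLIDEAN
)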

docs/docs/guides/evaluation/comparison/pairwise_string.ipynb

@@ -1,382 +1,392 @@
 {
  "cells": [
+  {
+   "cell_type": "raw",
+   "id": "dcfcf124-78fe-4d67-85a4-cfd3409a1ff6",
+   "metadata": {},
+   "source": [
+    "---\n",
+    "sidebar_position: 0\n",
+    "title: Pairwise string comparison\n",
+    "---"
+   ]
+  },
   {
    "cell_type": "markdown",
    "id": "2da95378",
    "metadata": {},
    "source": [
-    "# Pairwise String Comparison\n",
     "[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/langchain-ai/langchain/blob/master/docs/docs/guides/evaluation/comparison/pairwise_string.ipynb)\n",
     "\n",
     "Often you will want to compare predictions of an LLM, Chain, or Agent for a given input. The `StringComparison` evaluators facilitate this so you can answer questions like:\n",
     "\n",
     "- Which LLM or prompt produces a preferred output for a given question?\n",
     "- Which examples should I include for few-shot example selection?\n",
     "- Which output is better to include for fine-tuning?\n",
     "\n",
     "The simplest and often most reliable automated way to choose a preferred prediction for a given input is to use the `pairwise_string` evaluator.\n",
     "\n",
     "Check out the reference docs for the [PairwiseStringEvalChain](https://api.python.langchain.com/en/latest/evaluation/langchain.evaluation.comparison.eval_chain.PairwiseStringEvalChain.html#langchain.evaluation.comparison.eval_chain.PairwiseStringEvalChain) for more info."
    ]
   },
 [… intervening cells unchanged: the labeled_pairwise_string and pairwise_string examples, the Methods and Without References sections, custom criteria, customizing the LLM with ChatAnthropic, and a custom evaluation prompt; condensed after the diff …]
    "pygments_lexer": "ipython3",
-   "version": "3.11.2"
+   "version": "3.9.1"
   }
  },
  "nbformat": 4,
  "nbformat_minor": 5
 }
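Condensed from the unchanged cells, the basic labeled usage (the `pairwise_string` variant works the same way, just without a `reference`):

from langchain.evaluation import load_evaluator

evaluator = load_evaluator("labeled_pairwise_string")
evaluator.evaluate_string_pairs(
    prediction="there are three dogs",
    prediction_b="4",
    input="how many dogs are in the park?",
    reference="four",
)  # -> {'value': 'B', 'score': 0, 'reasoning': '...'}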