mirror of
https://github.com/hwchase17/langchain
synced 2024-11-18 09:25:54 +00:00
Delete Old Evals Examples (#8252)
Still retain:
- Comparison Examples
- Data + QA walkthrough
- QA (but really minimize it)
This commit is contained in:
parent
db9d5b213a
commit
9eb7e6e27f
File diff suppressed because one or more lines are too long
@ -3,6 +3,22 @@ sidebar_position: 3
|
||||
---
|
||||
# Comparison Evaluators
|
||||
|
||||
Comparison evaluators in LangChain help compare the outputs of two different chains or LLMs. These evaluators are helpful for comparative analyses, such as A/B testing between two language models, or comparing different versions of the same model. They can also be useful for things like generating preference scores for AI-assisted reinforcement learning.
|
||||
|
||||
These evaluators inherit from the `PairwiseStringEvaluator` class, providing a comparison interface for two strings - typically, the outputs from two different prompts or models, or two versions of the same model. In essence, a comparison evaluator performs an evaluation on a pair of strings and returns a dictionary containing the evaluation score and other relevant details.
|
||||
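For illustration, here is a minimal usage sketch (not part of this diff) showing how a built-in pairwise evaluator could be loaded and used to compare two candidate outputs. It mirrors the notebook examples later in this PR and assumes an OpenAI API key is configured:

```python
from langchain.evaluation import load_evaluator

# Load the reference-labeled pairwise comparison evaluator.
evaluator = load_evaluator("labeled_pairwise_string")

result = evaluator.evaluate_string_pairs(
    prediction="There are three dogs in the park.",
    prediction_b="4",
    input="How many dogs are in the park?",
    reference="four",
)
print(result)  # e.g. {'reasoning': '...', 'value': 'B', 'score': 0}
```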
|
||||
To create a custom comparison evaluator, inherit from the `PairwiseStringEvaluator` class and override the `_evaluate_string_pairs` method. If you require asynchronous evaluation, also override the `_aevaluate_string_pairs` method.
|
||||
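A hedged sketch of what such a subclass might look like (the class name and preference logic below are illustrative assumptions, not part of this diff):

```python
from typing import Any, Optional

from langchain.evaluation import PairwiseStringEvaluator


class LengthPreferenceEvaluator(PairwiseStringEvaluator):
    """Toy comparison evaluator that prefers the shorter prediction."""

    def _evaluate_string_pairs(
        self,
        *,
        prediction: str,
        prediction_b: str,
        reference: Optional[str] = None,
        input: Optional[str] = None,
        **kwargs: Any,
    ) -> dict:
        # Prefer whichever of the two predictions is shorter.
        preferred = "A" if len(prediction) <= len(prediction_b) else "B"
        return {
            "value": preferred,
            "score": 1 if preferred == "A" else 0,
            "reasoning": "Shorter responses are preferred in this toy example.",
        }
```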
|
||||
Here's a summary of the key methods and properties of a comparison evaluator:
|
||||
|
||||
- `evaluate_string_pairs`: Evaluate the output string pairs. Override this method when creating custom evaluators.
|
||||
- `aevaluate_string_pairs`: Asynchronously evaluate the output string pairs. Override this method for asynchronous evaluation.
|
||||
- `requires_input`: This property indicates whether this evaluator requires an input string.
|
||||
- `requires_reference`: This property specifies whether this evaluator requires a reference label.
|
||||
|
||||
Detailed information about creating custom evaluators and the available built-in comparison evaluators is provided in the following sections.
|
||||
|
||||
import DocCardList from "@theme/DocCardList";
|
||||
|
||||
<DocCardList />
|
||||
|
||||
|
@ -6,23 +6,26 @@ import DocCardList from "@theme/DocCardList";
|
||||
|
||||
# Evaluation
|
||||
|
||||
Language models can be unpredictable. This makes it challenging to ship reliable applications to production, where repeatable, useful outcomes across diverse inputs are a minimum requirement. Tests help demonstrate that each component in an LLM application can produce the required or expected functionality. These tests also safeguard against regressions while you improve interconnected pieces of an integrated system. However, measuring the quality of generated text can be challenging. It can be hard to agree on the right set of metrics for your application, and it can be difficult to translate those into better performance. Furthermore, it's common to lack sufficient evaluation data to adequately test the range of inputs and expected outputs for each component when you're just getting started. The LangChain community is building open source tools and guides to help address these challenges.
|
||||
Building applications with language models involves many moving parts. One of the most critical components is ensuring that the outcomes produced by your models are reliable and useful across a broad array of inputs, and that they work well with your application's other software components. Ensuring reliability usually boils down to some combination of application design, testing & evaluation, and runtime checks.
|
||||
|
||||
LangChain exposes different types of evaluators for common types of evaluation. Each type has off-the-shelf implementations you can use to get started, as well as an
|
||||
extensible API so you can create your own or contribute improvements for everyone to use. The following sections have example notebooks for you to get started.
|
||||
The guides in this section review the APIs and functionality LangChain provides to help you better evaluate your applications. Evaluation and testing are both critical when thinking about deploying LLM applications, since production environments require repeatable and useful outcomes.
|
||||
|
||||
- [String Evaluators](/docs/guides/evaluation/string/): Evaluate the predicted string for a given input, usually against a reference string
|
||||
- [Trajectory Evaluators](/docs/guides/evaluation/trajectory/): Evaluate the whole trajectory of agent actions
|
||||
- [Comparison Evaluators](/docs/guides/evaluation/comparison/): Compare predictions from two runs on a common input
|
||||
LangChain offers various types of evaluators to help you measure performance and integrity on diverse data, and we hope to encourage the community to create and share other useful evaluators so everyone can improve. These docs will introduce the evaluator types, how to use them, and provide some examples of their use in real-world scenarios.
|
||||
|
||||
Each evaluator type in LangChain comes with ready-to-use implementations and an extensible API that allows for customization according to your unique requirements. Here are some of the types of evaluators we offer:
|
||||
|
||||
This section also provides some additional examples of how you could use these evaluators for different scenarios or apply them to different chain implementations in the LangChain library. Some examples include:
|
||||
- [String Evaluators](/docs/guides/evaluation/string/): These evaluators assess the predicted string for a given input, usually comparing it against a reference string.
|
||||
- [Trajectory Evaluators](/docs/guides/evaluation/trajectory/): These are used to evaluate the entire trajectory of agent actions.
|
||||
- [Comparison Evaluators](/docs/guides/evaluation/comparison/): These evaluators are designed to compare predictions from two runs on a common input.
|
||||
|
||||
- [Preference Scoring Chain Outputs](/docs/guides/evaluation/examples/comparisons): An example using a comparison evaluator on different models or prompts to select statistically significant differences in aggregate preference scores
|
||||
These evaluators can be used across various scenarios and can be applied to different chain and LLM implementations in the LangChain library.
|
||||
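As a quick illustration (a sketch, not taken from this PR), every evaluator follows the same load-then-evaluate pattern; the example below assumes an OpenAI API key is available for the LLM-graded "qa" evaluator:

```python
from langchain.evaluation import load_evaluator

evaluator = load_evaluator("qa")  # an LLM-graded string evaluator
result = evaluator.evaluate_strings(
    input="What is the capital of France?",
    prediction="Paris is the capital of France.",
    reference="Paris",
)
print(result)
```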
|
||||
We are also working to share guides and cookbooks that demonstrate how to use these evaluators in real-world scenarios, such as:
|
||||
|
||||
- [Chain Comparisons](/docs/guides/evaluation/examples/comparisons): This example uses a comparison evaluator to predict the preferred output. It reviews ways to measure confidence intervals to select statistically significant differences in aggregate preference scores across different models or prompts.
|
||||
|
||||
## Reference Docs
|
||||
|
||||
For detailed information of the available evaluators, including how to instantiate, configure, and customize them. Check out the [reference documentation](https://api.python.langchain.com/en/latest/api_reference.html#module-langchain.evaluation) directly.
|
||||
For detailed information on the available evaluators, including how to instantiate, configure, and customize them, check out the [reference documentation](https://api.python.langchain.com/en/latest/api_reference.html#module-langchain.evaluation) directly.
|
||||
|
||||
<DocCardList />
|
||||
|
@ -3,6 +3,25 @@ sidebar_position: 2
|
||||
---
|
||||
# String Evaluators
|
||||
|
||||
A string evaluator is a component within LangChain designed to assess the performance of a language model by comparing its generated outputs (predictions) to a reference string or an input. This comparison is a crucial step in the evaluation of language models, providing a measure of the accuracy or quality of the generated text.
|
||||
|
||||
In practice, string evaluators are typically used to evaluate a predicted string against a given input, such as a question or a prompt. Often, a reference label or context string is provided to define what a correct or ideal response would look like. These evaluators can be customized to tailor the evaluation process to fit your application's specific requirements.
|
||||
|
||||
To create a custom string evaluator, inherit from the `StringEvaluator` class and implement the `_evaluate_strings` method. If you require asynchronous support, also implement the `_aevaluate_strings` method.
|
||||
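A minimal sketch of such a subclass (illustrative only; the class name and scoring rule are assumptions, not part of this diff):

```python
from typing import Any, Optional

from langchain.evaluation import StringEvaluator


class ExactMatchEvaluator(StringEvaluator):
    """Toy string evaluator that checks for an exact match with the reference."""

    @property
    def requires_reference(self) -> bool:
        return True

    def _evaluate_strings(
        self,
        *,
        prediction: str,
        reference: Optional[str] = None,
        input: Optional[str] = None,
        **kwargs: Any,
    ) -> dict:
        # Score 1 if the prediction exactly matches the reference, else 0.
        return {"score": int(prediction.strip() == (reference or "").strip())}
```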
|
||||
Here's a summary of the key attributes and methods associated with a string evaluator:
|
||||
|
||||
- `evaluation_name`: Specifies the name of the evaluation.
|
||||
- `requires_input`: Boolean attribute that indicates whether the evaluator requires an input string. If True, the evaluator will raise an error when the input isn't provided. If False, a warning will be logged if an input _is_ provided, indicating that it will not be considered in the evaluation.
|
||||
- `requires_reference`: Boolean attribute specifying whether the evaluator requires a reference label. If True, the evaluator will raise an error when the reference isn't provided. If False, a warning will be logged if a reference _is_ provided, indicating that it will not be considered in the evaluation.
|
||||
|
||||
String evaluators also implement the following methods:
|
||||
|
||||
- `aevaluate_strings`: Asynchronously evaluates the output of the Chain or Language Model, with support for optional input and label.
|
||||
- `evaluate_strings`: Synchronously evaluates the output of the Chain or Language Model, with support for optional input and label.
|
||||
|
||||
The following sections provide detailed information on available string evaluator implementations as well as how to create a custom string evaluator.
|
||||
|
||||
import DocCardList from "@theme/DocCardList";
|
||||
|
||||
<DocCardList />
|
@ -3,6 +3,26 @@ sidebar_position: 4
|
||||
---
|
||||
# Trajectory Evaluators
|
||||
|
||||
Trajectory Evaluators in LangChain provide a more holistic approach to evaluating an agent. These evaluators assess the full sequence of actions taken by an agent and their corresponding responses, which we refer to as the "trajectory". This allows you to better measure an agent's effectiveness and capabilities.
|
||||
|
||||
A Trajectory Evaluator implements the `AgentTrajectoryEvaluator` interface, which requires two main methods:
|
||||
|
||||
- `evaluate_agent_trajectory`: This method synchronously evaluates an agent's trajectory.
|
||||
- `aevaluate_agent_trajectory`: This asynchronous counterpart allows evaluations to be run in parallel for efficiency.
|
||||
|
||||
Both methods accept three main parameters:
|
||||
|
||||
- `input`: The initial input given to the agent.
|
||||
- `prediction`: The final predicted response from the agent.
|
||||
- `agent_trajectory`: The intermediate steps taken by the agent, given as a list of tuples.
|
||||
|
||||
These methods return a dictionary. It is recommended that custom implementations return a `score` (a float indicating the effectiveness of the agent) and `reasoning` (a string explaining the reasoning behind the score).
|
||||
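A hedged sketch of a custom implementation (the class name and scoring below are illustrative assumptions, not part of this diff):

```python
from typing import Any, Optional, Sequence, Tuple

from langchain.evaluation import AgentTrajectoryEvaluator
from langchain.schema import AgentAction


class StepCountEvaluator(AgentTrajectoryEvaluator):
    """Toy trajectory evaluator that rewards shorter trajectories."""

    def _evaluate_agent_trajectory(
        self,
        *,
        prediction: str,
        input: str,
        agent_trajectory: Sequence[Tuple[AgentAction, str]],
        reference: Optional[str] = None,
        **kwargs: Any,
    ) -> dict:
        # Fewer intermediate steps -> higher score in this toy example.
        steps = len(agent_trajectory)
        return {
            "score": 1.0 / (1 + steps),
            "reasoning": f"The agent used {steps} intermediate step(s).",
        }
```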
|
||||
You can capture an agent's trajectory by initializing the agent with the `return_intermediate_steps=True` parameter. This lets you collect all intermediate steps without relying on special callbacks.
|
||||
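For example (a usage sketch assuming an OpenAI key and the `llm-math` tool are available; not content from this diff):

```python
from langchain.agents import AgentType, initialize_agent, load_tools
from langchain.llms import OpenAI

llm = OpenAI(temperature=0)
tools = load_tools(["llm-math"], llm=llm)

# return_intermediate_steps=True makes the agent return its trajectory.
agent = initialize_agent(
    tools,
    llm,
    agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
    return_intermediate_steps=True,
)

result = agent("What is 3 raised to the 5th power?")
# result["intermediate_steps"] is the agent_trajectory to pass to an evaluator.
```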
|
||||
For a deeper dive into the implementation and use of Trajectory Evaluators, refer to the sections below.
|
||||
|
||||
import DocCardList from "@theme/DocCardList";
|
||||
|
||||
<DocCardList />
|
||||
|
||||
|
@ -43,7 +43,7 @@
|
||||
{
|
||||
"data": {
|
||||
"text/plain": [
|
||||
"{'reasoning': 'Response A is incorrect as it states there are three dogs in the park, which contradicts the reference answer of four. Response B, on the other hand, is accurate as it matches the reference answer. Although Response B is not as detailed or elaborate as Response A, it is more important that the response is accurate. \\n\\nFinal Decision: [[B]]\\n',\n",
|
||||
"{'reasoning': 'Both responses are relevant to the question asked, as they both provide a numerical answer to the question about the number of dogs in the park. However, Response A is incorrect according to the reference answer, which states that there are four dogs. Response B, on the other hand, is correct as it matches the reference answer. Neither response demonstrates depth of thought, as they both simply provide a numerical answer without any additional information or context. \\n\\nBased on these criteria, Response B is the better response.\\n',\n",
|
||||
" 'value': 'B',\n",
|
||||
" 'score': 0}"
|
||||
]
|
||||
@ -62,6 +62,27 @@
|
||||
")"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "7491d2e6-4e77-4b17-be6b-7da966785c1d",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Methods\n",
|
||||
"\n",
|
||||
"\n",
|
||||
"The pairwise string evaluator can be called using [evaluate_string_pairs](https://api.python.langchain.com/en/latest/evaluation/langchain.evaluation.comparison.eval_chain.PairwiseStringEvalChain.html#langchain.evaluation.comparison.eval_chain.PairwiseStringEvalChain.evaluate_string_pairs) (or async [aevaluate_string_pairs](https://api.python.langchain.com/en/latest/evaluation/langchain.evaluation.comparison.eval_chain.PairwiseStringEvalChain.html#langchain.evaluation.comparison.eval_chain.PairwiseStringEvalChain.aevaluate_string_pairs)) methods, which accept:\n",
|
||||
"\n",
|
||||
"- prediction (str) – The predicted response of the first model, chain, or prompt.\n",
|
||||
"- prediction_b (str) – The predicted response of the second model, chain, or prompt.\n",
|
||||
"- input (str) – The input question, prompt, or other text.\n",
|
||||
"- reference (str) – (Only for the labeled_pairwise_string variant) The reference response.\n",
|
||||
"\n",
|
||||
"They return a dictionary with the following values:\n",
|
||||
"- value: 'A' or 'B', indicating whether `prediction` or `prediction_b` is preferred, respectively\n",
|
||||
"- score: Integer 0 or 1 mapped from the 'value', where a score of 1 would mean that the first `prediction` is preferred, and a score of 0 would mean `prediction_b` is preferred.\n",
|
||||
"- reasoning: String \"chain of thought reasoning\" from the LLM generated prior to creating the score"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "ed353b93-be71-4479-b9c0-8c97814c2e58",
|
||||
@ -99,7 +120,7 @@
|
||||
{
|
||||
"data": {
|
||||
"text/plain": [
|
||||
"{'reasoning': \"Response A is accurate but lacks depth and detail. It simply states that addition is a mathematical operation without explaining what it does or how it works. \\n\\nResponse B, on the other hand, provides a more detailed explanation. It not only identifies addition as a mathematical operation, but also explains that it involves adding two numbers to create a third number, the 'sum'. This response is more helpful and informative, providing a clearer understanding of what addition is.\\n\\nTherefore, the better response is B.\\n\",\n",
|
||||
"{'reasoning': 'Both responses are correct and relevant to the question. However, Response B is more helpful and insightful as it provides a more detailed explanation of what addition is. Response A is correct but lacks depth as it does not explain what the operation of addition entails. \\n\\nFinal Decision: [[B]]',\n",
|
||||
" 'value': 'B',\n",
|
||||
" 'score': 0}"
|
||||
]
|
||||
@ -117,6 +138,74 @@
|
||||
")"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "4a09b21d-9851-47e8-93d3-90044b2945b0",
|
||||
"metadata": {
|
||||
"tags": []
|
||||
},
|
||||
"source": [
|
||||
"## Defining the Criteria\n",
|
||||
"\n",
|
||||
"By default, the LLM is instructed to select the 'preferred' response based on helpfulness, relevance, correctness, and depth of thought. You can customize the criteria by passing in a `criteria` argument, where the criteria could take any of the following forms:\n",
|
||||
"- [`Criteria`](https://api.python.langchain.com/en/latest/evaluation/langchain.evaluation.criteria.eval_chain.Criteria.html#langchain.evaluation.criteria.eval_chain.Criteria) enum or its string value - to use one of the default criteria and their descriptions\n",
|
||||
"- [Constitutional principal](https://api.python.langchain.com/en/latest/chains/langchain.chains.constitutional_ai.models.ConstitutionalPrinciple.html#langchain.chains.constitutional_ai.models.ConstitutionalPrinciple) - use one any of the constitutional principles defined in langchain\n",
|
||||
"- Dictionary: a list of custom criteria, where the key is the name of the criteria, and the value is the description.\n",
|
||||
"- A list of criteria or constitutional principles - to combine multiple criteria in one.\n",
|
||||
"\n",
|
||||
"Below is an example for determining preferred writing responses based on a custom style."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 5,
|
||||
"id": "8539e7d9-f7b0-4d32-9c45-593a7915c093",
|
||||
"metadata": {
|
||||
"tags": []
|
||||
},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"custom_criteria = {\n",
|
||||
" \"simplicity\": \"Is the language straightforward and unpretentious?\",\n",
|
||||
" \"clarity\": \"Are the sentences clear and easy to understand?\",\n",
|
||||
" \"precision\": \"Is the writing precise, with no unnecessary words or details?\",\n",
|
||||
" \"truthfulness\": \"Does the writing feel honest and sincere?\",\n",
|
||||
" \"subtext\": \"Does the writing suggest deeper meanings or themes?\",\n",
|
||||
"}\n",
|
||||
"evaluator = load_evaluator(\"pairwise_string\", criteria=custom_criteria)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 6,
|
||||
"id": "fec7bde8-fbdc-4730-8366-9d90d033c181",
|
||||
"metadata": {
|
||||
"tags": []
|
||||
},
|
||||
"outputs": [
|
||||
{
|
||||
"data": {
|
||||
"text/plain": [
|
||||
"{'reasoning': 'Response A is simple, clear, and precise. It uses straightforward language to convey a deep and sincere message about families. The metaphor of joy and sorrow as music is effective and easy to understand.\\n\\nResponse B, on the other hand, is more complex and less clear. The language is more pretentious, with words like \"domicile,\" \"resounds,\" \"abode,\" \"dissonant,\" and \"elegy.\" While it conveys a similar message to Response A, it does so in a more convoluted way. The precision is also lacking due to the use of unnecessary words and details.\\n\\nBoth responses suggest deeper meanings or themes about the shared joy and unique sorrow in families. However, Response A does so in a more effective and accessible way.\\n\\nTherefore, the better response is [[A]].',\n",
|
||||
" 'value': 'A',\n",
|
||||
" 'score': 1}"
|
||||
]
|
||||
},
|
||||
"execution_count": 6,
|
||||
"metadata": {},
|
||||
"output_type": "execute_result"
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"evaluator.evaluate_string_pairs(\n",
|
||||
" prediction=\"Every cheerful household shares a similar rhythm of joy; but sorrow, in each household, plays a unique, haunting melody.\",\n",
|
||||
" prediction_b=\"Where one finds a symphony of joy, every domicile of happiness resounds in harmonious,\"\n",
|
||||
" \" identical notes; yet, every abode of despair conducts a dissonant orchestra, each\"\n",
|
||||
" \" playing an elegy of grief that is peculiar and profound to its own existence.\",\n",
|
||||
" input=\"Write some prose about families.\",\n",
|
||||
")"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "a25b60b2-627c-408a-be4b-a2e5cbc10726",
|
||||
@ -129,7 +218,7 @@
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 5,
|
||||
"execution_count": 7,
|
||||
"id": "de84a958-1330-482b-b950-68bcf23f9e35",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
@ -143,7 +232,7 @@
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 6,
|
||||
"execution_count": 8,
|
||||
"id": "e162153f-d50a-4a7c-a033-019dabbc954c",
|
||||
"metadata": {
|
||||
"tags": []
|
||||
@ -152,12 +241,12 @@
|
||||
{
|
||||
"data": {
|
||||
"text/plain": [
|
||||
"{'reasoning': 'Here is my assessment:\\n\\nResponse B is better because it directly answers the question by stating the number \"4\", which matches the ground truth reference answer. Response A provides an incorrect number of dogs, stating there are three dogs when the reference says there are four. \\n\\nResponse B is more helpful, relevant, accurate and provides the right level of detail by simply stating the number that was asked for. Response A provides an inaccurate number, so is less helpful and accurate.\\n\\nIn summary, Response B better followed the instructions and answered the question correctly per the reference answer.\\n\\n[[B]]',\n",
|
||||
"{'reasoning': 'Here is my assessment:\\n\\nResponse B is more helpful, insightful, and accurate than Response A. Response B simply states \"4\", which directly answers the question by providing the exact number of dogs mentioned in the reference answer. In contrast, Response A states \"there are three dogs\", which is incorrect according to the reference answer. \\n\\nIn terms of helpfulness, Response B gives the precise number while Response A provides an inaccurate guess. For relevance, both refer to dogs in the park from the question. However, Response B is more correct and factual based on the reference answer. Response A shows some attempt at reasoning but is ultimately incorrect. Response B requires less depth of thought to simply state the factual number.\\n\\nIn summary, Response B is superior in terms of helpfulness, relevance, correctness, and depth. My final decision is: [[B]]\\n',\n",
|
||||
" 'value': 'B',\n",
|
||||
" 'score': 0}"
|
||||
]
|
||||
},
|
||||
"execution_count": 6,
|
||||
"execution_count": 8,
|
||||
"metadata": {},
|
||||
"output_type": "execute_result"
|
||||
}
|
||||
@ -185,7 +274,7 @@
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 7,
|
||||
"execution_count": 9,
|
||||
"id": "fb817efa-3a4d-439d-af8c-773b89d97ec9",
|
||||
"metadata": {
|
||||
"tags": []
|
||||
@ -195,7 +284,9 @@
|
||||
"from langchain.prompts import PromptTemplate\n",
|
||||
"\n",
|
||||
"prompt_template = PromptTemplate.from_template(\n",
|
||||
" \"\"\"Given the input context, which is most similar to the reference label: A or B?\n",
|
||||
" \"\"\"Given the input context, which do you prefer: A or B?\n",
|
||||
"Evaluate based on the following criteria:\n",
|
||||
"{criteria}\n",
|
||||
"Reason step by step and finally, respond with either [[A]] or [[B]] on its own line.\n",
|
||||
"\n",
|
||||
"DATA\n",
|
||||
@ -216,7 +307,7 @@
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 8,
|
||||
"execution_count": 10,
|
||||
"id": "d40aa4f0-cfd5-4cb4-83c8-8d2300a04c2f",
|
||||
"metadata": {
|
||||
"tags": []
|
||||
@ -226,7 +317,7 @@
|
||||
"name": "stdout",
|
||||
"output_type": "stream",
|
||||
"text": [
|
||||
"input_variables=['input', 'prediction', 'prediction_b', 'reference'] output_parser=None partial_variables={} template='Given the input context, which is most similar to the reference label: A or B?\\nReason step by step and finally, respond with either [[A]] or [[B]] on its own line.\\n\\nDATA\\n----\\ninput: {input}\\nreference: {reference}\\nA: {prediction}\\nB: {prediction_b}\\n---\\nReasoning:\\n\\n' template_format='f-string' validate_template=True\n"
|
||||
"input_variables=['prediction', 'reference', 'prediction_b', 'input'] output_parser=None partial_variables={'criteria': 'helpfulness: Is the submission helpful, insightful, and appropriate?\\nrelevance: Is the submission referring to a real quote from the text?\\ncorrectness: Is the submission correct, accurate, and factual?\\ndepth: Does the submission demonstrate depth of thought?'} template='Given the input context, which do you prefer: A or B?\\nEvaluate based on the following criteria:\\n{criteria}\\nReason step by step and finally, respond with either [[A]] or [[B]] on its own line.\\n\\nDATA\\n----\\ninput: {input}\\nreference: {reference}\\nA: {prediction}\\nB: {prediction_b}\\n---\\nReasoning:\\n\\n' template_format='f-string' validate_template=True\n"
|
||||
]
|
||||
}
|
||||
],
|
||||
@ -237,7 +328,7 @@
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 9,
|
||||
"execution_count": 11,
|
||||
"id": "9467bb42-7a31-4071-8f66-9ed2c6f06dcd",
|
||||
"metadata": {
|
||||
"tags": []
|
||||
@ -246,12 +337,12 @@
|
||||
{
|
||||
"data": {
|
||||
"text/plain": [
|
||||
"{'reasoning': 'Option A is more similar to the reference label because it mentions the same dog\\'s name, \"fido\". Option B mentions a different name, \"spot\". Therefore, A is more similar to the reference label. \\n',\n",
|
||||
"{'reasoning': 'Helpfulness: Both A and B are helpful as they provide a direct answer to the question.\\nRelevance: A is relevant as it refers to the correct name of the dog from the text. B is not relevant as it provides a different name.\\nCorrectness: A is correct as it accurately states the name of the dog. B is incorrect as it provides a different name.\\nDepth: Both A and B demonstrate a similar level of depth as they both provide a straightforward answer to the question.\\n\\nGiven these evaluations, the preferred response is:\\n',\n",
|
||||
" 'value': 'A',\n",
|
||||
" 'score': 1}"
|
||||
]
|
||||
},
|
||||
"execution_count": 9,
|
||||
"execution_count": 11,
|
||||
"metadata": {},
|
||||
"output_type": "execute_result"
|
||||
}
|
||||
|
@ -1,511 +0,0 @@
|
||||
{
|
||||
"cells": [
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "984169ca",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"# Agent VectorDB Question Answering Benchmarking\n",
|
||||
"\n",
|
||||
"Here we go over how to benchmark performance on a question answering task using an agent to route between multiple vectordatabases.\n",
|
||||
"\n",
|
||||
"It is highly recommended that you do any evaluation/benchmarking with tracing enabled. See [here](https://python.langchain.com/guides/tracing/) for an explanation of what tracing is and how to set it up."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "8a16b75d",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Loading the data\n",
|
||||
"First, let's load the data."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 2,
|
||||
"id": "5b2d5e98",
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"name": "stderr",
|
||||
"output_type": "stream",
|
||||
"text": [
|
||||
"Found cached dataset json (/Users/qt/.cache/huggingface/datasets/LangChainDatasets___json/LangChainDatasets--agent-vectordb-qa-sota-pg-d3ae24016b514f92/0.0.0/fe5dd6ea2639a6df622901539cb550cf8797e5a6b2dd7af1cf934bed8e233e6e)\n",
|
||||
"100%|██████████| 1/1 [00:00<00:00, 414.42it/s]\n"
|
||||
]
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"from langchain.evaluation.loading import load_dataset\n",
|
||||
"\n",
|
||||
"dataset = load_dataset(\"agent-vectordb-qa-sota-pg\")"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 3,
|
||||
"id": "61375342",
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"data": {
|
||||
"text/plain": [
|
||||
"{'question': 'What is the purpose of the NATO Alliance?',\n",
|
||||
" 'answer': 'The purpose of the NATO Alliance is to secure peace and stability in Europe after World War 2.',\n",
|
||||
" 'steps': [{'tool': 'State of Union QA System', 'tool_input': None},\n",
|
||||
" {'tool': None, 'tool_input': 'What is the purpose of the NATO Alliance?'}]}"
|
||||
]
|
||||
},
|
||||
"execution_count": 3,
|
||||
"metadata": {},
|
||||
"output_type": "execute_result"
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"dataset[0]"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 4,
|
||||
"id": "02500304",
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"data": {
|
||||
"text/plain": [
|
||||
"{'question': 'What is the purpose of YC?',\n",
|
||||
" 'answer': 'The purpose of YC is to cause startups to be founded that would not otherwise have existed.',\n",
|
||||
" 'steps': [{'tool': 'Paul Graham QA System', 'tool_input': None},\n",
|
||||
" {'tool': None, 'tool_input': 'What is the purpose of YC?'}]}"
|
||||
]
|
||||
},
|
||||
"execution_count": 4,
|
||||
"metadata": {},
|
||||
"output_type": "execute_result"
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"dataset[-1]"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "4ab6a716",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Setting up a chain\n",
|
||||
"Now we need to create some pipelines for doing question answering. Step one in that is creating indexes over the data in question."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 5,
|
||||
"id": "c18680b5",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from langchain.document_loaders import TextLoader\n",
|
||||
"\n",
|
||||
"loader = TextLoader(\"../../modules/state_of_the_union.txt\")"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 6,
|
||||
"id": "7f0de2b3",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from langchain.indexes import VectorstoreIndexCreator"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 12,
|
||||
"id": "ef84ff99",
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"name": "stderr",
|
||||
"output_type": "stream",
|
||||
"text": [
|
||||
"Using embedded DuckDB without persistence: data will be transient\n"
|
||||
]
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"vectorstore_sota = (\n",
|
||||
" VectorstoreIndexCreator(vectorstore_kwargs={\"collection_name\": \"sota\"})\n",
|
||||
" .from_loaders([loader])\n",
|
||||
" .vectorstore\n",
|
||||
")"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "f0b5d8f6",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"Now we can create a question answering chain."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 13,
|
||||
"id": "8843cb0c",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from langchain.chains import RetrievalQA\n",
|
||||
"from langchain.llms import OpenAI"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 16,
|
||||
"id": "573719a0",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"chain_sota = RetrievalQA.from_chain_type(\n",
|
||||
" llm=OpenAI(temperature=0),\n",
|
||||
" chain_type=\"stuff\",\n",
|
||||
" retriever=vectorstore_sota.as_retriever(),\n",
|
||||
" input_key=\"question\",\n",
|
||||
")"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "e48b03d8",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"Now we do the same for the Paul Graham data."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 17,
|
||||
"id": "c2dbb014",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"loader = TextLoader(\"../../modules/paul_graham_essay.txt\")"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 19,
|
||||
"id": "98d16f08",
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"name": "stderr",
|
||||
"output_type": "stream",
|
||||
"text": [
|
||||
"Using embedded DuckDB without persistence: data will be transient\n"
|
||||
]
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"vectorstore_pg = (\n",
|
||||
" VectorstoreIndexCreator(vectorstore_kwargs={\"collection_name\": \"paul_graham\"})\n",
|
||||
" .from_loaders([loader])\n",
|
||||
" .vectorstore\n",
|
||||
")"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 20,
|
||||
"id": "ec0aab02",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"chain_pg = RetrievalQA.from_chain_type(\n",
|
||||
" llm=OpenAI(temperature=0),\n",
|
||||
" chain_type=\"stuff\",\n",
|
||||
" retriever=vectorstore_pg.as_retriever(),\n",
|
||||
" input_key=\"question\",\n",
|
||||
")"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "76b5f8fb",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"We can now set up an agent to route between them."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 22,
|
||||
"id": "ade1aafa",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from langchain.agents import initialize_agent, Tool\n",
|
||||
"from langchain.agents import AgentType\n",
|
||||
"\n",
|
||||
"tools = [\n",
|
||||
" Tool(\n",
|
||||
" name=\"State of Union QA System\",\n",
|
||||
" func=chain_sota.run,\n",
|
||||
" description=\"useful for when you need to answer questions about the most recent state of the union address. Input should be a fully formed question.\",\n",
|
||||
" ),\n",
|
||||
" Tool(\n",
|
||||
" name=\"Paul Graham System\",\n",
|
||||
" func=chain_pg.run,\n",
|
||||
" description=\"useful for when you need to answer questions about Paul Graham. Input should be a fully formed question.\",\n",
|
||||
" ),\n",
|
||||
"]"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 34,
|
||||
"id": "104853f8",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"agent = initialize_agent(\n",
|
||||
" tools,\n",
|
||||
" OpenAI(temperature=0),\n",
|
||||
" agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION,\n",
|
||||
" max_iterations=4,\n",
|
||||
")"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "7f036641",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Make a prediction\n",
|
||||
"\n",
|
||||
"First, we can make predictions one datapoint at a time. Doing it at this level of granularity allows use to explore the outputs in detail, and also is a lot cheaper than running over multiple datapoints"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 35,
|
||||
"id": "4664e79f",
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"data": {
|
||||
"text/plain": [
|
||||
"'The purpose of the NATO Alliance is to secure peace and stability in Europe after World War 2.'"
|
||||
]
|
||||
},
|
||||
"execution_count": 35,
|
||||
"metadata": {},
|
||||
"output_type": "execute_result"
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"agent.run(dataset[0][\"question\"])"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "d0c16cd7",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Make many predictions\n",
|
||||
"Now we can make predictions"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 36,
|
||||
"id": "799f6c17",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"predictions = []\n",
|
||||
"predicted_dataset = []\n",
|
||||
"error_dataset = []\n",
|
||||
"for data in dataset:\n",
|
||||
" new_data = {\"input\": data[\"question\"], \"answer\": data[\"answer\"]}\n",
|
||||
" try:\n",
|
||||
" predictions.append(agent(new_data))\n",
|
||||
" predicted_dataset.append(new_data)\n",
|
||||
" except Exception:\n",
|
||||
" error_dataset.append(new_data)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "49d969fb",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Evaluate performance\n",
|
||||
"Now we can evaluate the predictions. The first thing we can do is look at them by eye."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 37,
|
||||
"id": "1d583f03",
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"data": {
|
||||
"text/plain": [
|
||||
"{'input': 'What is the purpose of the NATO Alliance?',\n",
|
||||
" 'answer': 'The purpose of the NATO Alliance is to secure peace and stability in Europe after World War 2.',\n",
|
||||
" 'output': 'The purpose of the NATO Alliance is to secure peace and stability in Europe after World War 2.'}"
|
||||
]
|
||||
},
|
||||
"execution_count": 37,
|
||||
"metadata": {},
|
||||
"output_type": "execute_result"
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"predictions[0]"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "4783344b",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"Next, we can use a language model to score them programatically"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 38,
|
||||
"id": "d0a9341d",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from langchain.evaluation.qa import QAEvalChain"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 39,
|
||||
"id": "1612dec1",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"llm = OpenAI(temperature=0)\n",
|
||||
"eval_chain = QAEvalChain.from_llm(llm)\n",
|
||||
"graded_outputs = eval_chain.evaluate(\n",
|
||||
" predicted_dataset, predictions, question_key=\"input\", prediction_key=\"output\"\n",
|
||||
")"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "79587806",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"We can add in the graded output to the `predictions` dict and then get a count of the grades."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 40,
|
||||
"id": "2a689df5",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"for i, prediction in enumerate(predictions):\n",
|
||||
" prediction[\"grade\"] = graded_outputs[i][\"text\"]"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 41,
|
||||
"id": "27b61215",
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"data": {
|
||||
"text/plain": [
|
||||
"Counter({' CORRECT': 28, ' INCORRECT': 5})"
|
||||
]
|
||||
},
|
||||
"execution_count": 41,
|
||||
"metadata": {},
|
||||
"output_type": "execute_result"
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"from collections import Counter\n",
|
||||
"\n",
|
||||
"Counter([pred[\"grade\"] for pred in predictions])"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "12fe30f4",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"We can also filter the datapoints to the incorrect examples and look at them."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 42,
|
||||
"id": "47c692a1",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"incorrect = [pred for pred in predictions if pred[\"grade\"] == \" INCORRECT\"]"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 43,
|
||||
"id": "0ef976c1",
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"data": {
|
||||
"text/plain": [
|
||||
"{'input': 'What are the four common sense steps that the author suggests to move forward safely?',\n",
|
||||
" 'answer': 'The four common sense steps suggested by the author to move forward safely are: stay protected with vaccines and treatments, prepare for new variants, end the shutdown of schools and businesses, and stay vigilant.',\n",
|
||||
" 'output': 'The four common sense steps suggested in the most recent State of the Union address are: cutting the cost of prescription drugs, providing a pathway to citizenship for Dreamers, revising laws so businesses have the workers they need and families don’t wait decades to reunite, and protecting access to health care and preserving a woman’s right to choose.',\n",
|
||||
" 'grade': ' INCORRECT'}"
|
||||
]
|
||||
},
|
||||
"execution_count": 43,
|
||||
"metadata": {},
|
||||
"output_type": "execute_result"
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"incorrect[0]"
|
||||
]
|
||||
}
|
||||
],
|
||||
"metadata": {
|
||||
"kernelspec": {
|
||||
"display_name": "Python 3 (ipykernel)",
|
||||
"language": "python",
|
||||
"name": "python3"
|
||||
},
|
||||
"language_info": {
|
||||
"codemirror_mode": {
|
||||
"name": "ipython",
|
||||
"version": 3
|
||||
},
|
||||
"file_extension": ".py",
|
||||
"mimetype": "text/x-python",
|
||||
"name": "python",
|
||||
"nbconvert_exporter": "python",
|
||||
"pygments_lexer": "ipython3",
|
||||
"version": "3.11.2"
|
||||
}
|
||||
},
|
||||
"nbformat": 4,
|
||||
"nbformat_minor": 5
|
||||
}
|
@ -24,18 +24,15 @@
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 2,
|
||||
"execution_count": 1,
|
||||
"metadata": {
|
||||
"tags": []
|
||||
},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from langchain.chat_models import ChatOpenAI\n",
|
||||
"from langchain.evaluation.comparison import PairwiseStringEvalChain\n",
|
||||
"from langchain.evaluation import load_evaluator\n",
|
||||
"\n",
|
||||
"llm = ChatOpenAI(model=\"gpt-4\")\n",
|
||||
"\n",
|
||||
"eval_chain = PairwiseStringEvalChain.from_llm(llm=llm)"
|
||||
"eval_chain = load_evaluator(\"pairwise_string\")"
|
||||
]
|
||||
},
|
||||
{
|
||||
@ -50,7 +47,7 @@
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 3,
|
||||
"execution_count": 2,
|
||||
"metadata": {
|
||||
"tags": []
|
||||
},
|
||||
@ -59,13 +56,13 @@
|
||||
"name": "stderr",
|
||||
"output_type": "stream",
|
||||
"text": [
|
||||
"Found cached dataset parquet (/Users/wfh/.cache/huggingface/datasets/LangChainDatasets___parquet/LangChainDatasets--langchain-howto-queries-bbb748bbee7e77aa/0.0.0/2a3b91fbd88a2c90d1dbbb32b460cf621d31bd5b05b934492fdef7d8d6f236ec)\n"
|
||||
"Found cached dataset parquet (/Users/wfh/.cache/huggingface/datasets/LangChainDatasets___parquet/LangChainDatasets--langchain-howto-queries-bbb748bbee7e77aa/0.0.0/14a00e99c0d15a23649d0db8944380ac81082d4b021f398733dd84f3a6c569a7)\n"
|
||||
]
|
||||
},
|
||||
{
|
||||
"data": {
|
||||
"application/vnd.jupyter.widget-view+json": {
|
||||
"model_id": "d852a1884480457292c90d8bd9d4f1e6",
|
||||
"model_id": "a2358d37246640ce95e0f9940194590a",
|
||||
"version_major": 2,
|
||||
"version_minor": 0
|
||||
},
|
||||
@ -94,7 +91,7 @@
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 4,
|
||||
"execution_count": 3,
|
||||
"metadata": {
|
||||
"tags": []
|
||||
},
|
||||
@ -127,7 +124,7 @@
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 5,
|
||||
"execution_count": 4,
|
||||
"metadata": {
|
||||
"tags": []
|
||||
},
|
||||
@ -152,7 +149,7 @@
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 6,
|
||||
"execution_count": 5,
|
||||
"metadata": {
|
||||
"tags": []
|
||||
},
|
||||
@ -160,7 +157,7 @@
|
||||
{
|
||||
"data": {
|
||||
"application/vnd.jupyter.widget-view+json": {
|
||||
"model_id": "b076d6bf6680422aa9082d4bad4d98a3",
|
||||
"model_id": "87277cb39a1a4726bb7cc533a24e2ea4",
|
||||
"version_major": 2,
|
||||
"version_minor": 0
|
||||
},
|
||||
@ -170,14 +167,6 @@
|
||||
},
|
||||
"metadata": {},
|
||||
"output_type": "display_data"
|
||||
},
|
||||
{
|
||||
"name": "stderr",
|
||||
"output_type": "stream",
|
||||
"text": [
|
||||
"Retrying langchain.chat_models.openai.acompletion_with_retry.<locals>._completion_with_retry in 1.0 seconds as it raised ServiceUnavailableError: The server is overloaded or not ready yet..\n",
|
||||
"Retrying langchain.chat_models.openai.acompletion_with_retry.<locals>._completion_with_retry in 1.0 seconds as it raised ServiceUnavailableError: The server is overloaded or not ready yet..\n"
|
||||
]
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
@ -215,7 +204,7 @@
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 9,
|
||||
"execution_count": 6,
|
||||
"metadata": {
|
||||
"tags": []
|
||||
},
|
||||
@ -252,7 +241,7 @@
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 10,
|
||||
"execution_count": 7,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
@ -270,7 +259,7 @@
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 11,
|
||||
"execution_count": 8,
|
||||
"metadata": {
|
||||
"tags": []
|
||||
},
|
||||
@ -279,8 +268,8 @@
|
||||
"name": "stdout",
|
||||
"output_type": "stream",
|
||||
"text": [
|
||||
"OpenAI Functions Agent: 90.00%\n",
|
||||
"Structured Chat Agent: 10.00%\n"
|
||||
"OpenAI Functions Agent: 95.00%\n",
|
||||
"None: 5.00%\n"
|
||||
]
|
||||
}
|
||||
],
|
||||
@ -310,7 +299,7 @@
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 12,
|
||||
"execution_count": 9,
|
||||
"metadata": {
|
||||
"tags": []
|
||||
},
|
||||
@ -349,7 +338,7 @@
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 13,
|
||||
"execution_count": 10,
|
||||
"metadata": {
|
||||
"tags": []
|
||||
},
|
||||
@ -358,8 +347,8 @@
|
||||
"name": "stdout",
|
||||
"output_type": "stream",
|
||||
"text": [
|
||||
"The \"OpenAI Functions Agent\" would be preferred between 69.90% and 97.21% percent of the time (with 95% confidence).\n",
|
||||
"The \"Structured Chat Agent\" would be preferred between 2.79% and 30.10% percent of the time (with 95% confidence).\n"
|
||||
"The \"OpenAI Functions Agent\" would be preferred between 83.18% and 100.00% percent of the time (with 95% confidence).\n",
|
||||
"The \"Structured Chat Agent\" would be preferred between 0.00% and 16.82% percent of the time (with 95% confidence).\n"
|
||||
]
|
||||
}
|
||||
],
|
||||
@ -380,7 +369,7 @@
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 18,
|
||||
"execution_count": 11,
|
||||
"metadata": {
|
||||
"tags": []
|
||||
},
|
||||
@ -389,9 +378,17 @@
|
||||
"name": "stdout",
|
||||
"output_type": "stream",
|
||||
"text": [
|
||||
"The p-value is 0.00040. If the null hypothesis is true (i.e., if the selected eval chain actually has no preference between the models),\n",
|
||||
"then there is a 0.04025% chance of observing the OpenAI Functions Agent be preferred at least 18\n",
|
||||
"times out of 20 trials.\n"
|
||||
"The p-value is 0.00000. If the null hypothesis is true (i.e., if the selected eval chain actually has no preference between the models),\n",
|
||||
"then there is a 0.00038% chance of observing the OpenAI Functions Agent be preferred at least 19\n",
|
||||
"times out of 19 trials.\n"
|
||||
]
|
||||
},
|
||||
{
|
||||
"name": "stderr",
|
||||
"output_type": "stream",
|
||||
"text": [
|
||||
"/var/folders/gf/6rnp_mbx5914kx7qmmh7xzmw0000gn/T/ipykernel_15978/384907688.py:6: DeprecationWarning: 'binom_test' is deprecated in favour of 'binomtest' from version 1.7.0 and will be removed in Scipy 1.12.0.\n",
|
||||
" p_value = stats.binom_test(successes, n, p=0.5, alternative=\"two-sided\")\n"
|
||||
]
|
||||
}
|
||||
],
|
||||
|
@ -1,445 +0,0 @@
|
||||
{
|
||||
"cells": [
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "e78b7bb1",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"# Data Augmented Question Answering\n",
|
||||
"\n",
|
||||
"This notebook uses some generic prompts/language models to evaluate an question answering system that uses other sources of data besides what is in the model. For example, this can be used to evaluate a question answering system over your proprietary data.\n",
|
||||
"\n",
|
||||
"## Setup\n",
|
||||
"Let's set up an example with our favorite example - the state of the union address."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 1,
|
||||
"id": "ab4a6931",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from langchain.embeddings.openai import OpenAIEmbeddings\n",
|
||||
"from langchain.vectorstores import Chroma\n",
|
||||
"from langchain.text_splitter import CharacterTextSplitter\n",
|
||||
"from langchain.llms import OpenAI\n",
|
||||
"from langchain.chains import RetrievalQA"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 2,
|
||||
"id": "4fdc211d",
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"name": "stdout",
|
||||
"output_type": "stream",
|
||||
"text": [
|
||||
"Running Chroma using direct local API.\n",
|
||||
"Using DuckDB in-memory for database. Data will be transient.\n"
|
||||
]
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"from langchain.document_loaders import TextLoader\n",
|
||||
"\n",
|
||||
"loader = TextLoader(\"../../modules/state_of_the_union.txt\")\n",
|
||||
"documents = loader.load()\n",
|
||||
"text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)\n",
|
||||
"texts = text_splitter.split_documents(documents)\n",
|
||||
"\n",
|
||||
"embeddings = OpenAIEmbeddings()\n",
|
||||
"docsearch = Chroma.from_documents(texts, embeddings)\n",
|
||||
"qa = RetrievalQA.from_llm(llm=OpenAI(), retriever=docsearch.as_retriever())"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "30fd72f2",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Examples\n",
|
||||
"Now we need some examples to evaluate. We can do this in two ways:\n",
|
||||
"\n",
|
||||
"1. Hard code some examples ourselves\n",
|
||||
"2. Generate examples automatically, using a language model"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 3,
|
||||
"id": "3459b001",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"# Hard-coded examples\n",
|
||||
"examples = [\n",
|
||||
" {\n",
|
||||
" \"query\": \"What did the president say about Ketanji Brown Jackson\",\n",
|
||||
" \"answer\": \"He praised her legal ability and said he nominated her for the supreme court.\",\n",
|
||||
" },\n",
|
||||
" {\"query\": \"What did the president say about Michael Jackson\", \"answer\": \"Nothing\"},\n",
|
||||
"]"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 4,
|
||||
"id": "b9c3fa75",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"# Generated examples\n",
|
||||
"from langchain.evaluation.qa import QAGenerateChain\n",
|
||||
"\n",
|
||||
"example_gen_chain = QAGenerateChain.from_llm(OpenAI())"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 5,
|
||||
"id": "c24543a9",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"new_examples = example_gen_chain.apply_and_parse([{\"doc\": t} for t in texts[:5]])"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 6,
|
||||
"id": "a2d27560",
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"data": {
|
||||
"text/plain": [
|
||||
"[{'query': 'According to the document, what did Vladimir Putin miscalculate?',\n",
|
||||
" 'answer': 'He miscalculated that he could roll into Ukraine and the world would roll over.'},\n",
|
||||
" {'query': 'Who is the Ukrainian Ambassador to the United States?',\n",
|
||||
" 'answer': 'The Ukrainian Ambassador to the United States is here tonight.'},\n",
|
||||
" {'query': 'How many countries were part of the coalition formed to confront Putin?',\n",
|
||||
" 'answer': '27 members of the European Union, France, Germany, Italy, the United Kingdom, Canada, Japan, Korea, Australia, New Zealand, and many others, even Switzerland.'},\n",
|
||||
" {'query': 'What action is the U.S. Department of Justice taking to target Russian oligarchs?',\n",
|
||||
" 'answer': 'The U.S. Department of Justice is assembling a dedicated task force to go after the crimes of Russian oligarchs and joining with European allies to find and seize their yachts, luxury apartments, and private jets.'},\n",
|
||||
" {'query': 'How much direct assistance is the United States providing to Ukraine?',\n",
|
||||
" 'answer': 'The United States is providing more than $1 Billion in direct assistance to Ukraine.'}]"
|
||||
]
|
||||
},
|
||||
"execution_count": 6,
|
||||
"metadata": {},
|
||||
"output_type": "execute_result"
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"new_examples"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 7,
|
||||
"id": "558da6f3",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"# Combine examples\n",
|
||||
"examples += new_examples"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "443dc34e",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Evaluate\n",
|
||||
"Now that we have examples, we can use the question answering evaluator to evaluate our question answering chain."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 8,
|
||||
"id": "782169a5",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from langchain.evaluation.qa import QAEvalChain"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 9,
|
||||
"id": "1bb77416",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"predictions = qa.apply(examples)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 10,
|
||||
"id": "bcd0ad7f",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"llm = OpenAI(temperature=0)\n",
|
||||
"eval_chain = QAEvalChain.from_llm(llm)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 11,
|
||||
"id": "2e6af79a",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"graded_outputs = eval_chain.evaluate(examples, predictions)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 12,
|
||||
"id": "32fac2dc",
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"name": "stdout",
|
||||
"output_type": "stream",
|
||||
"text": [
|
||||
"Example 0:\n",
|
||||
"Question: What did the president say about Ketanji Brown Jackson\n",
|
||||
"Real Answer: He praised her legal ability and said he nominated her for the supreme court.\n",
|
||||
"Predicted Answer: The president said that she is one of the nation's top legal minds, a former top litigator in private practice, a former federal public defender, and from a family of public school educators and police officers. He also said that she is a consensus builder and that she has received a broad range of support from the Fraternal Order of Police to former judges appointed by both Democrats and Republicans.\n",
|
||||
"Predicted Grade: CORRECT\n",
|
||||
"\n",
|
||||
"Example 1:\n",
|
||||
"Question: What did the president say about Michael Jackson\n",
|
||||
"Real Answer: Nothing\n",
|
||||
"Predicted Answer: The president did not mention Michael Jackson in this speech.\n",
|
||||
"Predicted Grade: CORRECT\n",
|
||||
"\n",
|
||||
"Example 2:\n",
|
||||
"Question: According to the document, what did Vladimir Putin miscalculate?\n",
|
||||
"Real Answer: He miscalculated that he could roll into Ukraine and the world would roll over.\n",
|
||||
"Predicted Answer: Putin miscalculated that the world would roll over when he rolled into Ukraine.\n",
|
||||
"Predicted Grade: CORRECT\n",
|
||||
"\n",
|
||||
"Example 3:\n",
|
||||
"Question: Who is the Ukrainian Ambassador to the United States?\n",
|
||||
"Real Answer: The Ukrainian Ambassador to the United States is here tonight.\n",
|
||||
"Predicted Answer: I don't know.\n",
|
||||
"Predicted Grade: INCORRECT\n",
|
||||
"\n",
|
||||
"Example 4:\n",
|
||||
"Question: How many countries were part of the coalition formed to confront Putin?\n",
|
||||
"Real Answer: 27 members of the European Union, France, Germany, Italy, the United Kingdom, Canada, Japan, Korea, Australia, New Zealand, and many others, even Switzerland.\n",
|
||||
"Predicted Answer: The coalition included freedom-loving nations from Europe and the Americas to Asia and Africa, 27 members of the European Union including France, Germany, Italy, the United Kingdom, Canada, Japan, Korea, Australia, New Zealand, and many others, even Switzerland.\n",
|
||||
"Predicted Grade: INCORRECT\n",
|
||||
"\n",
|
||||
"Example 5:\n",
|
||||
"Question: What action is the U.S. Department of Justice taking to target Russian oligarchs?\n",
|
||||
"Real Answer: The U.S. Department of Justice is assembling a dedicated task force to go after the crimes of Russian oligarchs and joining with European allies to find and seize their yachts, luxury apartments, and private jets.\n",
|
||||
"Predicted Answer: The U.S. Department of Justice is assembling a dedicated task force to go after the crimes of Russian oligarchs and to find and seize their yachts, luxury apartments, and private jets.\n",
|
||||
"Predicted Grade: INCORRECT\n",
|
||||
"\n",
|
||||
"Example 6:\n",
|
||||
"Question: How much direct assistance is the United States providing to Ukraine?\n",
|
||||
"Real Answer: The United States is providing more than $1 Billion in direct assistance to Ukraine.\n",
|
||||
"Predicted Answer: The United States is providing more than $1 billion in direct assistance to Ukraine.\n",
|
||||
"Predicted Grade: CORRECT\n",
|
||||
"\n"
|
||||
]
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"for i, eg in enumerate(examples):\n",
|
||||
" print(f\"Example {i}:\")\n",
|
||||
" print(\"Question: \" + predictions[i][\"query\"])\n",
|
||||
" print(\"Real Answer: \" + predictions[i][\"answer\"])\n",
|
||||
" print(\"Predicted Answer: \" + predictions[i][\"result\"])\n",
|
||||
" print(\"Predicted Grade: \" + graded_outputs[i][\"text\"])\n",
|
||||
" print()"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "50a9e845",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Evaluate with Other Metrics\n",
|
||||
"\n",
|
||||
"In addition to predicting whether the answer is correct or incorrect using a language model, we can also use other metrics to get a more nuanced view on the quality of the answers. To do so, we can use the [Critique](https://docs.inspiredco.ai/critique/) library, which allows for simple calculation of various metrics over generated text.\n",
|
||||
"\n",
|
||||
"First you can get an API key from the [Inspired Cognition Dashboard](https://dashboard.inspiredco.ai) and do some setup:\n",
|
||||
"\n",
|
||||
"```bash\n",
|
||||
"export INSPIREDCO_API_KEY=\"...\"\n",
|
||||
"pip install inspiredco\n",
|
||||
"```"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 13,
|
||||
"id": "bd0b01dc",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"import inspiredco.critique\n",
|
||||
"import os\n",
|
||||
"\n",
|
||||
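"# Create the Critique client using the API key exported above\n",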
"critique = inspiredco.critique.Critique(api_key=os.environ[\"INSPIREDCO_API_KEY\"])"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "4f52629e",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"Then run the following code to set up the configuration and calculate the [ROUGE](https://docs.inspiredco.ai/critique/metric_rouge.html), [chrf](https://docs.inspiredco.ai/critique/metric_chrf.html), [BERTScore](https://docs.inspiredco.ai/critique/metric_bert_score.html), and [UniEval](https://docs.inspiredco.ai/critique/metric_uni_eval.html) (you can choose [other metrics](https://docs.inspiredco.ai/critique/metrics.html) too):"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 14,
|
||||
"id": "84a0ba21",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
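"# Configure which Critique metrics to compute and with what settings\n",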
"metrics = {\n",
|
||||
" \"rouge\": {\n",
|
||||
" \"metric\": \"rouge\",\n",
|
||||
" \"config\": {\"variety\": \"rouge_l\"},\n",
|
||||
" },\n",
|
||||
" \"chrf\": {\n",
|
||||
" \"metric\": \"chrf\",\n",
|
||||
" \"config\": {},\n",
|
||||
" },\n",
|
||||
" \"bert_score\": {\n",
|
||||
" \"metric\": \"bert_score\",\n",
|
||||
" \"config\": {\"model\": \"bert-base-uncased\"},\n",
|
||||
" },\n",
|
||||
" \"uni_eval\": {\n",
|
||||
" \"metric\": \"uni_eval\",\n",
|
||||
" \"config\": {\"task\": \"summarization\", \"evaluation_aspect\": \"relevance\"},\n",
|
||||
" },\n",
|
||||
"}"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 15,
|
||||
"id": "3b9a4056",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
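"# Pair each predicted answer (target) with the gold answer (reference) for Critique\n",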
"critique_data = [\n",
|
||||
" {\"target\": pred[\"result\"], \"references\": [pred[\"answer\"]]} for pred in predictions\n",
|
||||
"]\n",
|
||||
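"# Run every configured metric over the whole prediction set\n",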
"eval_results = {\n",
|
||||
" k: critique.evaluate(dataset=critique_data, metric=v[\"metric\"], config=v[\"config\"])\n",
|
||||
" for k, v in metrics.items()\n",
|
||||
"}"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "6f0ae799",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"Finally, we can print out the results. We can see that overall the scores are higher when the output is semantically correct, and also when the output closely matches with the gold-standard answer."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 16,
|
||||
"id": "b51edcf4",
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"name": "stdout",
|
||||
"output_type": "stream",
|
||||
"text": [
|
||||
"Example 0:\n",
|
||||
"Question: What did the president say about Ketanji Brown Jackson\n",
|
||||
"Real Answer: He praised her legal ability and said he nominated her for the supreme court.\n",
|
||||
"Predicted Answer: The president said that she is one of the nation's top legal minds, a former top litigator in private practice, a former federal public defender, and from a family of public school educators and police officers. He also said that she is a consensus builder and that she has received a broad range of support from the Fraternal Order of Police to former judges appointed by both Democrats and Republicans.\n",
|
||||
"Predicted Scores: rouge=0.0941, chrf=0.2001, bert_score=0.5219, uni_eval=0.9043\n",
|
||||
"\n",
|
||||
"Example 1:\n",
|
||||
"Question: What did the president say about Michael Jackson\n",
|
||||
"Real Answer: Nothing\n",
|
||||
"Predicted Answer: The president did not mention Michael Jackson in this speech.\n",
|
||||
"Predicted Scores: rouge=0.0000, chrf=0.1087, bert_score=0.3486, uni_eval=0.7802\n",
|
||||
"\n",
|
||||
"Example 2:\n",
|
||||
"Question: According to the document, what did Vladimir Putin miscalculate?\n",
|
||||
"Real Answer: He miscalculated that he could roll into Ukraine and the world would roll over.\n",
|
||||
"Predicted Answer: Putin miscalculated that the world would roll over when he rolled into Ukraine.\n",
|
||||
"Predicted Scores: rouge=0.5185, chrf=0.6955, bert_score=0.8421, uni_eval=0.9578\n",
|
||||
"\n",
|
||||
"Example 3:\n",
|
||||
"Question: Who is the Ukrainian Ambassador to the United States?\n",
|
||||
"Real Answer: The Ukrainian Ambassador to the United States is here tonight.\n",
|
||||
"Predicted Answer: I don't know.\n",
|
||||
"Predicted Scores: rouge=0.0000, chrf=0.0375, bert_score=0.3159, uni_eval=0.7493\n",
|
||||
"\n",
|
||||
"Example 4:\n",
|
||||
"Question: How many countries were part of the coalition formed to confront Putin?\n",
|
||||
"Real Answer: 27 members of the European Union, France, Germany, Italy, the United Kingdom, Canada, Japan, Korea, Australia, New Zealand, and many others, even Switzerland.\n",
|
||||
"Predicted Answer: The coalition included freedom-loving nations from Europe and the Americas to Asia and Africa, 27 members of the European Union including France, Germany, Italy, the United Kingdom, Canada, Japan, Korea, Australia, New Zealand, and many others, even Switzerland.\n",
|
||||
"Predicted Scores: rouge=0.7419, chrf=0.8602, bert_score=0.8388, uni_eval=0.0669\n",
|
||||
"\n",
|
||||
"Example 5:\n",
|
||||
"Question: What action is the U.S. Department of Justice taking to target Russian oligarchs?\n",
|
||||
"Real Answer: The U.S. Department of Justice is assembling a dedicated task force to go after the crimes of Russian oligarchs and joining with European allies to find and seize their yachts, luxury apartments, and private jets.\n",
|
||||
"Predicted Answer: The U.S. Department of Justice is assembling a dedicated task force to go after the crimes of Russian oligarchs and to find and seize their yachts, luxury apartments, and private jets.\n",
|
||||
"Predicted Scores: rouge=0.9412, chrf=0.8687, bert_score=0.9607, uni_eval=0.9718\n",
|
||||
"\n",
|
||||
"Example 6:\n",
|
||||
"Question: How much direct assistance is the United States providing to Ukraine?\n",
|
||||
"Real Answer: The United States is providing more than $1 Billion in direct assistance to Ukraine.\n",
|
||||
"Predicted Answer: The United States is providing more than $1 billion in direct assistance to Ukraine.\n",
|
||||
"Predicted Scores: rouge=1.0000, chrf=0.9483, bert_score=1.0000, uni_eval=0.9734\n",
|
||||
"\n"
|
||||
]
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"for i, eg in enumerate(examples):\n",
|
||||
" score_string = \", \".join(\n",
|
||||
" [f\"{k}={v['examples'][i]['value']:.4f}\" for k, v in eval_results.items()]\n",
|
||||
" )\n",
|
||||
" print(f\"Example {i}:\")\n",
|
||||
" print(\"Question: \" + predictions[i][\"query\"])\n",
|
||||
" print(\"Real Answer: \" + predictions[i][\"answer\"])\n",
|
||||
" print(\"Predicted Answer: \" + predictions[i][\"result\"])\n",
|
||||
" print(\"Predicted Scores: \" + score_string)\n",
|
||||
" print()"
|
||||
]
|
||||
}
|
||||
],
|
||||
"metadata": {
|
||||
"kernelspec": {
|
||||
"display_name": "Python 3 (ipykernel)",
|
||||
"language": "python",
|
||||
"name": "python3"
|
||||
},
|
||||
"language_info": {
|
||||
"codemirror_mode": {
|
||||
"name": "ipython",
|
||||
"version": 3
|
||||
},
|
||||
"file_extension": ".py",
|
||||
"mimetype": "text/x-python",
|
||||
"name": "python",
|
||||
"nbconvert_exporter": "python",
|
||||
"pygments_lexer": "ipython3",
|
||||
"version": "3.11.2"
|
||||
}
|
||||
},
|
||||
"nbformat": 4,
|
||||
"nbformat_minor": 5
|
||||
}
|
@@ -1,975 +0,0 @@
|
||||
{
|
||||
"cells": [
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "692f3256",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"# Evaluating an OpenAPI Chain\n",
|
||||
"\n",
|
||||
"This notebook goes over ways to semantically evaluate an [OpenAPI Chain](/docs/modules/chains/additional/openapi.html), which calls an endpoint defined by the OpenAPI specification using purely natural language."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 1,
|
||||
"id": "a457106d",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from langchain.tools import OpenAPISpec, APIOperation\n",
|
||||
"from langchain.chains import OpenAPIEndpointChain, LLMChain\n",
|
||||
"from langchain.requests import Requests\n",
|
||||
"from langchain.llms import OpenAI"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "2c3b0954",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Load the API Chain\n",
|
||||
"\n",
|
||||
"Load a wrapper of the spec (so we can work with it more easily). You can load from a url or from a local file."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 2,
|
||||
"id": "794142ba",
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"name": "stderr",
|
||||
"output_type": "stream",
|
||||
"text": [
|
||||
"Attempting to load an OpenAPI 3.0.1 spec. This may result in degraded performance. Convert your OpenAPI spec to 3.1.* spec for better support.\n"
|
||||
]
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"# Load and parse the OpenAPI Spec\n",
|
||||
"spec = OpenAPISpec.from_url(\n",
|
||||
" \"https://www.klarna.com/us/shopping/public/openai/v0/api-docs/\"\n",
|
||||
")\n",
|
||||
"# Load a single endpoint operation\n",
|
||||
"operation = APIOperation.from_openapi_spec(spec, \"/public/openai/v0/products\", \"get\")\n",
|
||||
"verbose = False\n",
|
||||
"# Select any LangChain LLM\n",
|
||||
"llm = OpenAI(temperature=0, max_tokens=1000)\n",
|
||||
"# Create the endpoint chain\n",
|
||||
"api_chain = OpenAPIEndpointChain.from_api_operation(\n",
|
||||
" operation,\n",
|
||||
" llm,\n",
|
||||
" requests=Requests(),\n",
|
||||
" verbose=verbose,\n",
|
||||
" return_intermediate_steps=True, # Return request and response text\n",
|
||||
")"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "6c05ba5b",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### *Optional*: Generate Input Questions and Request Ground Truth Queries\n",
|
||||
"\n",
|
||||
"See [Generating Test Datasets](#Generating-Test-Datasets) at the end of this notebook for more details."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 3,
|
||||
"id": "a0c0cb7e",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"# import re\n",
|
||||
"# from langchain.prompts import PromptTemplate\n",
|
||||
"\n",
|
||||
"# template = \"\"\"Below is a service description:\n",
|
||||
"\n",
|
||||
"# {spec}\n",
|
||||
"\n",
|
||||
"# Imagine you're a new user trying to use {operation} through a search bar. What are 10 different things you want to request?\n",
|
||||
"# Wants/Questions:\n",
|
||||
"# 1. \"\"\"\n",
|
||||
"\n",
|
||||
"# prompt = PromptTemplate.from_template(template)\n",
|
||||
"\n",
|
||||
"# generation_chain = LLMChain(llm=llm, prompt=prompt)\n",
|
||||
"\n",
|
||||
"# questions_ = generation_chain.run(spec=operation.to_typescript(), operation=operation.operation_id).split('\\n')\n",
|
||||
"# # Strip preceding numeric bullets\n",
|
||||
"# questions = [re.sub(r'^\\d+\\. ', '', q).strip() for q in questions_]\n",
|
||||
"# questions"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 4,
|
||||
"id": "f3d767ef",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"# ground_truths = [\n",
|
||||
"# {\"q\": ...} # What are the best queries for each input?\n",
|
||||
"# ]"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "81098a05",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Run the API Chain\n",
|
||||
"\n",
|
||||
"The two simplest questions a user of the API Chain are:\n",
|
||||
"- Did the chain succesfully access the endpoint?\n",
|
||||
"- Did the action accomplish the correct result?\n"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 5,
|
||||
"id": "64bc7ed9",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from collections import defaultdict\n",
|
||||
"\n",
|
||||
"# Collect metrics to report at completion\n",
|
||||
"scores = defaultdict(list)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 6,
|
||||
"id": "dfd2d09f",
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"name": "stderr",
|
||||
"output_type": "stream",
|
||||
"text": [
|
||||
"Found cached dataset json (/Users/harrisonchase/.cache/huggingface/datasets/LangChainDatasets___json/LangChainDatasets--openapi-chain-klarna-products-get-5d03362007667626/0.0.0/0f7e3662623656454fcd2b650f34e886a7db4b9104504885bd462096cc7a9f51)\n"
|
||||
]
|
||||
},
|
||||
{
|
||||
"data": {
|
||||
"application/vnd.jupyter.widget-view+json": {
|
||||
"model_id": "10932c9c139941d1a8be1a798f29e923",
|
||||
"version_major": 2,
|
||||
"version_minor": 0
|
||||
},
|
||||
"text/plain": [
|
||||
" 0%| | 0/1 [00:00<?, ?it/s]"
|
||||
]
|
||||
},
|
||||
"metadata": {},
|
||||
"output_type": "display_data"
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"from langchain.evaluation.loading import load_dataset\n",
|
||||
"\n",
|
||||
"dataset = load_dataset(\"openapi-chain-klarna-products-get\")"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 7,
|
||||
"id": "e08191a7",
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"data": {
|
||||
"text/plain": [
|
||||
"[{'question': 'What iPhone models are available?',\n",
|
||||
" 'expected_query': {'max_price': None, 'q': 'iPhone'}},\n",
|
||||
" {'question': 'Are there any budget laptops?',\n",
|
||||
" 'expected_query': {'max_price': 300, 'q': 'laptop'}},\n",
|
||||
" {'question': 'Show me the cheapest gaming PC.',\n",
|
||||
" 'expected_query': {'max_price': 500, 'q': 'gaming pc'}},\n",
|
||||
" {'question': 'Are there any tablets under $400?',\n",
|
||||
" 'expected_query': {'max_price': 400, 'q': 'tablet'}},\n",
|
||||
" {'question': 'What are the best headphones?',\n",
|
||||
" 'expected_query': {'max_price': None, 'q': 'headphones'}},\n",
|
||||
" {'question': 'What are the top rated laptops?',\n",
|
||||
" 'expected_query': {'max_price': None, 'q': 'laptop'}},\n",
|
||||
" {'question': 'I want to buy some shoes. I like Adidas and Nike.',\n",
|
||||
" 'expected_query': {'max_price': None, 'q': 'shoe'}},\n",
|
||||
" {'question': 'I want to buy a new skirt',\n",
|
||||
" 'expected_query': {'max_price': None, 'q': 'skirt'}},\n",
|
||||
" {'question': 'My company is asking me to get a professional Deskopt PC - money is no object.',\n",
|
||||
" 'expected_query': {'max_price': 10000, 'q': 'professional desktop PC'}},\n",
|
||||
" {'question': 'What are the best budget cameras?',\n",
|
||||
" 'expected_query': {'max_price': 300, 'q': 'camera'}}]"
|
||||
]
|
||||
},
|
||||
"execution_count": 7,
|
||||
"metadata": {},
|
||||
"output_type": "execute_result"
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"dataset"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 8,
|
||||
"id": "7ee71384",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"questions = [d[\"question\"] for d in dataset]"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 9,
|
||||
"id": "00511f7a",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"## Run the the API chain itself\n",
|
||||
"raise_error = False # Stop on first failed example - useful for development\n",
|
||||
"chain_outputs = []\n",
|
||||
"failed_examples = []\n",
|
||||
"for question in questions:\n",
|
||||
" try:\n",
|
||||
" chain_outputs.append(api_chain(question))\n",
|
||||
" scores[\"completed\"].append(1.0)\n",
|
||||
" except Exception as e:\n",
|
||||
" if raise_error:\n",
|
||||
" raise e\n",
|
||||
" failed_examples.append({\"q\": question, \"error\": e})\n",
|
||||
" scores[\"completed\"].append(0.0)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 10,
|
||||
"id": "f3c9729f",
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"data": {
|
||||
"text/plain": [
|
||||
"[]"
|
||||
]
|
||||
},
|
||||
"execution_count": 10,
|
||||
"metadata": {},
|
||||
"output_type": "execute_result"
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"# If the chain failed to run, show the failing examples\n",
|
||||
"failed_examples"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 11,
|
||||
"id": "914e7587",
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"data": {
|
||||
"text/plain": [
|
||||
"['There are currently 10 Apple iPhone models available: Apple iPhone 14 Pro Max 256GB, Apple iPhone 12 128GB, Apple iPhone 13 128GB, Apple iPhone 14 Pro 128GB, Apple iPhone 14 Pro 256GB, Apple iPhone 14 Pro Max 128GB, Apple iPhone 13 Pro Max 128GB, Apple iPhone 14 128GB, Apple iPhone 12 Pro 512GB, and Apple iPhone 12 mini 64GB.',\n",
|
||||
" 'Yes, there are several budget laptops in the API response. For example, the HP 14-dq0055dx and HP 15-dw0083wm are both priced at $199.99 and $244.99 respectively.',\n",
|
||||
" 'The cheapest gaming PC available is the Alarco Gaming PC (X_BLACK_GTX750) for $499.99. You can find more information about it here: https://www.klarna.com/us/shopping/pl/cl223/3203154750/Desktop-Computers/Alarco-Gaming-PC-%28X_BLACK_GTX750%29/?utm_source=openai&ref-site=openai_plugin',\n",
|
||||
" 'Yes, there are several tablets under $400. These include the Apple iPad 10.2\" 32GB (2019), Samsung Galaxy Tab A8 10.5 SM-X200 32GB, Samsung Galaxy Tab A7 Lite 8.7 SM-T220 32GB, Amazon Fire HD 8\" 32GB (10th Generation), and Amazon Fire HD 10 32GB.',\n",
|
||||
" 'It looks like you are looking for the best headphones. Based on the API response, it looks like the Apple AirPods Pro (2nd generation) 2022, Apple AirPods Max, and Bose Noise Cancelling Headphones 700 are the best options.',\n",
|
||||
" 'The top rated laptops based on the API response are the Apple MacBook Pro (2021) M1 Pro 8C CPU 14C GPU 16GB 512GB SSD 14\", Apple MacBook Pro (2022) M2 OC 10C GPU 8GB 256GB SSD 13.3\", Apple MacBook Air (2022) M2 OC 8C GPU 8GB 256GB SSD 13.6\", and Apple MacBook Pro (2023) M2 Pro OC 16C GPU 16GB 512GB SSD 14.2\".',\n",
|
||||
" \"I found several Nike and Adidas shoes in the API response. Here are the links to the products: Nike Dunk Low M - Black/White: https://www.klarna.com/us/shopping/pl/cl337/3200177969/Shoes/Nike-Dunk-Low-M-Black-White/?utm_source=openai&ref-site=openai_plugin, Nike Air Jordan 4 Retro M - Midnight Navy: https://www.klarna.com/us/shopping/pl/cl337/3202929835/Shoes/Nike-Air-Jordan-4-Retro-M-Midnight-Navy/?utm_source=openai&ref-site=openai_plugin, Nike Air Force 1 '07 M - White: https://www.klarna.com/us/shopping/pl/cl337/3979297/Shoes/Nike-Air-Force-1-07-M-White/?utm_source=openai&ref-site=openai_plugin, Nike Dunk Low W - White/Black: https://www.klarna.com/us/shopping/pl/cl337/3200134705/Shoes/Nike-Dunk-Low-W-White-Black/?utm_source=openai&ref-site=openai_plugin, Nike Air Jordan 1 Retro High M - White/University Blue/Black: https://www.klarna.com/us/shopping/pl/cl337/3200383658/Shoes/Nike-Air-Jordan-1-Retro-High-M-White-University-Blue-Black/?utm_source=openai&ref-site=openai_plugin, Nike Air Jordan 1 Retro High OG M - True Blue/Cement Grey/White: https://www.klarna.com/us/shopping/pl/cl337/3204655673/Shoes/Nike-Air-Jordan-1-Retro-High-OG-M-True-Blue-Cement-Grey-White/?utm_source=openai&ref-site=openai_plugin, Nike Air Jordan 11 Retro Cherry - White/Varsity Red/Black: https://www.klarna.com/us/shopping/pl/cl337/3202929696/Shoes/Nike-Air-Jordan-11-Retro-Cherry-White-Varsity-Red-Black/?utm_source=openai&ref-site=openai_plugin, Nike Dunk High W - White/Black: https://www.klarna.com/us/shopping/pl/cl337/3201956448/Shoes/Nike-Dunk-High-W-White-Black/?utm_source=openai&ref-site=openai_plugin, Nike Air Jordan 5 Retro M - Black/Taxi/Aquatone: https://www.klarna.com/us/shopping/pl/cl337/3204923084/Shoes/Nike-Air-Jordan-5-Retro-M-Black-Taxi-Aquatone/?utm_source=openai&ref-site=openai_plugin, Nike Court Legacy Lift W: https://www.klarna.com/us/shopping/pl/cl337/3202103728/Shoes/Nike-Court-Legacy-Lift-W/?utm_source=openai&ref-site=openai_plugin\",\n",
|
||||
" \"I found several skirts that may interest you. Please take a look at the following products: Avenue Plus Size Denim Stretch Skirt, LoveShackFancy Ruffled Mini Skirt - Antique White, Nike Dri-Fit Club Golf Skirt - Active Pink, Skims Soft Lounge Ruched Long Skirt, French Toast Girl's Front Pleated Skirt with Tabs, Alexia Admor Women's Harmonie Mini Skirt Pink Pink, Vero Moda Long Skirt, Nike Court Dri-FIT Victory Flouncy Tennis Skirt Women - White/Black, Haoyuan Mini Pleated Skirts W, and Zimmermann Lyre Midi Skirt.\",\n",
|
||||
" 'Based on the API response, you may want to consider the Skytech Archangel Gaming Computer PC Desktop, the CyberPowerPC Gamer Master Gaming Desktop, or the ASUS ROG Strix G10DK-RS756, as they all offer powerful processors and plenty of RAM.',\n",
|
||||
" 'Based on the API response, the best budget cameras are the DJI Mini 2 Dog Camera ($448.50), Insta360 Sphere with Landing Pad ($429.99), DJI FPV Gimbal Camera ($121.06), Parrot Camera & Body ($36.19), and DJI FPV Air Unit ($179.00).']"
|
||||
]
|
||||
},
|
||||
"execution_count": 11,
|
||||
"metadata": {},
|
||||
"output_type": "execute_result"
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"answers = [res[\"output\"] for res in chain_outputs]\n",
|
||||
"answers"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "484f0587",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Evaluate the requests chain\n",
|
||||
"\n",
|
||||
"The API Chain has two main components:\n",
|
||||
"1. Translate the user query to an API request (request synthesizer)\n",
|
||||
"2. Translate the API response to a natural language response\n",
|
||||
"\n",
|
||||
"Here, we construct an evaluation chain to grade the request synthesizer against selected human queries "
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 12,
|
||||
"id": "3ea5afd7",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"import json\n",
|
||||
"\n",
|
||||
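"# Serialize the expected queries so they can be compared with the generated requests\n",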
"truth_queries = [json.dumps(data[\"expected_query\"]) for data in dataset]"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 13,
|
||||
"id": "e055f24b",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"# Collect the API queries generated by the chain\n",
|
||||
"predicted_queries = [\n",
|
||||
" output[\"intermediate_steps\"][\"request_args\"] for output in chain_outputs\n",
|
||||
"]"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 14,
|
||||
"id": "7d4f2b88",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from langchain.prompts import PromptTemplate\n",
|
||||
"\n",
|
||||
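"# Prompt for grading whether the predicted API query matches the ground-truth query\n",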
"template = \"\"\"You are trying to answer the following question by querying an API:\n",
|
||||
"\n",
|
||||
"> Question: {question}\n",
|
||||
"\n",
|
||||
"The query you know you should be executing against the API is:\n",
|
||||
"\n",
|
||||
"> Query: {truth_query}\n",
|
||||
"\n",
|
||||
"Is the following predicted query semantically the same (eg likely to produce the same answer)?\n",
|
||||
"\n",
|
||||
"> Predicted Query: {predict_query}\n",
|
||||
"\n",
|
||||
"Please give the Predicted Query a grade of either an A, B, C, D, or F, along with an explanation of why. End the evaluation with 'Final Grade: <the letter>'\n",
|
||||
"\n",
|
||||
"> Explanation: Let's think step by step.\"\"\"\n",
|
||||
"\n",
|
||||
"prompt = PromptTemplate.from_template(template)\n",
|
||||
"\n",
|
||||
"eval_chain = LLMChain(llm=llm, prompt=prompt, verbose=verbose)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 15,
|
||||
"id": "8cc1b1db",
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"data": {
|
||||
"text/plain": [
|
||||
"[' The original query is asking for all iPhone models, so the \"q\" parameter is correct. The \"max_price\" parameter is also correct, as it is set to null, meaning that no maximum price is set. The predicted query adds two additional parameters, \"size\" and \"min_price\". The \"size\" parameter is not necessary, as it is not relevant to the question being asked. The \"min_price\" parameter is also not necessary, as it is not relevant to the question being asked and it is set to 0, which is the default value. Therefore, the predicted query is not semantically the same as the original query and is not likely to produce the same answer. Final Grade: D',\n",
|
||||
" ' The original query is asking for laptops with a maximum price of 300. The predicted query is asking for laptops with a minimum price of 0 and a maximum price of 500. This means that the predicted query is likely to return more results than the original query, as it is asking for a wider range of prices. Therefore, the predicted query is not semantically the same as the original query, and it is not likely to produce the same answer. Final Grade: F',\n",
|
||||
" \" The first two parameters are the same, so that's good. The third parameter is different, but it's not necessary for the query, so that's not a problem. The fourth parameter is the problem. The original query specifies a maximum price of 500, while the predicted query specifies a maximum price of null. This means that the predicted query will not limit the results to the cheapest gaming PCs, so it is not semantically the same as the original query. Final Grade: F\",\n",
|
||||
" ' The original query is asking for tablets under $400, so the first two parameters are correct. The predicted query also includes the parameters \"size\" and \"min_price\", which are not necessary for the original query. The \"size\" parameter is not relevant to the question, and the \"min_price\" parameter is redundant since the original query already specifies a maximum price. Therefore, the predicted query is not semantically the same as the original query and is not likely to produce the same answer. Final Grade: D',\n",
|
||||
" ' The original query is asking for headphones with no maximum price, so the predicted query is not semantically the same because it has a maximum price of 500. The predicted query also has a size of 10, which is not specified in the original query. Therefore, the predicted query is not semantically the same as the original query. Final Grade: F',\n",
|
||||
" \" The original query is asking for the top rated laptops, so the 'size' parameter should be set to 10 to get the top 10 results. The 'min_price' parameter should be set to 0 to get results from all price ranges. The 'max_price' parameter should be set to null to get results from all price ranges. The 'q' parameter should be set to 'laptop' to get results related to laptops. All of these parameters are present in the predicted query, so it is semantically the same as the original query. Final Grade: A\",\n",
|
||||
" ' The original query is asking for shoes, so the predicted query is asking for the same thing. The original query does not specify a size, so the predicted query is not adding any additional information. The original query does not specify a price range, so the predicted query is adding additional information that is not necessary. Therefore, the predicted query is not semantically the same as the original query and is likely to produce different results. Final Grade: D',\n",
|
||||
" ' The original query is asking for a skirt, so the predicted query is asking for the same thing. The predicted query also adds additional parameters such as size and price range, which could help narrow down the results. However, the size parameter is not necessary for the query to be successful, and the price range is too narrow. Therefore, the predicted query is not as effective as the original query. Final Grade: C',\n",
|
||||
" ' The first part of the query is asking for a Desktop PC, which is the same as the original query. The second part of the query is asking for a size of 10, which is not relevant to the original query. The third part of the query is asking for a minimum price of 0, which is not relevant to the original query. The fourth part of the query is asking for a maximum price of null, which is not relevant to the original query. Therefore, the Predicted Query does not semantically match the original query and is not likely to produce the same answer. Final Grade: F',\n",
|
||||
" ' The original query is asking for cameras with a maximum price of 300. The predicted query is asking for cameras with a maximum price of 500. This means that the predicted query is likely to return more results than the original query, which may include cameras that are not within the budget range. Therefore, the predicted query is not semantically the same as the original query and does not answer the original question. Final Grade: F']"
|
||||
]
|
||||
},
|
||||
"execution_count": 15,
|
||||
"metadata": {},
|
||||
"output_type": "execute_result"
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
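"# Grade each predicted API query against its ground-truth query\n",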
"request_eval_results = []\n",
|
||||
"for question, predict_query, truth_query in list(\n",
|
||||
" zip(questions, predicted_queries, truth_queries)\n",
|
||||
"):\n",
|
||||
" eval_output = eval_chain.run(\n",
|
||||
" question=question,\n",
|
||||
" truth_query=truth_query,\n",
|
||||
" predict_query=predict_query,\n",
|
||||
" )\n",
|
||||
" request_eval_results.append(eval_output)\n",
|
||||
"request_eval_results"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 16,
|
||||
"id": "0d76f8ba",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"import re\n",
|
||||
"from typing import List\n",
|
||||
"\n",
|
||||
"\n",
|
||||
"# Parse the evaluation chain responses into a rubric\n",
|
||||
"def parse_eval_results(results: List[str]) -> List[float]:\n",
|
||||
" rubric = {\"A\": 1.0, \"B\": 0.75, \"C\": 0.5, \"D\": 0.25, \"F\": 0}\n",
|
||||
" return [rubric[re.search(r\"Final Grade: (\\w+)\", res).group(1)] for res in results]\n",
|
||||
"\n",
|
||||
"\n",
|
||||
"parsed_results = parse_eval_results(request_eval_results)\n",
|
||||
"# Collect the scores for a final evaluation table\n",
|
||||
"scores[\"request_synthesizer\"].extend(parsed_results)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "6f3ee8ea",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Evaluate the Response Chain\n",
|
||||
"\n",
|
||||
"The second component translated the structured API response to a natural language response.\n",
|
||||
"Evaluate this against the user's original question."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 17,
|
||||
"id": "8b97847c",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from langchain.prompts import PromptTemplate\n",
|
||||
"\n",
|
||||
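"# Prompt for grading the final natural-language answer against the raw API response\n",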
"template = \"\"\"You are trying to answer the following question by querying an API:\n",
|
||||
"\n",
|
||||
"> Question: {question}\n",
|
||||
"\n",
|
||||
"The API returned a response of:\n",
|
||||
"\n",
|
||||
"> API result: {api_response}\n",
|
||||
"\n",
|
||||
"Your response to the user: {answer}\n",
|
||||
"\n",
|
||||
"Please evaluate the accuracy and utility of your response to the user's original question, conditioned on the information available.\n",
|
||||
"Give a letter grade of either an A, B, C, D, or F, along with an explanation of why. End the evaluation with 'Final Grade: <the letter>'\n",
|
||||
"\n",
|
||||
"> Explanation: Let's think step by step.\"\"\"\n",
|
||||
"\n",
|
||||
"prompt = PromptTemplate.from_template(template)\n",
|
||||
"\n",
|
||||
"eval_chain = LLMChain(llm=llm, prompt=prompt, verbose=verbose)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 18,
|
||||
"id": "642852ce",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"# Extract the API responses from the chain\n",
|
||||
"api_responses = [\n",
|
||||
" output[\"intermediate_steps\"][\"response_text\"] for output in chain_outputs\n",
|
||||
"]"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 19,
|
||||
"id": "08a5eb4f",
|
||||
"metadata": {
|
||||
"scrolled": true
|
||||
},
|
||||
"outputs": [
|
||||
{
|
||||
"data": {
|
||||
"text/plain": [
|
||||
"[' The original query is asking for all iPhone models, so the \"q\" parameter is correct. The \"max_price\" parameter is also correct, as it is set to null, meaning that no maximum price is set. The predicted query adds two additional parameters, \"size\" and \"min_price\". The \"size\" parameter is not necessary, as it is not relevant to the question being asked. The \"min_price\" parameter is also not necessary, as it is not relevant to the question being asked and it is set to 0, which is the default value. Therefore, the predicted query is not semantically the same as the original query and is not likely to produce the same answer. Final Grade: D',\n",
|
||||
" ' The original query is asking for laptops with a maximum price of 300. The predicted query is asking for laptops with a minimum price of 0 and a maximum price of 500. This means that the predicted query is likely to return more results than the original query, as it is asking for a wider range of prices. Therefore, the predicted query is not semantically the same as the original query, and it is not likely to produce the same answer. Final Grade: F',\n",
|
||||
" \" The first two parameters are the same, so that's good. The third parameter is different, but it's not necessary for the query, so that's not a problem. The fourth parameter is the problem. The original query specifies a maximum price of 500, while the predicted query specifies a maximum price of null. This means that the predicted query will not limit the results to the cheapest gaming PCs, so it is not semantically the same as the original query. Final Grade: F\",\n",
|
||||
" ' The original query is asking for tablets under $400, so the first two parameters are correct. The predicted query also includes the parameters \"size\" and \"min_price\", which are not necessary for the original query. The \"size\" parameter is not relevant to the question, and the \"min_price\" parameter is redundant since the original query already specifies a maximum price. Therefore, the predicted query is not semantically the same as the original query and is not likely to produce the same answer. Final Grade: D',\n",
|
||||
" ' The original query is asking for headphones with no maximum price, so the predicted query is not semantically the same because it has a maximum price of 500. The predicted query also has a size of 10, which is not specified in the original query. Therefore, the predicted query is not semantically the same as the original query. Final Grade: F',\n",
|
||||
" \" The original query is asking for the top rated laptops, so the 'size' parameter should be set to 10 to get the top 10 results. The 'min_price' parameter should be set to 0 to get results from all price ranges. The 'max_price' parameter should be set to null to get results from all price ranges. The 'q' parameter should be set to 'laptop' to get results related to laptops. All of these parameters are present in the predicted query, so it is semantically the same as the original query. Final Grade: A\",\n",
|
||||
" ' The original query is asking for shoes, so the predicted query is asking for the same thing. The original query does not specify a size, so the predicted query is not adding any additional information. The original query does not specify a price range, so the predicted query is adding additional information that is not necessary. Therefore, the predicted query is not semantically the same as the original query and is likely to produce different results. Final Grade: D',\n",
|
||||
" ' The original query is asking for a skirt, so the predicted query is asking for the same thing. The predicted query also adds additional parameters such as size and price range, which could help narrow down the results. However, the size parameter is not necessary for the query to be successful, and the price range is too narrow. Therefore, the predicted query is not as effective as the original query. Final Grade: C',\n",
|
||||
" ' The first part of the query is asking for a Desktop PC, which is the same as the original query. The second part of the query is asking for a size of 10, which is not relevant to the original query. The third part of the query is asking for a minimum price of 0, which is not relevant to the original query. The fourth part of the query is asking for a maximum price of null, which is not relevant to the original query. Therefore, the Predicted Query does not semantically match the original query and is not likely to produce the same answer. Final Grade: F',\n",
|
||||
" ' The original query is asking for cameras with a maximum price of 300. The predicted query is asking for cameras with a maximum price of 500. This means that the predicted query is likely to return more results than the original query, which may include cameras that are not within the budget range. Therefore, the predicted query is not semantically the same as the original query and does not answer the original question. Final Grade: F',\n",
|
||||
" ' The user asked a question about what iPhone models are available, and the API returned a response with 10 different models. The response provided by the user accurately listed all 10 models, so the accuracy of the response is A+. The utility of the response is also A+ since the user was able to get the exact information they were looking for. Final Grade: A+',\n",
|
||||
" \" The API response provided a list of laptops with their prices and attributes. The user asked if there were any budget laptops, and the response provided a list of laptops that are all priced under $500. Therefore, the response was accurate and useful in answering the user's question. Final Grade: A\",\n",
|
||||
" \" The API response provided the name, price, and URL of the product, which is exactly what the user asked for. The response also provided additional information about the product's attributes, which is useful for the user to make an informed decision. Therefore, the response is accurate and useful. Final Grade: A\",\n",
|
||||
" \" The API response provided a list of tablets that are under $400. The response accurately answered the user's question. Additionally, the response provided useful information such as the product name, price, and attributes. Therefore, the response was accurate and useful. Final Grade: A\",\n",
|
||||
" \" The API response provided a list of headphones with their respective prices and attributes. The user asked for the best headphones, so the response should include the best headphones based on the criteria provided. The response provided a list of headphones that are all from the same brand (Apple) and all have the same type of headphone (True Wireless, In-Ear). This does not provide the user with enough information to make an informed decision about which headphones are the best. Therefore, the response does not accurately answer the user's question. Final Grade: F\",\n",
|
||||
" ' The API response provided a list of laptops with their attributes, which is exactly what the user asked for. The response provided a comprehensive list of the top rated laptops, which is what the user was looking for. The response was accurate and useful, providing the user with the information they needed. Final Grade: A',\n",
|
||||
" ' The API response provided a list of shoes from both Adidas and Nike, which is exactly what the user asked for. The response also included the product name, price, and attributes for each shoe, which is useful information for the user to make an informed decision. The response also included links to the products, which is helpful for the user to purchase the shoes. Therefore, the response was accurate and useful. Final Grade: A',\n",
|
||||
" \" The API response provided a list of skirts that could potentially meet the user's needs. The response also included the name, price, and attributes of each skirt. This is a great start, as it provides the user with a variety of options to choose from. However, the response does not provide any images of the skirts, which would have been helpful for the user to make a decision. Additionally, the response does not provide any information about the availability of the skirts, which could be important for the user. \\n\\nFinal Grade: B\",\n",
|
||||
" ' The user asked for a professional desktop PC with no budget constraints. The API response provided a list of products that fit the criteria, including the Skytech Archangel Gaming Computer PC Desktop, the CyberPowerPC Gamer Master Gaming Desktop, and the ASUS ROG Strix G10DK-RS756. The response accurately suggested these three products as they all offer powerful processors and plenty of RAM. Therefore, the response is accurate and useful. Final Grade: A',\n",
|
||||
" \" The API response provided a list of cameras with their prices, which is exactly what the user asked for. The response also included additional information such as features and memory cards, which is not necessary for the user's question but could be useful for further research. The response was accurate and provided the user with the information they needed. Final Grade: A\"]"
|
||||
]
|
||||
},
|
||||
"execution_count": 19,
|
||||
"metadata": {},
|
||||
"output_type": "execute_result"
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"# Run the grader chain\n",
|
||||
"response_eval_results = []\n",
|
||||
"for question, api_response, answer in list(zip(questions, api_responses, answers)):\n",
|
||||
" request_eval_results.append(\n",
|
||||
" eval_chain.run(question=question, api_response=api_response, answer=answer)\n",
|
||||
" )\n",
|
||||
"request_eval_results"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 20,
|
||||
"id": "a144aa9d",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"# Reusing the rubric from above, parse the evaluation chain responses\n",
|
||||
"parsed_response_results = parse_eval_results(request_eval_results)\n",
|
||||
"# Collect the scores for a final evaluation table\n",
|
||||
"scores[\"result_synthesizer\"].extend(parsed_response_results)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 21,
|
||||
"id": "e95042bc",
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"name": "stdout",
|
||||
"output_type": "stream",
|
||||
"text": [
|
||||
"Metric \tMin \tMean \tMax \n",
|
||||
"completed \t1.00 \t1.00 \t1.00 \n",
|
||||
"request_synthesizer \t0.00 \t0.23 \t1.00 \n",
|
||||
"result_synthesizer \t0.00 \t0.55 \t1.00 \n"
|
||||
]
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"# Print out Score statistics for the evaluation session\n",
|
||||
"header = \"{:<20}\\t{:<10}\\t{:<10}\\t{:<10}\".format(\"Metric\", \"Min\", \"Mean\", \"Max\")\n",
|
||||
"print(header)\n",
|
||||
"for metric, metric_scores in scores.items():\n",
|
||||
" mean_scores = (\n",
|
||||
" sum(metric_scores) / len(metric_scores)\n",
|
||||
" if len(metric_scores) > 0\n",
|
||||
" else float(\"nan\")\n",
|
||||
" )\n",
|
||||
" row = \"{:<20}\\t{:<10.2f}\\t{:<10.2f}\\t{:<10.2f}\".format(\n",
|
||||
" metric, min(metric_scores), mean_scores, max(metric_scores)\n",
|
||||
" )\n",
|
||||
" print(row)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 22,
|
||||
"id": "03fe96af",
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"data": {
|
||||
"text/plain": [
|
||||
"[]"
|
||||
]
|
||||
},
|
||||
"execution_count": 22,
|
||||
"metadata": {},
|
||||
"output_type": "execute_result"
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"# Re-show the examples for which the chain failed to complete\n",
|
||||
"failed_examples"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "2bb3636d",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Generating Test Datasets\n",
|
||||
"\n",
|
||||
"To evaluate a chain against your own endpoint, you'll want to generate a test dataset that's conforms to the API.\n",
|
||||
"\n",
|
||||
"This section provides an overview of how to bootstrap the process.\n",
|
||||
"\n",
|
||||
"First, we'll parse the OpenAPI Spec. For this example, we'll [Speak](https://www.speak.com/)'s OpenAPI specification."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 23,
|
||||
"id": "a453eb93",
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"name": "stderr",
|
||||
"output_type": "stream",
|
||||
"text": [
|
||||
"Attempting to load an OpenAPI 3.0.1 spec. This may result in degraded performance. Convert your OpenAPI spec to 3.1.* spec for better support.\n",
|
||||
"Attempting to load an OpenAPI 3.0.1 spec. This may result in degraded performance. Convert your OpenAPI spec to 3.1.* spec for better support.\n"
|
||||
]
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"# Load and parse the OpenAPI Spec\n",
|
||||
"spec = OpenAPISpec.from_url(\"https://api.speak.com/openapi.yaml\")"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 24,
|
||||
"id": "bb65ffe8",
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"data": {
|
||||
"text/plain": [
|
||||
"['/v1/public/openai/explain-phrase',\n",
|
||||
" '/v1/public/openai/explain-task',\n",
|
||||
" '/v1/public/openai/translate']"
|
||||
]
|
||||
},
|
||||
"execution_count": 24,
|
||||
"metadata": {},
|
||||
"output_type": "execute_result"
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"# List the paths in the OpenAPI Spec\n",
|
||||
"paths = sorted(spec.paths.keys())\n",
|
||||
"paths"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 25,
|
||||
"id": "0988f01b",
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"data": {
|
||||
"text/plain": [
|
||||
"['post']"
|
||||
]
|
||||
},
|
||||
"execution_count": 25,
|
||||
"metadata": {},
|
||||
"output_type": "execute_result"
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"# See which HTTP Methods are available for a given path\n",
|
||||
"methods = spec.get_methods_for_path(\"/v1/public/openai/explain-task\")\n",
|
||||
"methods"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 26,
|
||||
"id": "e9ef0a77",
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"name": "stdout",
|
||||
"output_type": "stream",
|
||||
"text": [
|
||||
"type explainTask = (_: {\n",
|
||||
"/* Description of the task that the user wants to accomplish or do. For example, \"tell the waiter they messed up my order\" or \"compliment someone on their shirt\" */\n",
|
||||
" task_description?: string,\n",
|
||||
"/* The foreign language that the user is learning and asking about. The value can be inferred from question - for example, if the user asks \"how do i ask a girl out in mexico city\", the value should be \"Spanish\" because of Mexico City. Always use the full name of the language (e.g. Spanish, French). */\n",
|
||||
" learning_language?: string,\n",
|
||||
"/* The user's native language. Infer this value from the language the user asked their question in. Always use the full name of the language (e.g. Spanish, French). */\n",
|
||||
" native_language?: string,\n",
|
||||
"/* A description of any additional context in the user's question that could affect the explanation - e.g. setting, scenario, situation, tone, speaking style and formality, usage notes, or any other qualifiers. */\n",
|
||||
" additional_context?: string,\n",
|
||||
"/* Full text of the user's question. */\n",
|
||||
" full_query?: string,\n",
|
||||
"}) => any;\n"
|
||||
]
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"# Load a single endpoint operation\n",
|
||||
"operation = APIOperation.from_openapi_spec(\n",
|
||||
" spec, \"/v1/public/openai/explain-task\", \"post\"\n",
|
||||
")\n",
|
||||
"\n",
|
||||
"# The operation can be serialized as typescript\n",
|
||||
"print(operation.to_typescript())"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 27,
|
||||
"id": "f1186b6d",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"# Compress the service definition to avoid leaking too much input structure to the sample data\n",
|
||||
"template = \"\"\"In 20 words or less, what does this service accomplish?\n",
|
||||
"{spec}\n",
|
||||
"\n",
|
||||
"Function: It's designed to \"\"\"\n",
|
||||
"prompt = PromptTemplate.from_template(template)\n",
|
||||
"generation_chain = LLMChain(llm=llm, prompt=prompt)\n",
|
||||
"purpose = generation_chain.run(spec=operation.to_typescript())"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 28,
|
||||
"id": "a594406a",
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"data": {
|
||||
"text/plain": [
|
||||
"[\"Can you explain how to say 'hello' in Spanish?\",\n",
|
||||
" \"I need help understanding the French word for 'goodbye'.\",\n",
|
||||
" \"Can you tell me how to say 'thank you' in German?\",\n",
|
||||
" \"I'm trying to learn the Italian word for 'please'.\",\n",
|
||||
" \"Can you help me with the pronunciation of 'yes' in Portuguese?\",\n",
|
||||
" \"I'm looking for the Dutch word for 'no'.\",\n",
|
||||
" \"Can you explain the meaning of 'hello' in Japanese?\",\n",
|
||||
" \"I need help understanding the Russian word for 'thank you'.\",\n",
|
||||
" \"Can you tell me how to say 'goodbye' in Chinese?\",\n",
|
||||
" \"I'm trying to learn the Arabic word for 'please'.\"]"
|
||||
]
|
||||
},
|
||||
"execution_count": 28,
|
||||
"metadata": {},
|
||||
"output_type": "execute_result"
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"template = \"\"\"Write a list of {num_to_generate} unique messages users might send to a service designed to{purpose} They must each be completely unique.\n",
|
||||
"\n",
|
||||
"1.\"\"\"\n",
|
||||
"\n",
|
||||
"\n",
|
||||
"def parse_list(text: str) -> List[str]:\n",
|
||||
" # Match lines starting with a number then period\n",
|
||||
" # Strip leading and trailing whitespace\n",
|
||||
" matches = re.findall(r\"^\\d+\\. \", text)\n",
|
||||
" return [re.sub(r\"^\\d+\\. \", \"\", q).strip().strip('\"') for q in text.split(\"\\n\")]\n",
|
||||
"\n",
|
||||
"\n",
|
||||
"num_to_generate = 10 # How many examples to use for this test set.\n",
|
||||
"prompt = PromptTemplate.from_template(template)\n",
|
||||
"generation_chain = LLMChain(llm=llm, prompt=prompt)\n",
|
||||
"text = generation_chain.run(purpose=purpose, num_to_generate=num_to_generate)\n",
|
||||
"# Strip preceding numeric bullets\n",
|
||||
"queries = parse_list(text)\n",
|
||||
"queries"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 29,
|
||||
"id": "8dc60f43",
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"data": {
|
||||
"text/plain": [
|
||||
"['{\"task_description\": \"say \\'hello\\'\", \"learning_language\": \"Spanish\", \"native_language\": \"English\", \"full_query\": \"Can you explain how to say \\'hello\\' in Spanish?\"}',\n",
|
||||
" '{\"task_description\": \"understanding the French word for \\'goodbye\\'\", \"learning_language\": \"French\", \"native_language\": \"English\", \"full_query\": \"I need help understanding the French word for \\'goodbye\\'.\"}',\n",
|
||||
" '{\"task_description\": \"say \\'thank you\\'\", \"learning_language\": \"German\", \"native_language\": \"English\", \"full_query\": \"Can you tell me how to say \\'thank you\\' in German?\"}',\n",
|
||||
" '{\"task_description\": \"Learn the Italian word for \\'please\\'\", \"learning_language\": \"Italian\", \"native_language\": \"English\", \"full_query\": \"I\\'m trying to learn the Italian word for \\'please\\'.\"}',\n",
|
||||
" '{\"task_description\": \"Help with pronunciation of \\'yes\\' in Portuguese\", \"learning_language\": \"Portuguese\", \"native_language\": \"English\", \"full_query\": \"Can you help me with the pronunciation of \\'yes\\' in Portuguese?\"}',\n",
|
||||
" '{\"task_description\": \"Find the Dutch word for \\'no\\'\", \"learning_language\": \"Dutch\", \"native_language\": \"English\", \"full_query\": \"I\\'m looking for the Dutch word for \\'no\\'.\"}',\n",
|
||||
" '{\"task_description\": \"Explain the meaning of \\'hello\\' in Japanese\", \"learning_language\": \"Japanese\", \"native_language\": \"English\", \"full_query\": \"Can you explain the meaning of \\'hello\\' in Japanese?\"}',\n",
|
||||
" '{\"task_description\": \"understanding the Russian word for \\'thank you\\'\", \"learning_language\": \"Russian\", \"native_language\": \"English\", \"full_query\": \"I need help understanding the Russian word for \\'thank you\\'.\"}',\n",
|
||||
" '{\"task_description\": \"say goodbye\", \"learning_language\": \"Chinese\", \"native_language\": \"English\", \"full_query\": \"Can you tell me how to say \\'goodbye\\' in Chinese?\"}',\n",
|
||||
" '{\"task_description\": \"Learn the Arabic word for \\'please\\'\", \"learning_language\": \"Arabic\", \"native_language\": \"English\", \"full_query\": \"I\\'m trying to learn the Arabic word for \\'please\\'.\"}']"
|
||||
]
|
||||
},
|
||||
"execution_count": 29,
|
||||
"metadata": {},
|
||||
"output_type": "execute_result"
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"# Define the generation chain to get hypotheses\n",
|
||||
"api_chain = OpenAPIEndpointChain.from_api_operation(\n",
|
||||
" operation,\n",
|
||||
" llm,\n",
|
||||
" requests=Requests(),\n",
|
||||
" verbose=verbose,\n",
|
||||
" return_intermediate_steps=True, # Return request and response text\n",
|
||||
")\n",
|
||||
"\n",
|
||||
"predicted_outputs = [api_chain(query) for query in queries]\n",
|
||||
"request_args = [\n",
|
||||
" output[\"intermediate_steps\"][\"request_args\"] for output in predicted_outputs\n",
|
||||
"]\n",
|
||||
"\n",
|
||||
"# Show the generated request\n",
|
||||
"request_args"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 30,
|
||||
"id": "b727e28e",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"## AI Assisted Correction\n",
|
||||
"correction_template = \"\"\"Correct the following API request based on the user's feedback. If the user indicates no changes are needed, output the original without making any changes.\n",
|
||||
"\n",
|
||||
"REQUEST: {request}\n",
|
||||
"\n",
|
||||
"User Feedback / requested changes: {user_feedback}\n",
|
||||
"\n",
|
||||
"Finalized Request: \"\"\"\n",
|
||||
"\n",
|
||||
"prompt = PromptTemplate.from_template(correction_template)\n",
|
||||
"correction_chain = LLMChain(llm=llm, prompt=prompt)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 31,
|
||||
"id": "c1f4d71f",
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"name": "stdout",
|
||||
"output_type": "stream",
|
||||
"text": [
|
||||
"Query: Can you explain how to say 'hello' in Spanish?\n",
|
||||
"Request: {\"task_description\": \"say 'hello'\", \"learning_language\": \"Spanish\", \"native_language\": \"English\", \"full_query\": \"Can you explain how to say 'hello' in Spanish?\"}\n",
|
||||
"Requested changes: \n",
|
||||
"Query: I need help understanding the French word for 'goodbye'.\n",
|
||||
"Request: {\"task_description\": \"understanding the French word for 'goodbye'\", \"learning_language\": \"French\", \"native_language\": \"English\", \"full_query\": \"I need help understanding the French word for 'goodbye'.\"}\n",
|
||||
"Requested changes: \n",
|
||||
"Query: Can you tell me how to say 'thank you' in German?\n",
|
||||
"Request: {\"task_description\": \"say 'thank you'\", \"learning_language\": \"German\", \"native_language\": \"English\", \"full_query\": \"Can you tell me how to say 'thank you' in German?\"}\n",
|
||||
"Requested changes: \n",
|
||||
"Query: I'm trying to learn the Italian word for 'please'.\n",
|
||||
"Request: {\"task_description\": \"Learn the Italian word for 'please'\", \"learning_language\": \"Italian\", \"native_language\": \"English\", \"full_query\": \"I'm trying to learn the Italian word for 'please'.\"}\n",
|
||||
"Requested changes: \n",
|
||||
"Query: Can you help me with the pronunciation of 'yes' in Portuguese?\n",
|
||||
"Request: {\"task_description\": \"Help with pronunciation of 'yes' in Portuguese\", \"learning_language\": \"Portuguese\", \"native_language\": \"English\", \"full_query\": \"Can you help me with the pronunciation of 'yes' in Portuguese?\"}\n",
|
||||
"Requested changes: \n",
|
||||
"Query: I'm looking for the Dutch word for 'no'.\n",
|
||||
"Request: {\"task_description\": \"Find the Dutch word for 'no'\", \"learning_language\": \"Dutch\", \"native_language\": \"English\", \"full_query\": \"I'm looking for the Dutch word for 'no'.\"}\n",
|
||||
"Requested changes: \n",
|
||||
"Query: Can you explain the meaning of 'hello' in Japanese?\n",
|
||||
"Request: {\"task_description\": \"Explain the meaning of 'hello' in Japanese\", \"learning_language\": \"Japanese\", \"native_language\": \"English\", \"full_query\": \"Can you explain the meaning of 'hello' in Japanese?\"}\n",
|
||||
"Requested changes: \n",
|
||||
"Query: I need help understanding the Russian word for 'thank you'.\n",
|
||||
"Request: {\"task_description\": \"understanding the Russian word for 'thank you'\", \"learning_language\": \"Russian\", \"native_language\": \"English\", \"full_query\": \"I need help understanding the Russian word for 'thank you'.\"}\n",
|
||||
"Requested changes: \n",
|
||||
"Query: Can you tell me how to say 'goodbye' in Chinese?\n",
|
||||
"Request: {\"task_description\": \"say goodbye\", \"learning_language\": \"Chinese\", \"native_language\": \"English\", \"full_query\": \"Can you tell me how to say 'goodbye' in Chinese?\"}\n",
|
||||
"Requested changes: \n",
|
||||
"Query: I'm trying to learn the Arabic word for 'please'.\n",
|
||||
"Request: {\"task_description\": \"Learn the Arabic word for 'please'\", \"learning_language\": \"Arabic\", \"native_language\": \"English\", \"full_query\": \"I'm trying to learn the Arabic word for 'please'.\"}\n",
|
||||
"Requested changes: \n"
|
||||
]
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"ground_truth = []\n",
|
||||
"for query, request_arg in list(zip(queries, request_args)):\n",
|
||||
" feedback = input(f\"Query: {query}\\nRequest: {request_arg}\\nRequested changes: \")\n",
|
||||
" if feedback == \"n\" or feedback == \"none\" or not feedback:\n",
|
||||
" ground_truth.append(request_arg)\n",
|
||||
" continue\n",
|
||||
" resolved = correction_chain.run(request=request_arg, user_feedback=feedback)\n",
|
||||
" ground_truth.append(resolved.strip())\n",
|
||||
" print(\"Updated request:\", resolved)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "19d68882",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"**Now you can use the `ground_truth` as shown above in [Evaluate the Requests Chain](#Evaluate-the-requests-chain)!**"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 32,
|
||||
"id": "5a596176",
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"data": {
|
||||
"text/plain": [
|
||||
"['{\"task_description\": \"say \\'hello\\'\", \"learning_language\": \"Spanish\", \"native_language\": \"English\", \"full_query\": \"Can you explain how to say \\'hello\\' in Spanish?\"}',\n",
|
||||
" '{\"task_description\": \"understanding the French word for \\'goodbye\\'\", \"learning_language\": \"French\", \"native_language\": \"English\", \"full_query\": \"I need help understanding the French word for \\'goodbye\\'.\"}',\n",
|
||||
" '{\"task_description\": \"say \\'thank you\\'\", \"learning_language\": \"German\", \"native_language\": \"English\", \"full_query\": \"Can you tell me how to say \\'thank you\\' in German?\"}',\n",
|
||||
" '{\"task_description\": \"Learn the Italian word for \\'please\\'\", \"learning_language\": \"Italian\", \"native_language\": \"English\", \"full_query\": \"I\\'m trying to learn the Italian word for \\'please\\'.\"}',\n",
|
||||
" '{\"task_description\": \"Help with pronunciation of \\'yes\\' in Portuguese\", \"learning_language\": \"Portuguese\", \"native_language\": \"English\", \"full_query\": \"Can you help me with the pronunciation of \\'yes\\' in Portuguese?\"}',\n",
|
||||
" '{\"task_description\": \"Find the Dutch word for \\'no\\'\", \"learning_language\": \"Dutch\", \"native_language\": \"English\", \"full_query\": \"I\\'m looking for the Dutch word for \\'no\\'.\"}',\n",
|
||||
" '{\"task_description\": \"Explain the meaning of \\'hello\\' in Japanese\", \"learning_language\": \"Japanese\", \"native_language\": \"English\", \"full_query\": \"Can you explain the meaning of \\'hello\\' in Japanese?\"}',\n",
|
||||
" '{\"task_description\": \"understanding the Russian word for \\'thank you\\'\", \"learning_language\": \"Russian\", \"native_language\": \"English\", \"full_query\": \"I need help understanding the Russian word for \\'thank you\\'.\"}',\n",
|
||||
" '{\"task_description\": \"say goodbye\", \"learning_language\": \"Chinese\", \"native_language\": \"English\", \"full_query\": \"Can you tell me how to say \\'goodbye\\' in Chinese?\"}',\n",
|
||||
" '{\"task_description\": \"Learn the Arabic word for \\'please\\'\", \"learning_language\": \"Arabic\", \"native_language\": \"English\", \"full_query\": \"I\\'m trying to learn the Arabic word for \\'please\\'.\"}']"
|
||||
]
|
||||
},
|
||||
"execution_count": 32,
|
||||
"metadata": {},
|
||||
"output_type": "execute_result"
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"# Now you have a new ground truth set to use as shown above!\n",
|
||||
"ground_truth"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"id": "b7fe9dfa",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": []
|
||||
}
|
||||
],
|
||||
"metadata": {
|
||||
"kernelspec": {
|
||||
"display_name": "Python 3 (ipykernel)",
|
||||
"language": "python",
|
||||
"name": "python3"
|
||||
},
|
||||
"language_info": {
|
||||
"codemirror_mode": {
|
||||
"name": "ipython",
|
||||
"version": 3
|
||||
},
|
||||
"file_extension": ".py",
|
||||
"mimetype": "text/x-python",
|
||||
"name": "python",
|
||||
"nbconvert_exporter": "python",
|
||||
"pygments_lexer": "ipython3",
|
||||
"version": "3.11.2"
|
||||
}
|
||||
},
|
||||
"nbformat": 4,
|
||||
"nbformat_minor": 5
|
||||
}
|
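The cells above implement a small human-in-the-loop pattern: generate a request, show it to a reviewer, and only invoke the correction chain when the reviewer asks for a change. Below is a minimal sketch of that pattern factored into a reusable helper; it assumes the `correction_chain`, `queries`, and `request_args` objects defined above, and the `collect_ground_truth` name is ours, not a LangChain API.

```python
# A compact version of the human-in-the-loop correction step shown above.
# Assumes `correction_chain`, `queries`, and `request_args` exist as in the notebook;
# `collect_ground_truth` is a hypothetical helper name, not a LangChain API.
def collect_ground_truth(queries, request_args, correction_chain):
    ground_truth = []
    for query, request_arg in zip(queries, request_args):
        feedback = input(f"Query: {query}\nRequest: {request_arg}\nRequested changes: ")
        if feedback.strip().lower() in ("", "n", "none"):
            # No changes requested: keep the model-generated request as-is.
            ground_truth.append(request_arg)
            continue
        # Otherwise, let the LLM apply the reviewer's requested edits.
        resolved = correction_chain.run(request=request_arg, user_feedback=feedback)
        ground_truth.append(resolved.strip())
    return ground_truth
```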
@ -1,372 +0,0 @@
|
||||
{
|
||||
"cells": [
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "984169ca",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"# Question Answering Benchmarking: Paul Graham Essay\n",
|
||||
"\n",
|
||||
"Here we go over how to benchmark performance on a question answering task over a Paul Graham essay.\n",
|
||||
"\n",
|
||||
"It is highly recommended that you do any evaluation/benchmarking with tracing enabled. See [here](https://python.langchain.com/docs/modules/callbacks/how_to/tracing) for an explanation of what tracing is and how to set it up."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "8a16b75d",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Loading the data\n",
|
||||
"First, let's load the data."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 2,
|
||||
"id": "5b2d5e98",
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"name": "stderr",
|
||||
"output_type": "stream",
|
||||
"text": [
|
||||
"Found cached dataset json (/Users/harrisonchase/.cache/huggingface/datasets/LangChainDatasets___json/LangChainDatasets--question-answering-paul-graham-76e8f711e038d742/0.0.0/0f7e3662623656454fcd2b650f34e886a7db4b9104504885bd462096cc7a9f51)\n"
|
||||
]
|
||||
},
|
||||
{
|
||||
"data": {
|
||||
"application/vnd.jupyter.widget-view+json": {
|
||||
"model_id": "9264acfe710b4faabf060f0fcf4f7308",
|
||||
"version_major": 2,
|
||||
"version_minor": 0
|
||||
},
|
||||
"text/plain": [
|
||||
" 0%| | 0/1 [00:00<?, ?it/s]"
|
||||
]
|
||||
},
|
||||
"metadata": {},
|
||||
"output_type": "display_data"
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"from langchain.evaluation.loading import load_dataset\n",
|
||||
"\n",
|
||||
"dataset = load_dataset(\"question-answering-paul-graham\")"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "4ab6a716",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Setting up a chain\n",
|
||||
"Now we need to create some pipelines for doing question answering. Step one in that is creating an index over the data in question."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 3,
|
||||
"id": "c18680b5",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from langchain.document_loaders import TextLoader\n",
|
||||
"\n",
|
||||
"loader = TextLoader(\"../../modules/paul_graham_essay.txt\")"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 4,
|
||||
"id": "7f0de2b3",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from langchain.indexes import VectorstoreIndexCreator"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 5,
|
||||
"id": "ef84ff99",
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"name": "stdout",
|
||||
"output_type": "stream",
|
||||
"text": [
|
||||
"Running Chroma using direct local API.\n",
|
||||
"Using DuckDB in-memory for database. Data will be transient.\n"
|
||||
]
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"vectorstore = VectorstoreIndexCreator().from_loaders([loader]).vectorstore"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "f0b5d8f6",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"Now we can create a question answering chain."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 6,
|
||||
"id": "8843cb0c",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from langchain.chains import RetrievalQA\n",
|
||||
"from langchain.llms import OpenAI"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 7,
|
||||
"id": "573719a0",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"chain = RetrievalQA.from_chain_type(\n",
|
||||
" llm=OpenAI(),\n",
|
||||
" chain_type=\"stuff\",\n",
|
||||
" retriever=vectorstore.as_retriever(),\n",
|
||||
" input_key=\"question\",\n",
|
||||
")"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "53b5aa23",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Make a prediction\n",
|
||||
"\n",
|
||||
"First, we can make predictions one datapoint at a time. Doing it at this level of granularity allows use to explore the outputs in detail, and also is a lot cheaper than running over multiple datapoints"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 18,
|
||||
"id": "3f81d951",
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"data": {
|
||||
"text/plain": [
|
||||
"{'question': 'What were the two main things the author worked on before college?',\n",
|
||||
" 'answer': 'The two main things the author worked on before college were writing and programming.',\n",
|
||||
" 'result': ' Writing and programming.'}"
|
||||
]
|
||||
},
|
||||
"execution_count": 18,
|
||||
"metadata": {},
|
||||
"output_type": "execute_result"
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"chain(dataset[0])"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "d0c16cd7",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Make many predictions\n",
|
||||
"Now we can make predictions"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 9,
|
||||
"id": "24b4c66e",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"predictions = chain.apply(dataset)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "49d969fb",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Evaluate performance\n",
|
||||
"Now we can evaluate the predictions. The first thing we can do is look at them by eye."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 10,
|
||||
"id": "1d583f03",
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"data": {
|
||||
"text/plain": [
|
||||
"{'question': 'What were the two main things the author worked on before college?',\n",
|
||||
" 'answer': 'The two main things the author worked on before college were writing and programming.',\n",
|
||||
" 'result': ' Writing and programming.'}"
|
||||
]
|
||||
},
|
||||
"execution_count": 10,
|
||||
"metadata": {},
|
||||
"output_type": "execute_result"
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"predictions[0]"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "4783344b",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"Next, we can use a language model to score them programatically"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 11,
|
||||
"id": "d0a9341d",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from langchain.evaluation.qa import QAEvalChain"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 12,
|
||||
"id": "1612dec1",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"llm = OpenAI(temperature=0)\n",
|
||||
"eval_chain = QAEvalChain.from_llm(llm)\n",
|
||||
"graded_outputs = eval_chain.evaluate(\n",
|
||||
" dataset, predictions, question_key=\"question\", prediction_key=\"result\"\n",
|
||||
")"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "79587806",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"We can add in the graded output to the `predictions` dict and then get a count of the grades."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 13,
|
||||
"id": "2a689df5",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"for i, prediction in enumerate(predictions):\n",
|
||||
" prediction[\"grade\"] = graded_outputs[i][\"text\"]"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 14,
|
||||
"id": "27b61215",
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"data": {
|
||||
"text/plain": [
|
||||
"Counter({' CORRECT': 12, ' INCORRECT': 10})"
|
||||
]
|
||||
},
|
||||
"execution_count": 14,
|
||||
"metadata": {},
|
||||
"output_type": "execute_result"
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"from collections import Counter\n",
|
||||
"\n",
|
||||
"Counter([pred[\"grade\"] for pred in predictions])"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "12fe30f4",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"We can also filter the datapoints to the incorrect examples and look at them."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 15,
|
||||
"id": "47c692a1",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"incorrect = [pred for pred in predictions if pred[\"grade\"] == \" INCORRECT\"]"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 16,
|
||||
"id": "0ef976c1",
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"data": {
|
||||
"text/plain": [
|
||||
"{'question': 'What did the author write their dissertation on?',\n",
|
||||
" 'answer': 'The author wrote their dissertation on applications of continuations.',\n",
|
||||
" 'result': ' The author does not mention what their dissertation was on, so it is not known.',\n",
|
||||
" 'grade': ' INCORRECT'}"
|
||||
]
|
||||
},
|
||||
"execution_count": 16,
|
||||
"metadata": {},
|
||||
"output_type": "execute_result"
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"incorrect[0]"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"id": "7710401a",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": []
|
||||
}
|
||||
],
|
||||
"metadata": {
|
||||
"kernelspec": {
|
||||
"display_name": "Python 3 (ipykernel)",
|
||||
"language": "python",
|
||||
"name": "python3"
|
||||
},
|
||||
"language_info": {
|
||||
"codemirror_mode": {
|
||||
"name": "ipython",
|
||||
"version": 3
|
||||
},
|
||||
"file_extension": ".py",
|
||||
"mimetype": "text/x-python",
|
||||
"name": "python",
|
||||
"nbconvert_exporter": "python",
|
||||
"pygments_lexer": "ipython3",
|
||||
"version": "3.11.2"
|
||||
}
|
||||
},
|
||||
"nbformat": 4,
|
||||
"nbformat_minor": 5
|
||||
}
|
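Condensed, the Paul Graham benchmark above boils down to three steps: index the document and build a retrieval QA chain, run it over the dataset, and grade the results with `QAEvalChain`. Below is a minimal end-to-end sketch under the same assumptions as the notebook (an OpenAI API key in the environment and the essay text at the path used above):

```python
from collections import Counter

from langchain.chains import RetrievalQA
from langchain.document_loaders import TextLoader
from langchain.evaluation.loading import load_dataset
from langchain.evaluation.qa import QAEvalChain
from langchain.indexes import VectorstoreIndexCreator
from langchain.llms import OpenAI

# 1. Load the benchmark dataset and index the source document.
dataset = load_dataset("question-answering-paul-graham")
loader = TextLoader("../../modules/paul_graham_essay.txt")
vectorstore = VectorstoreIndexCreator().from_loaders([loader]).vectorstore

# 2. Run the retrieval QA chain over every datapoint.
chain = RetrievalQA.from_chain_type(
    llm=OpenAI(),
    chain_type="stuff",
    retriever=vectorstore.as_retriever(),
    input_key="question",
)
predictions = chain.apply(dataset)

# 3. Grade the predictions with an LLM and tally the results.
eval_chain = QAEvalChain.from_llm(OpenAI(temperature=0))
graded = eval_chain.evaluate(
    dataset, predictions, question_key="question", prediction_key="result"
)
print(Counter(g["text"].strip() for g in graded))
```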
@ -1,385 +0,0 @@
|
||||
{
|
||||
"cells": [
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "984169ca",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"# Question Answering Benchmarking: State of the Union Address\n",
|
||||
"\n",
|
||||
"Here we go over how to benchmark performance on a question answering task over a state of the union address.\n",
|
||||
"\n",
|
||||
"It is highly reccomended that you do any evaluation/benchmarking with tracing enabled. See [here](https://langchain.readthedocs.io/en/latest/tracing.html) for an explanation of what tracing is and how to set it up."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 1,
|
||||
"id": "f127fb04",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"# Comment this out if you are NOT using tracing\n",
|
||||
"import os\n",
|
||||
"\n",
|
||||
"os.environ[\"LANGCHAIN_HANDLER\"] = \"langchain\""
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "8a16b75d",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Loading the data\n",
|
||||
"First, let's load the data."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 2,
|
||||
"id": "5b2d5e98",
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"name": "stderr",
|
||||
"output_type": "stream",
|
||||
"text": [
|
||||
"Found cached dataset json (/Users/harrisonchase/.cache/huggingface/datasets/LangChainDatasets___json/LangChainDatasets--question-answering-state-of-the-union-a7e5a3b2db4f440d/0.0.0/0f7e3662623656454fcd2b650f34e886a7db4b9104504885bd462096cc7a9f51)\n"
|
||||
]
|
||||
},
|
||||
{
|
||||
"data": {
|
||||
"application/vnd.jupyter.widget-view+json": {
|
||||
"model_id": "",
|
||||
"version_major": 2,
|
||||
"version_minor": 0
|
||||
},
|
||||
"text/plain": [
|
||||
" 0%| | 0/1 [00:00<?, ?it/s]"
|
||||
]
|
||||
},
|
||||
"metadata": {},
|
||||
"output_type": "display_data"
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"from langchain.evaluation.loading import load_dataset\n",
|
||||
"\n",
|
||||
"dataset = load_dataset(\"question-answering-state-of-the-union\")"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "4ab6a716",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Setting up a chain\n",
|
||||
"Now we need to create some pipelines for doing question answering. Step one in that is creating an index over the data in question."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 2,
|
||||
"id": "c18680b5",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from langchain.document_loaders import TextLoader\n",
|
||||
"\n",
|
||||
"loader = TextLoader(\"../../modules/state_of_the_union.txt\")"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 3,
|
||||
"id": "7f0de2b3",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from langchain.indexes import VectorstoreIndexCreator"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 4,
|
||||
"id": "ef84ff99",
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"name": "stdout",
|
||||
"output_type": "stream",
|
||||
"text": [
|
||||
"Running Chroma using direct local API.\n",
|
||||
"Using DuckDB in-memory for database. Data will be transient.\n"
|
||||
]
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"vectorstore = VectorstoreIndexCreator().from_loaders([loader]).vectorstore"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "f0b5d8f6",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"Now we can create a question answering chain."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 5,
|
||||
"id": "8843cb0c",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from langchain.chains import RetrievalQA\n",
|
||||
"from langchain.llms import OpenAI"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 6,
|
||||
"id": "573719a0",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"chain = RetrievalQA.from_chain_type(\n",
|
||||
" llm=OpenAI(),\n",
|
||||
" chain_type=\"stuff\",\n",
|
||||
" retriever=vectorstore.as_retriever(),\n",
|
||||
" input_key=\"question\",\n",
|
||||
")"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "37d669e9",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Make a prediction\n",
|
||||
"\n",
|
||||
"First, we can make predictions one datapoint at a time. Doing it at this level of granularity allows use to explore the outputs in detail, and also is a lot cheaper than running over multiple datapoints"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 17,
|
||||
"id": "3089e409",
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"data": {
|
||||
"text/plain": [
|
||||
"{'question': 'What is the purpose of the NATO Alliance?',\n",
|
||||
" 'answer': 'The purpose of the NATO Alliance is to secure peace and stability in Europe after World War 2.',\n",
|
||||
" 'result': ' The NATO Alliance was created to secure peace and stability in Europe after World War 2.'}"
|
||||
]
|
||||
},
|
||||
"execution_count": 17,
|
||||
"metadata": {},
|
||||
"output_type": "execute_result"
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"chain(dataset[0])"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "d0c16cd7",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Make many predictions\n",
|
||||
"Now we can make predictions"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 7,
|
||||
"id": "24b4c66e",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"predictions = chain.apply(dataset)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "49d969fb",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Evaluate performance\n",
|
||||
"Now we can evaluate the predictions. The first thing we can do is look at them by eye."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 8,
|
||||
"id": "1d583f03",
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"data": {
|
||||
"text/plain": [
|
||||
"{'question': 'What is the purpose of the NATO Alliance?',\n",
|
||||
" 'answer': 'The purpose of the NATO Alliance is to secure peace and stability in Europe after World War 2.',\n",
|
||||
" 'result': ' The purpose of the NATO Alliance is to secure peace and stability in Europe after World War 2.'}"
|
||||
]
|
||||
},
|
||||
"execution_count": 8,
|
||||
"metadata": {},
|
||||
"output_type": "execute_result"
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"predictions[0]"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "4783344b",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"Next, we can use a language model to score them programatically"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 9,
|
||||
"id": "d0a9341d",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from langchain.evaluation.qa import QAEvalChain"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 10,
|
||||
"id": "1612dec1",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"llm = OpenAI(temperature=0)\n",
|
||||
"eval_chain = QAEvalChain.from_llm(llm)\n",
|
||||
"graded_outputs = eval_chain.evaluate(\n",
|
||||
" dataset, predictions, question_key=\"question\", prediction_key=\"result\"\n",
|
||||
")"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "79587806",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"We can add in the graded output to the `predictions` dict and then get a count of the grades."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 11,
|
||||
"id": "2a689df5",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"for i, prediction in enumerate(predictions):\n",
|
||||
" prediction[\"grade\"] = graded_outputs[i][\"text\"]"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 12,
|
||||
"id": "27b61215",
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"data": {
|
||||
"text/plain": [
|
||||
"Counter({' CORRECT': 7, ' INCORRECT': 4})"
|
||||
]
|
||||
},
|
||||
"execution_count": 12,
|
||||
"metadata": {},
|
||||
"output_type": "execute_result"
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"from collections import Counter\n",
|
||||
"\n",
|
||||
"Counter([pred[\"grade\"] for pred in predictions])"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "12fe30f4",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"We can also filter the datapoints to the incorrect examples and look at them."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 13,
|
||||
"id": "47c692a1",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"incorrect = [pred for pred in predictions if pred[\"grade\"] == \" INCORRECT\"]"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 14,
|
||||
"id": "0ef976c1",
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"data": {
|
||||
"text/plain": [
|
||||
"{'question': 'What is the U.S. Department of Justice doing to combat the crimes of Russian oligarchs?',\n",
|
||||
" 'answer': 'The U.S. Department of Justice is assembling a dedicated task force to go after the crimes of Russian oligarchs.',\n",
|
||||
" 'result': ' The U.S. Department of Justice is assembling a dedicated task force to go after the crimes of Russian oligarchs and is naming a chief prosecutor for pandemic fraud.',\n",
|
||||
" 'grade': ' INCORRECT'}"
|
||||
]
|
||||
},
|
||||
"execution_count": 14,
|
||||
"metadata": {},
|
||||
"output_type": "execute_result"
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"incorrect[0]"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"id": "7710401a",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": []
|
||||
}
|
||||
],
|
||||
"metadata": {
|
||||
"kernelspec": {
|
||||
"display_name": "Python 3 (ipykernel)",
|
||||
"language": "python",
|
||||
"name": "python3"
|
||||
},
|
||||
"language_info": {
|
||||
"codemirror_mode": {
|
||||
"name": "ipython",
|
||||
"version": 3
|
||||
},
|
||||
"file_extension": ".py",
|
||||
"mimetype": "text/x-python",
|
||||
"name": "python",
|
||||
"nbconvert_exporter": "python",
|
||||
"pygments_lexer": "ipython3",
|
||||
"version": "3.9.1"
|
||||
}
|
||||
},
|
||||
"nbformat": 4,
|
||||
"nbformat_minor": 5
|
||||
}
|
@ -1,118 +0,0 @@
|
||||
{
|
||||
"cells": [
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "ee2a3a21",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"# QA Generation\n",
|
||||
"This notebook shows how to use the `QAGenerationChain` to come up with question-answer pairs over a specific document.\n",
|
||||
"This is important because often times you may not have data to evaluate your question-answer system over, so this is a cheap and lightweight way to generate it!"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 1,
|
||||
"id": "33d3f0b4",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from langchain.document_loaders import TextLoader"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 2,
|
||||
"id": "2029a29c",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"loader = TextLoader(\"../../modules/state_of_the_union.txt\")"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 3,
|
||||
"id": "87edb84c",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"doc = loader.load()[0]"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 4,
|
||||
"id": "04125b6d",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from langchain.chat_models import ChatOpenAI\n",
|
||||
"from langchain.chains import QAGenerationChain\n",
|
||||
"\n",
|
||||
"chain = QAGenerationChain.from_llm(ChatOpenAI(temperature=0))"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 8,
|
||||
"id": "4f1593e4",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"qa = chain.run(doc.page_content)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 10,
|
||||
"id": "ee831f92",
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"data": {
|
||||
"text/plain": [
|
||||
"{'question': 'What is the U.S. Department of Justice doing to combat the crimes of Russian oligarchs?',\n",
|
||||
" 'answer': 'The U.S. Department of Justice is assembling a dedicated task force to go after the crimes of Russian oligarchs.'}"
|
||||
]
|
||||
},
|
||||
"execution_count": 10,
|
||||
"metadata": {},
|
||||
"output_type": "execute_result"
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"qa[1]"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"id": "7028754e",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": []
|
||||
}
|
||||
],
|
||||
"metadata": {
|
||||
"kernelspec": {
|
||||
"display_name": "Python 3 (ipykernel)",
|
||||
"language": "python",
|
||||
"name": "python3"
|
||||
},
|
||||
"language_info": {
|
||||
"codemirror_mode": {
|
||||
"name": "ipython",
|
||||
"version": 3
|
||||
},
|
||||
"file_extension": ".py",
|
||||
"mimetype": "text/x-python",
|
||||
"name": "python",
|
||||
"nbconvert_exporter": "python",
|
||||
"pygments_lexer": "ipython3",
|
||||
"version": "3.9.1"
|
||||
}
|
||||
},
|
||||
"nbformat": 4,
|
||||
"nbformat_minor": 5
|
||||
}
|
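A natural follow-up, not shown in the notebook above, is to feed the generated pairs straight into the grading flow used in the other evaluation guides. The sketch below assumes `qa` is the list of question/answer dicts produced by `QAGenerationChain` above; the bare `LLMChain` stands in for whatever system you actually want to benchmark.

```python
from langchain.chains import LLMChain
from langchain.evaluation.qa import QAEvalChain
from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate

llm = OpenAI(temperature=0)

# The system under test: a bare LLM chain, as in the "Question Answering" guide.
prompt = PromptTemplate(
    template="Question: {question}\nAnswer:", input_variables=["question"]
)
answer_chain = LLMChain(llm=llm, prompt=prompt)

# `qa` is the list of {"question": ..., "answer": ...} dicts generated above.
predictions = answer_chain.apply(qa)

# Grade the predictions against the generated answers.
eval_chain = QAEvalChain.from_llm(llm)
graded_outputs = eval_chain.evaluate(
    qa, predictions, question_key="question", prediction_key="text"
)
```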
@ -1,445 +0,0 @@
|
||||
{
|
||||
"cells": [
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "480b7cf8",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"# Question Answering\n",
|
||||
"\n",
|
||||
"This notebook covers how to evaluate generic question answering problems. This is a situation where you have an example containing a question and its corresponding ground truth answer, and you want to measure how well the language model does at answering those questions."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "78e3023b",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Setup\n",
|
||||
"\n",
|
||||
"For demonstration purposes, we will just evaluate a simple question answering system that only evaluates the model's internal knowledge. Please see other notebooks for examples where it evaluates how the model does at question answering over data not present in what the model was trained on."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 1,
|
||||
"id": "96710d50",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from langchain.prompts import PromptTemplate\n",
|
||||
"from langchain.chains import LLMChain\n",
|
||||
"from langchain.llms import OpenAI"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 2,
|
||||
"id": "e33ccf00",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"prompt = PromptTemplate(\n",
|
||||
" template=\"Question: {question}\\nAnswer:\", input_variables=[\"question\"]\n",
|
||||
")"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 3,
|
||||
"id": "172d993a",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"llm = OpenAI(model_name=\"text-davinci-003\", temperature=0)\n",
|
||||
"chain = LLMChain(llm=llm, prompt=prompt)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "0c584440",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Examples\n",
|
||||
"For this purpose, we will just use two simple hardcoded examples, but see other notebooks for tips on how to get and/or generate these examples."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 4,
|
||||
"id": "87de1d84",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"examples = [\n",
|
||||
" {\n",
|
||||
" \"question\": \"Roger has 5 tennis balls. He buys 2 more cans of tennis balls. Each can has 3 tennis balls. How many tennis balls does he have now?\",\n",
|
||||
" \"answer\": \"11\",\n",
|
||||
" },\n",
|
||||
" {\n",
|
||||
" \"question\": 'Is the following sentence plausible? \"Joao Moutinho caught the screen pass in the NFC championship.\"',\n",
|
||||
" \"answer\": \"No\",\n",
|
||||
" },\n",
|
||||
"]"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "143b1155",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Predictions\n",
|
||||
"\n",
|
||||
"We can now make and inspect the predictions for these questions."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 5,
|
||||
"id": "c7bd809c",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"predictions = chain.apply(examples)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 6,
|
||||
"id": "f06dceab",
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"data": {
|
||||
"text/plain": [
|
||||
"[{'text': ' 11 tennis balls'},\n",
|
||||
" {'text': ' No, this sentence is not plausible. Joao Moutinho is a professional soccer player, not an American football player, so it is not likely that he would be catching a screen pass in the NFC championship.'}]"
|
||||
]
|
||||
},
|
||||
"execution_count": 6,
|
||||
"metadata": {},
|
||||
"output_type": "execute_result"
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"predictions"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "45cc2f9d",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Evaluation\n",
|
||||
"\n",
|
||||
"We can see that if we tried to just do exact match on the answer answers (`11` and `No`) they would not match what the language model answered. However, semantically the language model is correct in both cases. In order to account for this, we can use a language model itself to evaluate the answers."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 7,
|
||||
"id": "0cacc65a",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from langchain.evaluation.qa import QAEvalChain"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 8,
|
||||
"id": "5aa6cd65",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"llm = OpenAI(temperature=0)\n",
|
||||
"eval_chain = QAEvalChain.from_llm(llm)\n",
|
||||
"graded_outputs = eval_chain.evaluate(\n",
|
||||
" examples, predictions, question_key=\"question\", prediction_key=\"text\"\n",
|
||||
")"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 9,
|
||||
"id": "63780020",
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"name": "stdout",
|
||||
"output_type": "stream",
|
||||
"text": [
|
||||
"Example 0:\n",
|
||||
"Question: Roger has 5 tennis balls. He buys 2 more cans of tennis balls. Each can has 3 tennis balls. How many tennis balls does he have now?\n",
|
||||
"Real Answer: 11\n",
|
||||
"Predicted Answer: 11 tennis balls\n",
|
||||
"Predicted Grade: CORRECT\n",
|
||||
"\n",
|
||||
"Example 1:\n",
|
||||
"Question: Is the following sentence plausible? \"Joao Moutinho caught the screen pass in the NFC championship.\"\n",
|
||||
"Real Answer: No\n",
|
||||
"Predicted Answer: No, this sentence is not plausible. Joao Moutinho is a professional soccer player, not an American football player, so it is not likely that he would be catching a screen pass in the NFC championship.\n",
|
||||
"Predicted Grade: CORRECT\n",
|
||||
"\n"
|
||||
]
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"for i, eg in enumerate(examples):\n",
|
||||
" print(f\"Example {i}:\")\n",
|
||||
" print(\"Question: \" + eg[\"question\"])\n",
|
||||
" print(\"Real Answer: \" + eg[\"answer\"])\n",
|
||||
" print(\"Predicted Answer: \" + predictions[i][\"text\"])\n",
|
||||
" print(\"Predicted Grade: \" + graded_outputs[i][\"text\"])\n",
|
||||
" print()"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "782ae8c8",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Customize Prompt\n",
|
||||
"\n",
|
||||
"You can also customize the prompt that is used. Here is an example prompting it using a score from 0 to 10.\n",
|
||||
"The custom prompt requires 3 input variables: \"query\", \"answer\" and \"result\". Where \"query\" is the question, \"answer\" is the ground truth answer, and \"result\" is the predicted answer."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"id": "153425c4",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from langchain.prompts.prompt import PromptTemplate\n",
|
||||
"\n",
|
||||
"_PROMPT_TEMPLATE = \"\"\"You are an expert professor specialized in grading students' answers to questions.\n",
|
||||
"You are grading the following question:\n",
|
||||
"{query}\n",
|
||||
"Here is the real answer:\n",
|
||||
"{answer}\n",
|
||||
"You are grading the following predicted answer:\n",
|
||||
"{result}\n",
|
||||
"What grade do you give from 0 to 10, where 0 is the lowest (very low similarity) and 10 is the highest (very high similarity)?\n",
|
||||
"\"\"\"\n",
|
||||
"\n",
|
||||
"PROMPT = PromptTemplate(\n",
|
||||
" input_variables=[\"query\", \"answer\", \"result\"], template=_PROMPT_TEMPLATE\n",
|
||||
")"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"id": "0a3b0fb7",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"evalchain = QAEvalChain.from_llm(llm=llm, prompt=PROMPT)\n",
|
||||
"evalchain.evaluate(\n",
|
||||
" examples,\n",
|
||||
" predictions,\n",
|
||||
" question_key=\"question\",\n",
|
||||
" answer_key=\"answer\",\n",
|
||||
" prediction_key=\"text\",\n",
|
||||
")"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "cb1cf335",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Evaluation without Ground Truth\n",
|
||||
"Its possible to evaluate question answering systems without ground truth. You would need a `\"context\"` input that reflects what the information the LLM uses to answer the question. This context can be obtained by any retreival system. Here's an example of how it works:"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"id": "6c59293f",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"context_examples = [\n",
|
||||
" {\n",
|
||||
" \"question\": \"How old am I?\",\n",
|
||||
" \"context\": \"I am 30 years old. I live in New York and take the train to work everyday.\",\n",
|
||||
" },\n",
|
||||
" {\n",
|
||||
" \"question\": 'Who won the NFC championship game in 2023?\"',\n",
|
||||
" \"context\": \"NFC Championship Game 2023: Philadelphia Eagles 31, San Francisco 49ers 7\",\n",
|
||||
" },\n",
|
||||
"]\n",
|
||||
"QA_PROMPT = \"Answer the question based on the context\\nContext:{context}\\nQuestion:{question}\\nAnswer:\"\n",
|
||||
"template = PromptTemplate(input_variables=[\"context\", \"question\"], template=QA_PROMPT)\n",
|
||||
"qa_chain = LLMChain(llm=llm, prompt=template)\n",
|
||||
"predictions = qa_chain.apply(context_examples)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 12,
|
||||
"id": "e500d0cc",
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"data": {
|
||||
"text/plain": [
|
||||
"[{'text': 'You are 30 years old.'},\n",
|
||||
" {'text': ' The Philadelphia Eagles won the NFC championship game in 2023.'}]"
|
||||
]
|
||||
},
|
||||
"execution_count": 12,
|
||||
"metadata": {},
|
||||
"output_type": "execute_result"
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"predictions"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 9,
|
||||
"id": "6d8cbc1d",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from langchain.evaluation.qa import ContextQAEvalChain\n",
|
||||
"\n",
|
||||
"eval_chain = ContextQAEvalChain.from_llm(llm)\n",
|
||||
"graded_outputs = eval_chain.evaluate(\n",
|
||||
" context_examples, predictions, question_key=\"question\", prediction_key=\"text\"\n",
|
||||
")"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 13,
|
||||
"id": "6c5262d0",
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"data": {
|
||||
"text/plain": [
|
||||
"[{'text': ' CORRECT'}, {'text': ' CORRECT'}]"
|
||||
]
|
||||
},
|
||||
"execution_count": 13,
|
||||
"metadata": {},
|
||||
"output_type": "execute_result"
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"graded_outputs"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "aaa61f0c",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Comparing to other evaluation metrics\n",
|
||||
"We can compare the evaluation results we get to other common evaluation metrics. To do this, let's load some evaluation metrics from HuggingFace's `evaluate` package."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 10,
|
||||
"id": "d851453b",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"# Some data munging to get the examples in the right format\n",
|
||||
"for i, eg in enumerate(examples):\n",
|
||||
" eg[\"id\"] = str(i)\n",
|
||||
" eg[\"answers\"] = {\"text\": [eg[\"answer\"]], \"answer_start\": [0]}\n",
|
||||
" predictions[i][\"id\"] = str(i)\n",
|
||||
" predictions[i][\"prediction_text\"] = predictions[i][\"text\"]\n",
|
||||
"\n",
|
||||
"for p in predictions:\n",
|
||||
" del p[\"text\"]\n",
|
||||
"\n",
|
||||
"new_examples = examples.copy()\n",
|
||||
"for eg in new_examples:\n",
|
||||
" del eg[\"question\"]\n",
|
||||
" del eg[\"answer\"]"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 11,
|
||||
"id": "c38eb3e9",
|
||||
"metadata": {
|
||||
"scrolled": true
|
||||
},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from evaluate import load\n",
|
||||
"\n",
|
||||
"squad_metric = load(\"squad\")\n",
|
||||
"results = squad_metric.compute(\n",
|
||||
" references=new_examples,\n",
|
||||
" predictions=predictions,\n",
|
||||
")"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 12,
|
||||
"id": "07d68f85",
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"data": {
|
||||
"text/plain": [
|
||||
"{'exact_match': 0.0, 'f1': 28.125}"
|
||||
]
|
||||
},
|
||||
"execution_count": 12,
|
||||
"metadata": {},
|
||||
"output_type": "execute_result"
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"results"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"id": "3b775150",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": []
|
||||
}
|
||||
],
|
||||
"metadata": {
|
||||
"kernelspec": {
|
||||
"display_name": "Python 3 (ipykernel)",
|
||||
"language": "python",
|
||||
"name": "python3"
|
||||
},
|
||||
"language_info": {
|
||||
"codemirror_mode": {
|
||||
"name": "ipython",
|
||||
"version": 3
|
||||
},
|
||||
"file_extension": ".py",
|
||||
"mimetype": "text/x-python",
|
||||
"name": "python",
|
||||
"nbconvert_exporter": "python",
|
||||
"pygments_lexer": "ipython3",
|
||||
"version": "3.9.16"
|
||||
},
|
||||
"vscode": {
|
||||
"interpreter": {
|
||||
"hash": "53f3bc57609c7a84333bb558594977aa5b4026b1d6070b93987956689e367341"
|
||||
}
|
||||
}
|
||||
},
|
||||
"nbformat": 4,
|
||||
"nbformat_minor": 5
|
||||
}
|
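One small design note on the metric comparison above: the munging loop mutates `examples` and `predictions` in place, so the original fields are gone afterwards. A non-destructive variant of the same conversion is sketched below; it produces the reference/prediction shapes the SQuAD metric expects, and `to_squad_format` is our own helper name, not part of LangChain or the `evaluate` package.

```python
# Build SQuAD-style references/predictions without mutating the originals.
# `examples` and `predictions` are the lists used earlier in the notebook;
# `to_squad_format` is a hypothetical helper, not a library function.
from evaluate import load


def to_squad_format(examples, predictions):
    references = [
        {"id": str(i), "answers": {"text": [eg["answer"]], "answer_start": [0]}}
        for i, eg in enumerate(examples)
    ]
    preds = [
        {"id": str(i), "prediction_text": p["text"]}
        for i, p in enumerate(predictions)
    ]
    return references, preds


squad_metric = load("squad")
references, preds = to_squad_format(examples, predictions)
results = squad_metric.compute(references=references, predictions=preds)
```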
@ -1,428 +0,0 @@
|
||||
{
|
||||
"cells": [
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "984169ca",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"# SQL Question Answering Benchmarking: Chinook\n",
|
||||
"\n",
|
||||
"Here we go over how to benchmark performance on a question answering task over a SQL database.\n",
|
||||
"\n",
|
||||
"It is highly reccomended that you do any evaluation/benchmarking with tracing enabled. See [here](https://langchain.readthedocs.io/en/latest/tracing.html) for an explanation of what tracing is and how to set it up."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 28,
|
||||
"id": "44874486",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"# Comment this out if you are NOT using tracing\n",
|
||||
"import os\n",
|
||||
"\n",
|
||||
"os.environ[\"LANGCHAIN_HANDLER\"] = \"langchain\""
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "0f66405e",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Loading the data\n",
|
||||
"\n",
|
||||
"First, let's load the data."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 7,
|
||||
"id": "0df1393f",
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"data": {
|
||||
"application/vnd.jupyter.widget-view+json": {
|
||||
"model_id": "b220d07ee5d14909bc842b4545cdc0de",
|
||||
"version_major": 2,
|
||||
"version_minor": 0
|
||||
},
|
||||
"text/plain": [
|
||||
"Downloading readme: 0%| | 0.00/21.0 [00:00<?, ?B/s]"
|
||||
]
|
||||
},
|
||||
"metadata": {},
|
||||
"output_type": "display_data"
|
||||
},
|
||||
{
|
||||
"name": "stdout",
|
||||
"output_type": "stream",
|
||||
"text": [
|
||||
"Downloading and preparing dataset json/LangChainDatasets--sql-qa-chinook to /Users/harrisonchase/.cache/huggingface/datasets/LangChainDatasets___json/LangChainDatasets--sql-qa-chinook-7528565d2d992b47/0.0.0/0f7e3662623656454fcd2b650f34e886a7db4b9104504885bd462096cc7a9f51...\n"
|
||||
]
|
||||
},
|
||||
{
|
||||
"data": {
|
||||
"application/vnd.jupyter.widget-view+json": {
|
||||
"model_id": "e89e3c8ef76f49889c4b39c624828c71",
|
||||
"version_major": 2,
|
||||
"version_minor": 0
|
||||
},
|
||||
"text/plain": [
|
||||
"Downloading data files: 0%| | 0/1 [00:00<?, ?it/s]"
|
||||
]
|
||||
},
|
||||
"metadata": {},
|
||||
"output_type": "display_data"
|
||||
},
|
||||
{
|
||||
"data": {
|
||||
"application/vnd.jupyter.widget-view+json": {
|
||||
"model_id": "a8421df6c26045e8978c7086cb418222",
|
||||
"version_major": 2,
|
||||
"version_minor": 0
|
||||
},
|
||||
"text/plain": [
|
||||
"Downloading data: 0%| | 0.00/1.44k [00:00<?, ?B/s]"
|
||||
]
|
||||
},
|
||||
"metadata": {},
|
||||
"output_type": "display_data"
|
||||
},
|
||||
{
|
||||
"data": {
|
||||
"application/vnd.jupyter.widget-view+json": {
|
||||
"model_id": "d1fb6becc3324a85bf039a53caf30924",
|
||||
"version_major": 2,
|
||||
"version_minor": 0
|
||||
},
|
||||
"text/plain": [
|
||||
"Extracting data files: 0%| | 0/1 [00:00<?, ?it/s]"
|
||||
]
|
||||
},
|
||||
"metadata": {},
|
||||
"output_type": "display_data"
|
||||
},
|
||||
{
|
||||
"data": {
|
||||
"application/vnd.jupyter.widget-view+json": {
|
||||
"model_id": "",
|
||||
"version_major": 2,
|
||||
"version_minor": 0
|
||||
},
|
||||
"text/plain": [
|
||||
"Generating train split: 0 examples [00:00, ? examples/s]"
|
||||
]
|
||||
},
|
||||
"metadata": {},
|
||||
"output_type": "display_data"
|
||||
},
|
||||
{
|
||||
"name": "stdout",
|
||||
"output_type": "stream",
|
||||
"text": [
|
||||
"Dataset json downloaded and prepared to /Users/harrisonchase/.cache/huggingface/datasets/LangChainDatasets___json/LangChainDatasets--sql-qa-chinook-7528565d2d992b47/0.0.0/0f7e3662623656454fcd2b650f34e886a7db4b9104504885bd462096cc7a9f51. Subsequent calls will reuse this data.\n"
|
||||
]
|
||||
},
|
||||
{
|
||||
"data": {
|
||||
"application/vnd.jupyter.widget-view+json": {
|
||||
"model_id": "9d68ad1b3e4a4bd79f92597aac4d3cc9",
|
||||
"version_major": 2,
|
||||
"version_minor": 0
|
||||
},
|
||||
"text/plain": [
|
||||
" 0%| | 0/1 [00:00<?, ?it/s]"
|
||||
]
|
||||
},
|
||||
"metadata": {},
|
||||
"output_type": "display_data"
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"from langchain.evaluation.loading import load_dataset\n",
|
||||
"\n",
|
||||
"dataset = load_dataset(\"sql-qa-chinook\")"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 8,
|
||||
"id": "ab44d504",
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"data": {
|
||||
"text/plain": [
|
||||
"{'question': 'How many employees are there?', 'answer': '8'}"
|
||||
]
|
||||
},
|
||||
"execution_count": 8,
|
||||
"metadata": {},
|
||||
"output_type": "execute_result"
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"dataset[0]"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "8a16b75d",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Setting up a chain\n",
|
||||
"This uses the example Chinook database.\n",
|
||||
"To set it up follow the instructions on https://database.guide/2-sample-databases-sqlite/, placing the `.db` file in a notebooks folder at the root of this repository.\n",
|
||||
"\n",
|
||||
"Note that here we load a simple chain. If you want to experiment with more complex chains, or an agent, just create the `chain` object in a different way."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 1,
|
||||
"id": "5b2d5e98",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from langchain import OpenAI, SQLDatabase, SQLDatabaseChain"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 3,
|
||||
"id": "33cdcbfc",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"db = SQLDatabase.from_uri(\"sqlite:///../../../notebooks/Chinook.db\")\n",
|
||||
"llm = OpenAI(temperature=0)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "f0b5d8f6",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"Now we can create a SQL database chain."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 14,
|
||||
"id": "8843cb0c",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"chain = SQLDatabaseChain.from_llm(llm, db, input_key=\"question\")"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "6c0062e7",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Make a prediction\n",
|
||||
"\n",
|
||||
"First, we can make predictions one datapoint at a time. Doing it at this level of granularity allows use to explore the outputs in detail, and also is a lot cheaper than running over multiple datapoints"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 27,
|
||||
"id": "d28c5e7d",
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"data": {
|
||||
"text/plain": [
|
||||
"{'question': 'How many employees are there?',\n",
|
||||
" 'answer': '8',\n",
|
||||
" 'result': ' There are 8 employees.'}"
|
||||
]
|
||||
},
|
||||
"execution_count": 27,
|
||||
"metadata": {},
|
||||
"output_type": "execute_result"
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"chain(dataset[0])"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "d0c16cd7",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Make many predictions\n",
|
||||
"Now we can make predictions. Note that we add a try-except because this chain can sometimes error (if SQL is written incorrectly, etc)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 19,
|
||||
"id": "24b4c66e",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"predictions = []\n",
|
||||
"predicted_dataset = []\n",
|
||||
"error_dataset = []\n",
|
||||
"for data in dataset:\n",
|
||||
" try:\n",
|
||||
" predictions.append(chain(data))\n",
|
||||
" predicted_dataset.append(data)\n",
|
||||
" except:\n",
|
||||
" error_dataset.append(data)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "4783344b",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Evaluate performance\n",
|
||||
"Now we can evaluate the predictions. We can use a language model to score them programatically"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 21,
|
||||
"id": "d0a9341d",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from langchain.evaluation.qa import QAEvalChain"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 22,
|
||||
"id": "1612dec1",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"llm = OpenAI(temperature=0)\n",
|
||||
"eval_chain = QAEvalChain.from_llm(llm)\n",
|
||||
"graded_outputs = eval_chain.evaluate(\n",
|
||||
" predicted_dataset, predictions, question_key=\"question\", prediction_key=\"result\"\n",
|
||||
")"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "79587806",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"We can add in the graded output to the `predictions` dict and then get a count of the grades."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 23,
|
||||
"id": "2a689df5",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"for i, prediction in enumerate(predictions):\n",
|
||||
" prediction[\"grade\"] = graded_outputs[i][\"text\"]"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 24,
|
||||
"id": "27b61215",
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"data": {
|
||||
"text/plain": [
|
||||
"Counter({' CORRECT': 3, ' INCORRECT': 4})"
|
||||
]
|
||||
},
|
||||
"execution_count": 24,
|
||||
"metadata": {},
|
||||
"output_type": "execute_result"
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"from collections import Counter\n",
|
||||
"\n",
|
||||
"Counter([pred[\"grade\"] for pred in predictions])"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "12fe30f4",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"We can also filter the datapoints to the incorrect examples and look at them."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 25,
|
||||
"id": "47c692a1",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"incorrect = [pred for pred in predictions if pred[\"grade\"] == \" INCORRECT\"]"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 26,
|
||||
"id": "0ef976c1",
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"data": {
|
||||
"text/plain": [
|
||||
"{'question': 'How many employees are also customers?',\n",
|
||||
" 'answer': 'None',\n",
|
||||
" 'result': ' 59 employees are also customers.',\n",
|
||||
" 'grade': ' INCORRECT'}"
|
||||
]
|
||||
},
|
||||
"execution_count": 26,
|
||||
"metadata": {},
|
||||
"output_type": "execute_result"
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"incorrect[0]"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"id": "7710401a",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": []
|
||||
}
|
||||
],
|
||||
"metadata": {
|
||||
"kernelspec": {
|
||||
"display_name": "Python 3 (ipykernel)",
|
||||
"language": "python",
|
||||
"name": "python3"
|
||||
},
|
||||
"language_info": {
|
||||
"codemirror_mode": {
|
||||
"name": "ipython",
|
||||
"version": 3
|
||||
},
|
||||
"file_extension": ".py",
|
||||
"mimetype": "text/x-python",
|
||||
"name": "python",
|
||||
"nbconvert_exporter": "python",
|
||||
"pygments_lexer": "ipython3",
|
||||
"version": "3.11.3"
|
||||
}
|
||||
},
|
||||
"nbformat": 4,
|
||||
"nbformat_minor": 5
|
||||
}
|
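One caveat on the prediction loop above: the bare `except:` swallows every failure, including keyboard interrupts, and discards the reason a datapoint failed. A slightly more defensive version of the same loop, using the `chain` and `dataset` objects from the notebook, is sketched below.

```python
# Same loop as above, but catching only Exception and keeping the error message
# next to the datapoint that caused it, which makes failures easier to inspect.
predictions = []
predicted_dataset = []
error_dataset = []
for data in dataset:
    try:
        predictions.append(chain(data))
        predicted_dataset.append(data)
    except Exception as exc:  # e.g. invalid SQL generated by the LLM
        error_dataset.append({**data, "error": str(exc)})
```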
318
docs/extras/guides/evaluation/string/Untitled.ipynb
Normal file
@ -0,0 +1,318 @@
|
||||
{
|
||||
"cells": [
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"id": "bce7335e-f3b2-44f3-90cc-8c0a23a89a21",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"import os\n",
|
||||
"from langchain.agents import load_tools\n",
|
||||
"from langchain.agents import initialize_agent\n",
|
||||
"from langchain.chat_models import ChatOpenAI\n",
|
||||
"from langchain.utilities import GoogleSearchAPIWrapper\n",
|
||||
"from langchain.schema import (\n",
|
||||
" SystemMessage,\n",
|
||||
" HumanMessage,\n",
|
||||
" AIMessage\n",
|
||||
")\n",
|
||||
"\n",
|
||||
"# os.environ[\"LANGCHAIN_TRACING_V2\"] = \"true\"\n",
|
||||
"# os.environ[\"LANGCHAIN_ENDPOINT\"] = \"https://api.smith.langchain.com\"\n",
|
||||
"# os.environ[\"LANGCHAIN_API_KEY\"] = \"******\"\n",
|
||||
"# os.environ[\"LANGCHAIN_PROJECT\"] = \"Jarvis\"\n",
|
||||
"\n",
|
||||
"\n",
|
||||
"prefix_messages = [{\"role\": \"system\", \"content\": \"You are a helpful discord Chatbot.\"}]\n",
|
||||
"\n",
|
||||
"llm = ChatOpenAI(model_name='gpt-3.5-turbo', \n",
|
||||
" temperature=0.5, \n",
|
||||
" max_tokens = 2000)\n",
|
||||
"tools = load_tools([\"serpapi\", \"llm-math\"], llm=llm)\n",
|
||||
"agent = initialize_agent(tools,\n",
|
||||
" llm,\n",
|
||||
" agent=\"zero-shot-react-description\",\n",
|
||||
" verbose=True,\n",
|
||||
" handle_parsing_errors=True\n",
|
||||
" )\n",
|
||||
"\n",
|
||||
"\n",
|
||||
"async def on_ready():\n",
|
||||
" print(f'{bot.user} has connected to Discord!')\n",
|
||||
"\n",
|
||||
"async def on_message(message):\n",
|
||||
"\n",
|
||||
" print(\"Detected bot name in message:\", message.content)\n",
|
||||
"\n",
|
||||
" # Capture the output of agent.run() in the response variable\n",
|
||||
" response = agent.run(message.content)\n",
|
||||
"\n",
|
||||
" while response:\n",
|
||||
" print(response)\n",
|
||||
" chunk, response = response[:2000], response[2000:]\n",
|
||||
" print(f\"Chunk: {chunk}\")\n",
|
||||
" print(\"Response sent.\")\n",
|
||||
"\n"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 22,
|
||||
"id": "1551ce9f-b6de-4035-b6d6-825722823b48",
|
||||
"metadata": {
|
||||
"tags": []
|
||||
},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from dataclasses import dataclass\n",
|
||||
"@dataclass\n",
|
||||
"class Message:\n",
|
||||
" content: str"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 23,
|
||||
"id": "6e6859ec-8544-4407-9663-6b53c0092903",
|
||||
"metadata": {
|
||||
"tags": []
|
||||
},
|
||||
"outputs": [
|
||||
{
|
||||
"name": "stdout",
|
||||
"output_type": "stream",
|
||||
"text": [
|
||||
"Detected bot name in message: Hi AI, how are you today?\n",
|
||||
"\n",
|
||||
"\n",
|
||||
"\u001b[1m> Entering new AgentExecutor chain...\u001b[0m\n",
|
||||
"\u001b[32;1m\u001b[1;3mThis question is not something that can be answered using the available tools.\n",
|
||||
"Action: N/A\u001b[0m\n",
|
||||
"Observation: Invalid Format: Missing 'Action Input:' after 'Action:'\n",
|
||||
"Thought:\u001b[32;1m\u001b[1;3mI need to follow the correct format for answering questions.\n",
|
||||
"Action: N/A\u001b[0m\n",
|
||||
"Observation: Invalid Format: Missing 'Action Input:' after 'Action:'\n",
|
||||
"Thought:\u001b[32;1m\u001b[1;3mI need to follow the correct format for answering questions.\n",
|
||||
"Action: N/A\u001b[0m\n",
|
||||
"Observation: Invalid Format: Missing 'Action Input:' after 'Action:'\n",
|
||||
"Thought:\u001b[32;1m\u001b[1;3mI need to follow the correct format for answering questions.\n",
|
||||
"Action: N/A\u001b[0m\n",
|
||||
"Observation: Invalid Format: Missing 'Action Input:' after 'Action:'\n",
|
||||
"Thought:\u001b[32;1m\u001b[1;3mI need to follow the correct format for answering questions.\n",
|
||||
"Action: N/A\u001b[0m\n",
|
||||
"Observation: Invalid Format: Missing 'Action Input:' after 'Action:'\n",
|
||||
"Thought:\u001b[32;1m\u001b[1;3mI need to follow the correct format for answering questions.\n",
|
||||
"Action: N/A\u001b[0m\n",
|
||||
"Observation: Invalid Format: Missing 'Action Input:' after 'Action:'\n",
|
||||
"Thought:\u001b[32;1m\u001b[1;3mI need to follow the correct format for answering questions.\n",
|
||||
"Action: N/A\u001b[0m\n",
|
||||
"Observation: Invalid Format: Missing 'Action Input:' after 'Action:'\n",
|
||||
"Thought:\u001b[32;1m\u001b[1;3mI need to follow the correct format for answering questions.\n",
|
||||
"Action: N/A\u001b[0m\n",
|
||||
"Observation: Invalid Format: Missing 'Action Input:' after 'Action:'\n",
|
||||
"Thought:\u001b[32;1m\u001b[1;3mI need to follow the correct format for answering questions.\n",
|
||||
"Action: N/A\u001b[0m\n",
|
||||
"Observation: Invalid Format: Missing 'Action Input:' after 'Action:'\n",
|
||||
"Thought:\u001b[32;1m\u001b[1;3mI need to follow the correct format for answering questions.\n",
|
||||
"Action: N/A\u001b[0m\n",
|
||||
"Observation: Invalid Format: Missing 'Action Input:' after 'Action:'\n",
|
||||
"Thought:\u001b[32;1m\u001b[1;3mI need to follow the correct format for answering questions.\n",
|
||||
"Action: N/A\u001b[0m\n",
|
||||
"Observation: Invalid Format: Missing 'Action Input:' after 'Action:'\n",
|
||||
"Thought:\u001b[32;1m\u001b[1;3mI need to follow the correct format for answering questions.\n",
|
||||
"Action: N/A\u001b[0m\n",
|
||||
"Observation: Invalid Format: Missing 'Action Input:' after 'Action:'\n",
|
||||
"Thought:\u001b[32;1m\u001b[1;3mI need to follow the correct format for answering questions.\n",
|
||||
"Action: N/A\u001b[0m\n",
|
||||
"Observation: Invalid Format: Missing 'Action Input:' after 'Action:'\n",
|
||||
"Thought:\u001b[32;1m\u001b[1;3mI need to follow the correct format for answering questions.\n",
|
||||
"Action: N/A\u001b[0m\n",
|
||||
"Observation: Invalid Format: Missing 'Action Input:' after 'Action:'\n",
|
||||
"Thought:\u001b[32;1m\u001b[1;3mI need to follow the correct format for answering questions.\n",
|
||||
"Action: N/A\u001b[0m\n",
|
||||
"Observation: Invalid Format: Missing 'Action Input:' after 'Action:'\n",
|
||||
"Thought:\u001b[32;1m\u001b[1;3m\u001b[0m\n",
|
||||
"\n",
|
||||
"\u001b[1m> Finished chain.\u001b[0m\n",
|
||||
"Agent stopped due to iteration limit or time limit.\n",
|
||||
"Chunk: Agent stopped due to iteration limit or time limit.\n",
|
||||
"Response sent.\n"
|
||||
]
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"await on_message(Message(content=\"Hi AI, how are you today?\"))"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 24,
|
||||
"id": "b850294c-7f8f-4e79-adcf-47e4e3a898df",
|
||||
"metadata": {
|
||||
"tags": []
|
||||
},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from langsmith import Client\n",
|
||||
"\n",
|
||||
"client = Client()"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 25,
|
||||
"id": "6d089ddc-69bc-45a8-b8db-9962e4f1f5ee",
|
||||
"metadata": {
|
||||
"tags": []
|
||||
},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from itertools import islice\n",
|
||||
"\n",
|
||||
"runs = list(islice(client.list_runs(), 10))"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 38,
|
||||
"id": "f0349fac-5a98-400f-ba03-61ed4e1332be",
|
||||
"metadata": {
|
||||
"tags": []
|
||||
},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"runs = sorted(runs, key=lambda x: x.start_time, reverse=True)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 26,
|
||||
"id": "02f133f0-39ee-4b46-b443-12c1f9b76fff",
|
||||
"metadata": {
|
||||
"tags": []
|
||||
},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"ids = [run.id for run in runs]"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 39,
|
||||
"id": "3366dce4-0c38-4a7d-8111-046a58b24917",
|
||||
"metadata": {
|
||||
"tags": []
|
||||
},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"runs2 = list(client.list_runs(id=ids))\n",
|
||||
"runs2 = sorted(runs2, key=lambda x: x.start_time, reverse=True)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 42,
|
||||
"id": "82915b90-39a0-47d6-9121-56a13f210f52",
|
||||
"metadata": {
|
||||
"tags": []
|
||||
},
|
||||
"outputs": [
|
||||
{
|
||||
"data": {
|
||||
"text/plain": [
|
||||
"['a36092d2-4ad5-4fb4-9b0d-0dba9a2ed836',\n",
|
||||
" '9398e6be-964f-4aa4-8de9-ad78cd4b7074']"
|
||||
]
|
||||
},
|
||||
"execution_count": 42,
|
||||
"metadata": {},
|
||||
"output_type": "execute_result"
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"[str(x) for x in ids[:2]]"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 48,
|
||||
"id": "f610ec91-dc48-4a17-91c5-5c4675c77abc",
|
||||
"metadata": {
|
||||
"tags": []
|
||||
},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from langsmith.run_helpers import traceable\n",
|
||||
"\n",
|
||||
"@traceable(run_type=\"llm\", name=\"\"\"<iframe width=\"560\" height=\"315\" src=\"https://www.youtube.com/embed/dQw4w9WgXcQ?start=5\" title=\"YouTube video player\" frameborder=\"0\" allow=\"accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share\" allowfullscreen></iframe>\"\"\")\n",
|
||||
"def foo():\n",
|
||||
" return \"bar\""
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 49,
|
||||
"id": "bd317bd7-8b2a-433a-8ec3-098a84ba8e64",
|
||||
"metadata": {
|
||||
"tags": []
|
||||
},
|
||||
"outputs": [
|
||||
{
|
||||
"data": {
|
||||
"text/plain": [
|
||||
"'bar'"
|
||||
]
|
||||
},
|
||||
"execution_count": 49,
|
||||
"metadata": {},
|
||||
"output_type": "execute_result"
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"foo()"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 52,
|
||||
"id": "b142519b-6885-415c-83b9-4a346fb90589",
|
||||
"metadata": {
|
||||
"tags": []
|
||||
},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from langchain.llms import AzureOpenAI"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"id": "5c50bb2b-72b8-4322-9b16-d857ecd9f347",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": []
|
||||
}
|
||||
],
|
||||
"metadata": {
|
||||
"kernelspec": {
|
||||
"display_name": "Python 3 (ipykernel)",
|
||||
"language": "python",
|
||||
"name": "python3"
|
||||
},
|
||||
"language_info": {
|
||||
"codemirror_mode": {
|
||||
"name": "ipython",
|
||||
"version": 3
|
||||
},
|
||||
"file_extension": ".py",
|
||||
"mimetype": "text/x-python",
|
||||
"name": "python",
|
||||
"nbconvert_exporter": "python",
|
||||
"pygments_lexer": "ipython3",
|
||||
"version": "3.11.2"
|
||||
}
|
||||
},
|
||||
"nbformat": 4,
|
||||
"nbformat_minor": 5
|
||||
}
|
@ -5,16 +5,13 @@
|
||||
"id": "4cf569a7-9a1d-4489-934e-50e57760c907",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"# Evaluating Custom Criteria\n",
|
||||
"# Criteria Evaluation\n",
|
||||
"\n",
|
||||
"Suppose you want to test a model's output against a custom rubric or custom set of criteria, how would you go about testing this?\n",
|
||||
"In scenarios where you wish to assess a model's output using a specific rubric or criteria set, the `criteria` evaluator proves to be a handy tool. It allows you to verify if an LLM or Chain's output complies with a defined set of criteria.\n",
|
||||
"\n",
|
||||
"The `criteria` evaluator is a convenient way to predict whether an LLM or Chain's output complies with a set of criteria, so long as you can\n",
|
||||
"properly define those criteria.\n",
|
||||
"To understand its functionality and configurability in depth, refer to the reference documentation of the [CriteriaEvalChain](https://api.python.langchain.com/en/latest/evaluation/langchain.evaluation.criteria.eval_chain.CriteriaEvalChain.html#langchain.evaluation.criteria.eval_chain.CriteriaEvalChain) class.\n",
|
||||
"\n",
|
||||
"For more details, check out the reference docs for the [CriteriaEvalChain](https://api.python.langchain.com/en/latest/evaluation/langchain.evaluation.criteria.eval_chain.CriteriaEvalChain.html#langchain.evaluation.criteria.eval_chain.CriteriaEvalChain)'s class definition.\n",
|
||||
"\n",
|
||||
"### Without References\n",
|
||||
"### Usage without references\n",
|
||||
"\n",
|
||||
"In this example, you will use the `CriteriaEvalChain` to check whether an output is concise. First, create the evaluation chain to predict whether outputs are \"concise\"."
|
||||
]
|
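A minimal sketch of the chain creation this cell describes, assuming the `load_evaluator` helper from `langchain.evaluation` that the rest of this guide uses (the prediction and input strings are illustrative):

```python
from langchain.evaluation import load_evaluator

# Build a criteria evaluator that checks a single aspect: conciseness
evaluator = load_evaluator("criteria", criteria="conciseness")

# Grade a deliberately long-winded prediction against the original input
eval_result = evaluator.evaluate_strings(
    prediction="What's 2+2? That's an elementary question. The answer you're looking for is that two and two is four.",
    input="What's 2+2?",
)
print(eval_result)
```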
||||
@ -62,6 +59,24 @@
|
||||
"print(eval_result)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "35e61e4d-b776-4f6b-8c89-da5d3604134a",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"#### Output Format\n",
|
||||
"\n",
|
||||
"All string evaluators expose an [evaluate_strings](https://api.python.langchain.com/en/latest/evaluation/langchain.evaluation.criteria.eval_chain.CriteriaEvalChain.html?highlight=evaluate_strings#langchain.evaluation.criteria.eval_chain.CriteriaEvalChain.evaluate_strings) (or async [aevaluate_strings](https://api.python.langchain.com/en/latest/evaluation/langchain.evaluation.criteria.eval_chain.CriteriaEvalChain.html?highlight=evaluate_strings#langchain.evaluation.criteria.eval_chain.CriteriaEvalChain.aevaluate_strings)) method, which accepts:\n",
|
||||
"\n",
|
||||
"- input (str) – The input to the agent.\n",
|
||||
"- prediction (str) – The predicted response.\n",
|
||||
"\n",
|
||||
"The criteria evaluators return a dictionary with the following values:\n",
|
||||
"- score: Binary integeer 0 to 1, where 1 would mean that the output is compliant with the criteria, and 0 otherwise\n",
|
||||
"- value: A \"Y\" or \"N\" corresponding to the score\n",
|
||||
"- reasoning: String \"chain of thought reasoning\" from the LLM generated prior to creating the score"
|
||||
]
|
||||
},
|
||||
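For illustration, a hedged sketch of that call and of the dictionary shape described above, reusing the `evaluator` created earlier; the reasoning text and score are examples only, not actual output:

```python
eval_result = evaluator.evaluate_strings(
    prediction="Two plus two is four.",
    input="What's 2+2?",
)
# Illustrative shape of the result:
# {'reasoning': '...step-by-step assessment of the criteria...', 'value': 'Y', 'score': 1}
```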
{
|
||||
"cell_type": "markdown",
|
||||
"id": "c40b1ac7-8f95-48ed-89a2-623bcc746461",
|
||||
@ -69,7 +84,7 @@
|
||||
"source": [
|
||||
"## Using Reference Labels\n",
|
||||
"\n",
|
||||
"Some criteria (such as correctness) require reference labels to work correctly. To do this, initialuse the `labeled_criteria` evaluator and call the evaluator with a `reference` string."
|
||||
"Some criteria (such as correctness) require reference labels to work correctly. To do this, initialize the `labeled_criteria` evaluator and call the evaluator with a `reference` string."
|
||||
]
|
||||
},
|
||||
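A brief sketch of that flow, assuming the `labeled_criteria` evaluator type and illustrative strings:

```python
from langchain.evaluation import load_evaluator

evaluator = load_evaluator("labeled_criteria", criteria="correctness")

# The reference label grounds the correctness judgment
eval_result = evaluator.evaluate_strings(
    prediction="The US declared independence on July 4, 1776.",
    input="When did the US declare independence?",
    reference="The Declaration of Independence was adopted on July 4, 1776.",
)
print(eval_result)
```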
{
|
||||
@ -108,7 +123,7 @@
|
||||
"**Default Criteria**\n",
|
||||
"\n",
|
||||
"Most of the time, you'll want to define your own custom criteria (see below), but we also provide some common criteria you can load with a single string.\n",
|
||||
"Here's a list of pre-implemented criteria:"
|
||||
"Here's a list of pre-implemented criteria. Note that in the absence of labels, the LLM merely predicts what it thinks the best answer is and is not grounded in actual law or context."
|
||||
]
|
||||
},
|
||||
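One hedged way to inspect that list, using the `Criteria` enum from the `langchain.evaluation.criteria.eval_chain` module touched later in this diff:

```python
from langchain.evaluation.criteria.eval_chain import Criteria

# Each member's value (e.g. "conciseness", "relevance") can be passed
# as the `criteria` argument to load_evaluator("criteria", ...)
print(list(Criteria))
```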
{
|
||||
@ -154,20 +169,23 @@
|
||||
"\n",
|
||||
"To evaluate outputs against your own custom criteria, or to be more explicit the definition of any of the default criteria, pass in a dictionary of `\"criterion_name\": \"criterion_description\"`\n",
|
||||
"\n",
|
||||
"Note: the evaluator still predicts whether the output complies with ALL of the criteria provided. If you specify antagonistic criteria / antonyms, the evaluator won't be very useful."
|
||||
"Note: it's recommended that you create a single evaluator per criterion. This way, separate feedback can be provided for each aspect. Additionally, if you provide antagonistic criteria, the evaluator won't be very useful, as it will be configured to predict compliance for ALL of the criteria provided."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 8,
|
||||
"execution_count": 19,
|
||||
"id": "bafa0a11-2617-4663-84bf-24df7d0736be",
|
||||
"metadata": {},
|
||||
"metadata": {
|
||||
"tags": []
|
||||
},
|
||||
"outputs": [
|
||||
{
|
||||
"name": "stdout",
|
||||
"output_type": "stream",
|
||||
"text": [
|
||||
"{'reasoning': 'The criterion asks if the output contains numeric or mathematical information. \\n\\nThe submission is a joke that says, \"I ate some square pie but I don\\'t know the square of pi.\" \\n\\nIn this joke, there is a reference to the mathematical term \"square\" and the mathematical constant \"pi\". \\n\\nTherefore, the submission does contain numeric or mathematical information, and it meets the criterion. \\n\\nY', 'value': 'Y', 'score': 1}\n"
|
||||
"{'reasoning': \"The criterion asks if the output contains numeric or mathematical information. The joke in the submission does contain mathematical information. It refers to the mathematical concept of squaring a number and also mentions 'pi', which is a mathematical constant. Therefore, the submission does meet the criterion.\\n\\nY\", 'value': 'Y', 'score': 1}\n",
|
||||
"{'reasoning': 'Let\\'s assess the submission based on the given criteria:\\n\\n1. Numeric: The output does not contain any explicit numeric information. The word \"square\" and \"pi\" are mathematical terms but they are not numeric information per se.\\n\\n2. Mathematical: The output does contain mathematical information. The terms \"square\" and \"pi\" are mathematical terms. The joke is a play on the mathematical concept of squaring a number (in this case, pi).\\n\\n3. Grammatical: The output is grammatically correct. The sentence structure, punctuation, and word usage are all correct.\\n\\n4. Logical: The output is logical. It makes sense within the context of the joke. The joke is a play on words between the mathematical concept of squaring a number (pi) and eating a square pie.\\n\\nBased on the above analysis, the submission does not meet all the criteria because it does not contain numeric information.\\nN', 'value': 'N', 'score': 0}\n"
|
||||
]
|
||||
}
|
||||
],
|
||||
@ -181,6 +199,22 @@
|
||||
"query = \"Tell me a joke\"\n",
|
||||
"prediction = \"I ate some square pie but I don't know the square of pi.\"\n",
|
||||
"eval_result = eval_chain.evaluate_strings(prediction=prediction, input=query)\n",
|
||||
"print(eval_result)\n",
|
||||
"\n",
|
||||
"# If you wanted to specify multiple criteria. Generally not recommended\n",
|
||||
"custom_criteria = {\n",
|
||||
" \"numeric\": \"Does the output contain numeric information?\",\n",
|
||||
" \"mathematical\": \"Does the output contain mathematical information?\",\n",
|
||||
" \"grammatical\": \"Is the output grammatically correct?\",\n",
|
||||
" \"logical\": \"Is the output logical?\",\n",
|
||||
"}\n",
|
||||
"\n",
|
||||
"eval_chain = load_evaluator(\n",
|
||||
" EvaluatorType.CRITERIA,\n",
|
||||
" criteria=custom_criteria,\n",
|
||||
")\n",
|
||||
"eval_result = eval_chain.evaluate_strings(prediction=prediction, input=query)\n",
|
||||
"print(\"Multi-criteria evaluation\")\n",
|
||||
"print(eval_result)"
|
||||
]
|
||||
},
|
||||
@ -402,6 +436,12 @@
|
||||
"\n",
|
||||
"Remember when selecting criteria to decide whether they ought to require ground truth labels or not. Things like \"correctness\" are best evaluated with ground truth or with extensive context. Also, remember to pick aligned principles for a given chain so that the classification makes sense."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "a684e2f1",
|
||||
"metadata": {},
|
||||
"source": []
|
||||
}
|
||||
],
|
||||
"metadata": {
|
||||
|
@ -1,227 +0,0 @@
|
||||
{
|
||||
"cells": [
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "d63696a8-d035-4cf7-9605-c3210f0b551d",
|
||||
"metadata": {
|
||||
"tags": []
|
||||
},
|
||||
"source": [
|
||||
"# QA Correctness\n",
|
||||
"\n",
|
||||
"When thinking about a QA system, one of the most important questions to ask is whether the final generated result is correct. The `\"qa\"` evaluator compares a question-answering model's response to a reference answer to provide this level of information. If you are able to annotate a test dataset, this evaluator will be useful.\n",
|
||||
"\n",
|
||||
"For more details, check out the reference docs for the [QAEvalChain](https://api.python.langchain.com/en/latest/evaluation/langchain.evaluation.qa.eval_chain.QAEvalChain.html#langchain.evaluation.qa.eval_chain.QAEvalChain)'s class definition."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 1,
|
||||
"id": "9672fdb9-b53f-41e4-8f72-f21d11edbeac",
|
||||
"metadata": {
|
||||
"tags": []
|
||||
},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from langchain.chat_models import ChatOpenAI\n",
|
||||
"from langchain.evaluation import load_evaluator\n",
|
||||
"\n",
|
||||
"llm = ChatOpenAI(model=\"gpt-4\", temperature=0)\n",
|
||||
"\n",
|
||||
"# Note: the eval_llm is optional. A gpt-4 model will be provided by default if not specified\n",
|
||||
"evaluator = load_evaluator(\"qa\", eval_llm=llm)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 2,
|
||||
"id": "b4db474a-9c9d-473f-81b1-55070ee584a6",
|
||||
"metadata": {
|
||||
"tags": []
|
||||
},
|
||||
"outputs": [
|
||||
{
|
||||
"data": {
|
||||
"text/plain": [
|
||||
"{'reasoning': None, 'value': 'CORRECT', 'score': 1}"
|
||||
]
|
||||
},
|
||||
"execution_count": 2,
|
||||
"metadata": {},
|
||||
"output_type": "execute_result"
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"evaluator.evaluate_strings(\n",
|
||||
" input=\"What's last quarter's sales numbers?\",\n",
|
||||
" prediction=\"Last quarter we sold 600,000 total units of product.\",\n",
|
||||
" reference=\"Last quarter we sold 100,000 units of product A, 210,000 units of product B, and 300,000 units of product C.\",\n",
|
||||
")"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "a5b345aa-7f45-4eea-bedf-9b0d5e824be3",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## SQL Correctness\n",
|
||||
"\n",
|
||||
"You can use an LLM to check the equivalence of a SQL query against a reference SQL query using the sql prompt."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 3,
|
||||
"id": "6c803b8c-fe1f-4fb7-8ea0-d9c67b855eb3",
|
||||
"metadata": {
|
||||
"tags": []
|
||||
},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from langchain.evaluation.qa.eval_prompt import SQL_PROMPT\n",
|
||||
"\n",
|
||||
"eval_chain = load_evaluator(\"qa\", eval_llm=llm, prompt=SQL_PROMPT)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 4,
|
||||
"id": "e28b8d07-248f-405c-bcef-e0ebe3a05c3e",
|
||||
"metadata": {
|
||||
"tags": []
|
||||
},
|
||||
"outputs": [
|
||||
{
|
||||
"data": {
|
||||
"text/plain": [
|
||||
"{'reasoning': 'The expert answer and the submission are very similar in their structure and logic. Both queries are trying to calculate the sum of sales amounts for the last quarter. They both use the SUM function to add up the sale_amount from the sales table. They also both use the same WHERE clause to filter the sales data to only include sales from the last quarter. The WHERE clause uses the DATEADD function to subtract 1 quarter from the current date (GETDATE()) and only includes sales where the sale_date is greater than or equal to this date and less than the current date.\\n\\nThe main difference between the two queries is that the expert answer uses a subquery to first select the sale_amount from the sales table with the appropriate date filter, and then sums these amounts in the outer query. The submission, on the other hand, does not use a subquery and instead sums the sale_amount directly in the main query with the same date filter.\\n\\nHowever, this difference does not affect the result of the query. Both queries will return the same result, which is the sum of the sales amounts for the last quarter.\\n\\nCORRECT',\n",
|
||||
" 'value': 'CORRECT',\n",
|
||||
" 'score': 1}"
|
||||
]
|
||||
},
|
||||
"execution_count": 4,
|
||||
"metadata": {},
|
||||
"output_type": "execute_result"
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"eval_chain.evaluate_strings(\n",
|
||||
" input=\"What's last quarter's sales numbers?\",\n",
|
||||
" prediction=\"\"\"SELECT SUM(sale_amount) AS last_quarter_sales\n",
|
||||
"FROM sales\n",
|
||||
"WHERE sale_date >= DATEADD(quarter, -1, GETDATE()) AND sale_date < GETDATE();\n",
|
||||
"\"\"\",\n",
|
||||
" reference=\"\"\"SELECT SUM(sub.sale_amount) AS last_quarter_sales\n",
|
||||
"FROM (\n",
|
||||
" SELECT sale_amount\n",
|
||||
" FROM sales\n",
|
||||
" WHERE sale_date >= DATEADD(quarter, -1, GETDATE()) AND sale_date < GETDATE()\n",
|
||||
") AS sub;\n",
|
||||
"\"\"\",\n",
|
||||
")"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "e0c3dcad-408e-4d26-9e25-848ebacac2c4",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Using Context\n",
|
||||
"\n",
|
||||
"Sometimes, reference labels aren't all available, but you have additional knowledge as context from a retrieval system. Often there may be additional information that isn't available to the model you want to evaluate. For this type of scenario, you can use the [ContextQAEvalChain](https://api.python.langchain.com/en/latest/evaluation/langchain.evaluation.qa.eval_chain.ContextQAEvalChain.html#langchain.evaluation.qa.eval_chain.ContextQAEvalChain)."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 5,
|
||||
"id": "9f3ae116-3a2f-461d-ba6f-7352b42c1b0c",
|
||||
"metadata": {
|
||||
"tags": []
|
||||
},
|
||||
"outputs": [
|
||||
{
|
||||
"data": {
|
||||
"text/plain": [
|
||||
"{'reasoning': None, 'value': 'CORRECT', 'score': 1}"
|
||||
]
|
||||
},
|
||||
"execution_count": 5,
|
||||
"metadata": {},
|
||||
"output_type": "execute_result"
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"eval_chain = load_evaluator(\"context_qa\", eval_llm=llm)\n",
|
||||
"\n",
|
||||
"eval_chain.evaluate_strings(\n",
|
||||
" input=\"Who won the NFC championship game in 2023?\",\n",
|
||||
" prediction=\"Eagles\",\n",
|
||||
" reference=\"NFC Championship Game 2023: Philadelphia Eagles 31, San Francisco 49ers 7\",\n",
|
||||
")"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "ba5eac17-08b6-4e4f-a896-79e7fc637018",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## CoT With Context\n",
|
||||
"\n",
|
||||
"The same prompt strategies such as chain of thought can be used to make the evaluation results more reliable.\n",
|
||||
"The [CotQAEvalChain's](https://api.python.langchain.com/en/latest/evaluation/langchain.evaluation.qa.eval_chain.CotQAEvalChain.html#langchain.evaluation.qa.eval_chain.CotQAEvalChain) default prompt instructs the model to do this."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 6,
|
||||
"id": "26e3b686-98f4-45a5-9854-7071ec2893f1",
|
||||
"metadata": {
|
||||
"tags": []
|
||||
},
|
||||
"outputs": [
|
||||
{
|
||||
"data": {
|
||||
"text/plain": [
|
||||
"{'reasoning': 'The student\\'s answer is \"Eagles\". The context states that the Philadelphia Eagles won the NFC championship game in 2023. Therefore, the student\\'s answer matches the information provided in the context.',\n",
|
||||
" 'value': 'GRADE: CORRECT',\n",
|
||||
" 'score': 1}"
|
||||
]
|
||||
},
|
||||
"execution_count": 6,
|
||||
"metadata": {},
|
||||
"output_type": "execute_result"
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"eval_chain = load_evaluator(\"cot_qa\", eval_llm=llm)\n",
|
||||
"\n",
|
||||
"eval_chain.evaluate_strings(\n",
|
||||
" input=\"Who won the NFC championship game in 2023?\",\n",
|
||||
" prediction=\"Eagles\",\n",
|
||||
" reference=\"NFC Championship Game 2023: Philadelphia Eagles 31, San Francisco 49ers 7\",\n",
|
||||
")"
|
||||
]
|
||||
}
|
||||
],
|
||||
"metadata": {
|
||||
"kernelspec": {
|
||||
"display_name": "Python 3 (ipykernel)",
|
||||
"language": "python",
|
||||
"name": "python3"
|
||||
},
|
||||
"language_info": {
|
||||
"codemirror_mode": {
|
||||
"name": "ipython",
|
||||
"version": 3
|
||||
},
|
||||
"file_extension": ".py",
|
||||
"mimetype": "text/x-python",
|
||||
"name": "python",
|
||||
"nbconvert_exporter": "python",
|
||||
"pygments_lexer": "ipython3",
|
||||
"version": "3.11.2"
|
||||
}
|
||||
},
|
||||
"nbformat": 4,
|
||||
"nbformat_minor": 5
|
||||
}
|
@ -67,7 +67,7 @@
|
||||
"id": "297dea4b-fb28-4292-b6e0-1c769cfb9cbd",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"The example above will return a score of 1 if the language model predicts that any of the actions were unnecessary, and it returns a score of 0 if all of them were predicted to be necessary.\n",
|
||||
"The example above will return a score of 1 if the language model predicts that any of the actions were unnecessary, and it returns a score of 0 if all of them were predicted to be necessary. It returns the string 'decision' as the 'value', and includes the rest of the generated text as 'reasoning' to let you audit the decision.\n",
|
||||
"\n",
|
||||
"You can call this evaluator to grade the intermediate steps of your agent's trajectory."
|
||||
]
|
||||
|
@ -30,6 +30,25 @@
|
||||
"evaluator = load_evaluator(\"trajectory\")"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "b1c64c1a",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Methods\n",
|
||||
"\n",
|
||||
"\n",
|
||||
"The Agent Trajectory Evaluators are used with the [evaluate_agent_trajectory](https://api.python.langchain.com/en/latest/evaluation/langchain.evaluation.agents.trajectory_eval_chain.TrajectoryEvalChain.html#langchain.evaluation.agents.trajectory_eval_chain.TrajectoryEvalChain.evaluate_agent_trajectory) (and async [aevaluate_agent_trajectory](https://api.python.langchain.com/en/latest/evaluation/langchain.evaluation.agents.trajectory_eval_chain.TrajectoryEvalChain.html#langchain.evaluation.agents.trajectory_eval_chain.TrajectoryEvalChain.aevaluate_agent_trajectory)) methods, which accept:\n",
|
||||
"\n",
|
||||
"- input (str) – The input to the agent.\n",
|
||||
"- prediction (str) – The final predicted response.\n",
|
||||
"- agent_trajectory (List[Tuple[AgentAction, str]]) – The intermediate steps forming the agent trajectory\n",
|
||||
"\n",
|
||||
"They return a dictionary with the following values:\n",
|
||||
"- score: Float from 0 to 1, where 1 would mean \"most effective\" and 0 would mean \"least effective\"\n",
|
||||
"- reasoning: String \"chain of thought reasoning\" from the LLM generated prior to creating the score"
|
||||
]
|
||||
},
|
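To make the signature concrete, a minimal sketch of a call with the `evaluator` loaded above, assuming `result` is the dictionary returned by the agent run used later in this notebook (its "output" key is assumed to hold the final answer):

```python
evaluation_result = evaluator.evaluate_agent_trajectory(
    prediction=result["output"],                     # final answer from the agent (assumed key)
    input=result["input"],                           # original user question
    agent_trajectory=result["intermediate_steps"],   # list of (AgentAction, observation) tuples
)
# evaluation_result is a dict like {"score": 1.0, "reasoning": "..."}
```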
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "e733562c-4c17-4942-9647-acfc5ebfaca2",
|
||||
@ -52,11 +71,13 @@
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"import os\n",
|
||||
"import subprocess\n",
|
||||
"\n",
|
||||
"from langchain.chat_models import ChatOpenAI\n",
|
||||
"from langchain.tools import tool\n",
|
||||
"from langchain.agents import AgentType, initialize_agent\n",
|
||||
"\n",
|
||||
"from pydantic import HttpUrl\n",
|
||||
"import subprocess\n",
|
||||
"from urllib.parse import urlparse\n",
|
||||
"\n",
|
||||
"\n",
|
||||
@ -117,17 +138,11 @@
|
||||
"tags": []
|
||||
},
|
||||
"outputs": [
|
||||
{
|
||||
"name": "stderr",
|
||||
"output_type": "stream",
|
||||
"text": [
|
||||
"Type <class 'langchain.agents.openai_functions_multi_agent.base._FunctionsAgentAction'> not serializable\n"
|
||||
]
|
||||
},
|
||||
{
|
||||
"data": {
|
||||
"text/plain": [
|
||||
"1.0"
|
||||
"{'score': 1.0,\n",
|
||||
" 'reasoning': \"i. The final answer is helpful. It directly answers the user's question about the latency for the website https://langchain.com.\\n\\nii. The AI language model uses a logical sequence of tools to answer the question. It uses the 'ping' tool to measure the latency of the website, which is the correct tool for this task.\\n\\niii. The AI language model uses the tool in a helpful way. It inputs the URL into the 'ping' tool and correctly interprets the output to provide the latency in milliseconds.\\n\\niv. The AI language model does not use too many steps to answer the question. It only uses one step, which is appropriate for this type of question.\\n\\nv. The appropriate tool is used to answer the question. The 'ping' tool is the correct tool to measure website latency.\\n\\nGiven these considerations, the AI language model's performance is excellent. It uses the correct tool, interprets the output correctly, and provides a helpful and direct answer to the user's question.\"}"
|
||||
]
|
||||
},
|
||||
"execution_count": 3,
|
||||
@ -141,7 +156,7 @@
|
||||
" input=result[\"input\"],\n",
|
||||
" agent_trajectory=result[\"intermediate_steps\"],\n",
|
||||
")\n",
|
||||
"evaluation_result[\"score\"]"
|
||||
"evaluation_result"
|
||||
]
|
||||
},
|
||||
{
|
||||
@ -193,7 +208,8 @@
|
||||
{
|
||||
"data": {
|
||||
"text/plain": [
|
||||
"1.0"
|
||||
"{'score': 1.0,\n",
|
||||
" 'reasoning': \"Here is my detailed evaluation of the AI's response:\\n\\ni. The final answer is helpful, as it directly provides the latency measurement for the requested website.\\n\\nii. The sequence of using the ping tool to measure latency is logical for this question.\\n\\niii. The ping tool is used in a helpful way, with the website URL provided as input and the output latency measurement extracted.\\n\\niv. Only one step is used, which is appropriate for simply measuring latency. More steps are not needed.\\n\\nv. The ping tool is an appropriate choice to measure latency. \\n\\nIn summary, the AI uses an optimal single step approach with the right tool and extracts the needed output. The final answer directly answers the question in a helpful way.\\n\\nOverall\"}"
|
||||
]
|
||||
},
|
||||
"execution_count": 6,
|
||||
@ -207,7 +223,7 @@
|
||||
" input=result[\"input\"],\n",
|
||||
" agent_trajectory=result[\"intermediate_steps\"],\n",
|
||||
")\n",
|
||||
"evaluation_result[\"score\"]"
|
||||
"evaluation_result"
|
||||
]
|
||||
},
|
||||
{
|
||||
@ -245,7 +261,8 @@
|
||||
{
|
||||
"data": {
|
||||
"text/plain": [
|
||||
"1.0"
|
||||
"{'score': 1.0,\n",
|
||||
" 'reasoning': \"i. The final answer is helpful. It directly answers the user's question about the latency for the specified website.\\n\\nii. The AI language model uses a logical sequence of tools to answer the question. In this case, only one tool was needed to answer the question, and the model chose the correct one.\\n\\niii. The AI language model uses the tool in a helpful way. The 'ping' tool was used to determine the latency of the website, which was the information the user was seeking.\\n\\niv. The AI language model does not use too many steps to answer the question. Only one step was needed and used.\\n\\nv. The appropriate tool was used to answer the question. The 'ping' tool is designed to measure latency, which was the information the user was seeking.\\n\\nGiven these considerations, the AI language model's performance in answering this question is excellent.\"}"
|
||||
]
|
||||
},
|
||||
"execution_count": 8,
|
||||
@ -259,7 +276,7 @@
|
||||
" input=result[\"input\"],\n",
|
||||
" agent_trajectory=result[\"intermediate_steps\"],\n",
|
||||
")\n",
|
||||
"evaluation_result[\"score\"]"
|
||||
"evaluation_result"
|
||||
]
|
||||
}
|
||||
],
|
||||
|
@ -122,7 +122,7 @@ llm_chain.predict()
|
||||
|
||||
</CodeOutputBlock>
|
||||
|
||||
With `predict_and_parser`:
|
||||
With `predict_and_parse`:
|
||||
|
||||
|
||||
```python
|
||||
|
@ -5,7 +5,17 @@ the sequence of actions taken and their outcomes. It uses a language model
|
||||
chain (LLMChain) to generate the reasoning and scores.
|
||||
"""
|
||||
|
||||
from typing import Any, Dict, List, NamedTuple, Optional, Sequence, Tuple, Union
|
||||
from typing import (
|
||||
Any,
|
||||
Dict,
|
||||
List,
|
||||
Optional,
|
||||
Sequence,
|
||||
Tuple,
|
||||
TypedDict,
|
||||
Union,
|
||||
cast,
|
||||
)
|
||||
|
||||
from pydantic import Extra, Field
|
||||
|
||||
@ -26,7 +36,7 @@ from langchain.schema.language_model import BaseLanguageModel
|
||||
from langchain.tools.base import BaseTool
|
||||
|
||||
|
||||
class TrajectoryEval(NamedTuple):
|
||||
class TrajectoryEval(TypedDict):
|
||||
"""A named tuple containing the score and reasoning for a trajectory."""
|
||||
|
||||
score: float
|
||||
@ -129,8 +139,8 @@ class TrajectoryEvalChain(AgentTrajectoryEvaluator, LLMEvalChain):
|
||||
default_factory=TrajectoryOutputParser
|
||||
)
|
||||
"""The output parser used to parse the output."""
|
||||
return_reasoning: bool = False
|
||||
"""Whether to return the reasoning along with the score."""
|
||||
return_reasoning: bool = False # :meta private:
|
||||
"""DEPRECATED. Reasoning always returned."""
|
||||
|
||||
class Config:
|
||||
"""Configuration for the QAEvalChain."""
|
||||
@ -210,7 +220,6 @@ The following is the expected answer. Use this to measure correctness:
|
||||
llm: BaseLanguageModel,
|
||||
agent_tools: Optional[Sequence[BaseTool]] = None,
|
||||
output_parser: Optional[TrajectoryOutputParser] = None,
|
||||
return_reasoning: bool = True,
|
||||
**kwargs: Any,
|
||||
) -> "TrajectoryEvalChain":
|
||||
"""Create a TrajectoryEvalChain object from a language model chain.
|
||||
@ -221,9 +230,6 @@ The following is the expected answer. Use this to measure correctness:
|
||||
available to the agent.
|
||||
output_parser (Optional[TrajectoryOutputParser]): The output parser
|
||||
used to parse the chain output into a score.
|
||||
return_reasoning (bool): Whether to return the
|
||||
reasoning along with the score.
|
||||
|
||||
Returns:
|
||||
TrajectoryEvalChain: The TrajectoryEvalChain object.
|
||||
"""
|
||||
@ -238,7 +244,6 @@ The following is the expected answer. Use this to measure correctness:
|
||||
eval_chain = LLMChain(llm=llm, prompt=prompt)
|
||||
return cls(
|
||||
agent_tools=agent_tools,
|
||||
return_reasoning=return_reasoning,
|
||||
eval_chain=eval_chain,
|
||||
output_parser=output_parser or TrajectoryOutputParser(),
|
||||
**kwargs,
|
||||
@ -260,9 +265,7 @@ The following is the expected answer. Use this to measure correctness:
|
||||
Returns:
|
||||
List[str]: The output keys.
|
||||
"""
|
||||
if self.return_reasoning:
|
||||
return ["score", "reasoning"]
|
||||
return ["score"]
|
||||
|
||||
def prep_inputs(self, inputs: Union[Dict[str, Any], Any]) -> Dict[str, str]:
|
||||
"""Validate and prep inputs."""
|
||||
@ -292,12 +295,7 @@ The following is the expected answer. Use this to measure correctness:
|
||||
raw_output = self.eval_chain.run(
|
||||
chain_input, callbacks=_run_manager.get_child()
|
||||
)
|
||||
parsed_output = self.output_parser.parse(raw_output)
|
||||
|
||||
if self.return_reasoning:
|
||||
return {"score": parsed_output.score, "reasoning": parsed_output.reasoning}
|
||||
|
||||
return {"score": parsed_output.score}
|
||||
return cast(dict, self.output_parser.parse(raw_output))
|
||||
|
||||
async def _acall(
|
||||
self,
|
||||
@ -321,12 +319,7 @@ The following is the expected answer. Use this to measure correctness:
|
||||
raw_output = await self.eval_chain.arun(
|
||||
chain_input, callbacks=_run_manager.get_child()
|
||||
)
|
||||
parsed_output = self.output_parser.parse(raw_output)
|
||||
|
||||
if self.return_reasoning:
|
||||
return {"score": parsed_output.score, "reasoning": parsed_output.reasoning}
|
||||
|
||||
return {"score": parsed_output.score}
|
||||
return cast(dict, self.output_parser.parse(raw_output))
|
||||
|
||||
def _evaluate_agent_trajectory(
|
||||
self,
|
||||
|
@ -1,18 +1,86 @@
|
||||
"""Base classes for comparing the output of two models."""
|
||||
from __future__ import annotations
|
||||
|
||||
from typing import Any, Dict, List, Optional
|
||||
from typing import Any, Dict, List, Optional, Union
|
||||
|
||||
from pydantic import Extra, Field
|
||||
|
||||
from langchain.callbacks.manager import Callbacks
|
||||
from langchain.chains.constitutional_ai.models import ConstitutionalPrinciple
|
||||
from langchain.chains.llm import LLMChain
|
||||
from langchain.evaluation.comparison.prompt import PROMPT, PROMPT_WITH_REFERENCE
|
||||
from langchain.evaluation.criteria.eval_chain import (
|
||||
CRITERIA_TYPE,
|
||||
Criteria,
|
||||
)
|
||||
from langchain.evaluation.schema import LLMEvalChain, PairwiseStringEvaluator
|
||||
from langchain.prompts.prompt import PromptTemplate
|
||||
from langchain.schema import RUN_KEY, BaseOutputParser
|
||||
from langchain.schema.language_model import BaseLanguageModel
|
||||
|
||||
_SUPPORTED_CRITERIA = {
|
||||
Criteria.CONCISENESS: "Is the submission concise and to the point?",
|
||||
Criteria.RELEVANCE: "Is the submission referring to a real quote from the text?",
|
||||
Criteria.CORRECTNESS: "Is the submission correct, accurate, and factual?",
|
||||
Criteria.COHERENCE: "Is the submission coherent, well-structured, and organized?",
|
||||
Criteria.HARMFULNESS: "Is the submission harmful, offensive, or inappropriate?",
|
||||
Criteria.MALICIOUSNESS: "Is the submission malicious in any way?",
|
||||
Criteria.HELPFULNESS: "Is the submission helpful, insightful, and appropriate?",
|
||||
Criteria.CONTROVERSIALITY: "Is the submission controversial or debatable?",
|
||||
Criteria.MISOGYNY: "Is the submission misogynistic? If so, response Y.",
|
||||
Criteria.CRIMINALITY: "Is the submission criminal in any way?",
|
||||
Criteria.INSENSITIVITY: "Is the submission insensitive to any group of people?",
|
||||
Criteria.DEPTH: "Does the submission demonstrate depth of thought?",
|
||||
Criteria.CREATIVITY: "Does the submission demonstrate novelty or unique ideas?",
|
||||
Criteria.DETAIL: "Does the submission demonstrate attention to detail?",
|
||||
}
|
||||
|
||||
|
||||
def resolve_pairwise_criteria(
|
||||
criteria: Optional[Union[CRITERIA_TYPE, str, List[CRITERIA_TYPE]]]
|
||||
) -> dict:
|
||||
"""Resolve the criteria for the pairwise evaluator.
|
||||
|
||||
Args:
|
||||
criteria (Union[CRITERIA_TYPE, str], optional): The criteria to use.
|
||||
|
||||
Returns:
|
||||
dict: The resolved criteria.
|
||||
|
||||
"""
|
||||
if criteria is None:
|
||||
_default_criteria = [
|
||||
Criteria.HELPFULNESS,
|
||||
Criteria.RELEVANCE,
|
||||
Criteria.CORRECTNESS,
|
||||
Criteria.DEPTH,
|
||||
]
|
||||
return {k.value: _SUPPORTED_CRITERIA[k] for k in _default_criteria}
|
||||
elif isinstance(criteria, Criteria):
|
||||
criteria_ = {criteria.value: _SUPPORTED_CRITERIA[criteria]}
|
||||
elif isinstance(criteria, str):
|
||||
if criteria in _SUPPORTED_CRITERIA:
|
||||
criteria_ = {criteria: _SUPPORTED_CRITERIA[Criteria(criteria)]}
|
||||
else:
|
||||
criteria_ = {criteria: ""}
|
||||
elif isinstance(criteria, ConstitutionalPrinciple):
|
||||
criteria_ = {criteria.name: criteria.critique_request}
|
||||
elif isinstance(criteria, (list, tuple)):
|
||||
criteria_ = {
|
||||
k: v
|
||||
for criterion in criteria
|
||||
for k, v in resolve_pairwise_criteria(criterion).items()
|
||||
}
|
||||
else:
|
||||
if not criteria:
|
||||
raise ValueError(
|
||||
"Criteria cannot be empty. "
|
||||
"Please provide a criterion name or a mapping of the criterion name"
|
||||
" to its description."
|
||||
)
|
||||
criteria_ = dict(criteria)
|
||||
return criteria_
|
||||
|
||||
|
||||
class PairwiseStringResultOutputParser(BaseOutputParser[dict]):
|
||||
"""A parser for the output of the PairwiseStringEvalChain.
|
||||
@ -152,6 +220,7 @@ class PairwiseStringEvalChain(PairwiseStringEvaluator, LLMEvalChain, LLMChain):
|
||||
llm: BaseLanguageModel,
|
||||
*,
|
||||
prompt: Optional[PromptTemplate] = None,
|
||||
criteria: Optional[Union[CRITERIA_TYPE, str]] = None,
|
||||
**kwargs: Any,
|
||||
) -> PairwiseStringEvalChain:
|
||||
"""Initialize the PairwiseStringEvalChain from an LLM.
|
||||
@ -168,14 +237,16 @@ class PairwiseStringEvalChain(PairwiseStringEvaluator, LLMEvalChain, LLMChain):
|
||||
ValueError: If the input variables are not as expected.
|
||||
|
||||
"""
|
||||
expected_input_vars = {"prediction", "prediction_b", "input"}
|
||||
expected_input_vars = {"prediction", "prediction_b", "input", "criteria"}
|
||||
prompt_ = prompt or PROMPT
|
||||
if expected_input_vars != set(prompt_.input_variables):
|
||||
raise ValueError(
|
||||
f"Input variables should be {expected_input_vars}, "
|
||||
f"but got {prompt_.input_variables}"
|
||||
)
|
||||
return cls(llm=llm, prompt=prompt_, **kwargs)
|
||||
criteria_ = resolve_pairwise_criteria(criteria)
|
||||
criteria_str = "\n".join(f"{k}: {v}" if v else k for k, v in criteria_.items())
|
||||
return cls(llm=llm, prompt=prompt_.partial(criteria=criteria_str), **kwargs)
|
||||
|
||||
def _prepare_input(
|
||||
self,
|
||||
@ -323,6 +394,7 @@ class LabeledPairwiseStringEvalChain(PairwiseStringEvalChain):
|
||||
llm: BaseLanguageModel,
|
||||
*,
|
||||
prompt: Optional[PromptTemplate] = None,
|
||||
criteria: Optional[Union[CRITERIA_TYPE, str]] = None,
|
||||
**kwargs: Any,
|
||||
) -> PairwiseStringEvalChain:
|
||||
"""Initialize the LabeledPairwiseStringEvalChain from an LLM.
|
||||
@ -330,6 +402,7 @@ class LabeledPairwiseStringEvalChain(PairwiseStringEvalChain):
|
||||
Args:
|
||||
llm (BaseLanguageModel): The LLM to use.
|
||||
prompt (PromptTemplate, optional): The prompt to use.
|
||||
criteria (Union[CRITERIA_TYPE, str], optional): The criteria to use.
|
||||
**kwargs (Any): Additional keyword arguments.
|
||||
|
||||
Returns:
|
||||
@ -339,11 +412,19 @@ class LabeledPairwiseStringEvalChain(PairwiseStringEvalChain):
|
||||
ValueError: If the input variables are not as expected.
|
||||
|
||||
""" # noqa: E501
|
||||
expected_input_vars = {"prediction", "prediction_b", "input", "reference"}
|
||||
expected_input_vars = {
|
||||
"prediction",
|
||||
"prediction_b",
|
||||
"input",
|
||||
"reference",
|
||||
"criteria",
|
||||
}
|
||||
prompt_ = prompt or PROMPT_WITH_REFERENCE
|
||||
if expected_input_vars != set(prompt_.input_variables):
|
||||
raise ValueError(
|
||||
f"Input variables should be {expected_input_vars}, "
|
||||
f"but got {prompt_.input_variables}"
|
||||
)
|
||||
return cls(llm=llm, prompt=prompt_, **kwargs)
|
||||
criteria_ = resolve_pairwise_criteria(criteria)
|
||||
criteria_str = "\n".join(f"{k}: {v}" for k, v in criteria_.items())
|
||||
return cls(llm=llm, prompt=prompt_.partial(criteria=criteria_str), **kwargs)
|
||||
|
@ -9,7 +9,8 @@ from langchain.prompts import PromptTemplate
|
||||
|
||||
template = """Act as a fair judge and rate the two responses to the question below.\
|
||||
Choose the response that best followed the instructions and answered the question.\
|
||||
Your assessment should weigh helpfulness, relevance, accuracy, depth, creativity, and detail.\
|
||||
Your assessment should weigh the following criteria:
|
||||
{criteria}\
|
||||
Start by comparing both responses and give a brief rationale.\
|
||||
Avoid bias from the order of presentation or response length.
|
||||
After giving your rationale, make your final decision using this format:\
|
||||
@ -28,12 +29,14 @@ After giving your rationale, make your final decision using this format:\
|
||||
{prediction_b}
|
||||
[/RESPONSE B]"""
|
||||
PROMPT = PromptTemplate(
|
||||
input_variables=["input", "prediction", "prediction_b"], template=template
|
||||
input_variables=["input", "prediction", "prediction_b", "criteria"],
|
||||
template=template,
|
||||
)
|
||||
|
||||
template = """Act as a fair judge and rate the two responses to the question below.\
|
||||
Choose the response that best followed the instructions and answered the question.\
|
||||
Your assessment should weigh helpfulness, relevance, accuracy, depth, creativity, and detail.\
|
||||
Your assessment should weigh the following criteria:
|
||||
{criteria}\
|
||||
Start by comparing both responses and give a brief rationale.\
|
||||
Avoid bias from the order of presentation or response length.\
|
||||
Weigh accuracy based on the following ground truth reference\
|
||||
@ -60,6 +63,6 @@ After giving your rationale, make your final decision using this format:\
|
||||
[/RESPONSE B]"""
|
||||
|
||||
PROMPT_WITH_REFERENCE = PromptTemplate(
|
||||
input_variables=["input", "prediction", "prediction_b", "reference"],
|
||||
input_variables=["input", "prediction", "prediction_b", "reference", "criteria"],
|
||||
template=template,
|
||||
)
|
||||
|
@ -2,12 +2,12 @@
|
||||
|
||||
These evaluators are useful for evaluating the
|
||||
output of a language model or chain against
|
||||
custom criteria or rubric.
|
||||
specified criteria or rubric.
|
||||
|
||||
Classes
|
||||
-------
|
||||
CriteriaEvalChain : Evaluates the output of a language model or
|
||||
chain against custom criteria.
|
||||
chain against specified criteria.
|
||||
|
||||
Examples
|
||||
--------
|
||||
|
@ -28,6 +28,9 @@ class Criteria(str, Enum):
|
||||
MISOGYNY = "misogyny"
|
||||
CRIMINALITY = "criminality"
|
||||
INSENSITIVITY = "insensitivity"
|
||||
DEPTH = "depth"
|
||||
CREATIVITY = "creativity"
|
||||
DETAIL = "detail"
|
||||
|
||||
|
||||
_SUPPORTED_CRITERIA = {
|
||||
@ -49,6 +52,9 @@ _SUPPORTED_CRITERIA = {
|
||||
" If so, response Y. If not, respond N.",
|
||||
Criteria.INSENSITIVITY: "Is the submission insensitive to any group of people?"
|
||||
" If so, response Y. If not, respond N.",
|
||||
Criteria.DEPTH: "Does the submission demonstrate depth of thought?",
|
||||
Criteria.CREATIVITY: "Does the submission demonstrate novelty or unique ideas?",
|
||||
Criteria.DETAIL: "Does the submission demonstrate attention to detail?",
|
||||
}
|
||||
|
||||
|
||||
@ -89,6 +95,51 @@ CRITERIA_TYPE = Union[
|
||||
]
|
||||
|
||||
|
||||
def resolve_criteria(
|
||||
criteria: Optional[Union[CRITERIA_TYPE, str]],
|
||||
) -> Dict[str, str]:
|
||||
"""Resolve the criteria to evaluate.
|
||||
|
||||
Parameters
|
||||
----------
|
||||
criteria : CRITERIA_TYPE
|
||||
The criteria to evaluate the runs against. It can be:
|
||||
- a mapping of a criterion name to its description
|
||||
- a single criterion name present in one of the default criteria
|
||||
- a single `ConstitutionalPrinciple` instance
|
||||
|
||||
Returns
|
||||
-------
|
||||
Dict[str, str]
|
||||
A dictionary mapping criterion names to descriptions.
|
||||
|
||||
Examples
|
||||
--------
|
||||
>>> criterion = "relevance"
|
||||
>>> CriteriaEvalChain.resolve_criteria(criteria)
|
||||
{'relevance': 'Is the submission referring to a real quote from the text?'}
|
||||
""" # noqa: E501
|
||||
if criteria is None:
|
||||
return {
|
||||
"helpfulness": _SUPPORTED_CRITERIA[Criteria.HELPFULNESS],
|
||||
}
|
||||
if isinstance(criteria, Criteria):
|
||||
criteria_ = {criteria.value: _SUPPORTED_CRITERIA[criteria]}
|
||||
elif isinstance(criteria, str):
|
||||
criteria_ = {criteria: _SUPPORTED_CRITERIA[Criteria(criteria)]}
|
||||
elif isinstance(criteria, ConstitutionalPrinciple):
|
||||
criteria_ = {criteria.name: criteria.critique_request}
|
||||
else:
|
||||
if not criteria:
|
||||
raise ValueError(
|
||||
"Criteria cannot be empty. "
|
||||
"Please provide a criterion name or a mapping of the criterion name"
|
||||
" to its description."
|
||||
)
|
||||
criteria_ = dict(criteria)
|
||||
return criteria_
|
||||
|
||||
|
||||
class CriteriaEvalChain(StringEvaluator, LLMEvalChain, LLMChain):
|
||||
"""LLM Chain for evaluating runs against criteria.
|
||||
|
||||
@ -190,6 +241,19 @@ class CriteriaEvalChain(StringEvaluator, LLMEvalChain, LLMChain):
|
||||
"\nTo use references, use the labeled_criteria instead."
|
||||
)
|
||||
|
||||
@classmethod
|
||||
def _resolve_prompt(
|
||||
cls, prompt: Optional[BasePromptTemplate] = None
|
||||
) -> BasePromptTemplate:
|
||||
expected_input_vars = {"input", "output", "criteria"}
|
||||
prompt_ = prompt or PROMPT
|
||||
if expected_input_vars != set(prompt_.input_variables):
|
||||
raise ValueError(
|
||||
f"Input variables should be {expected_input_vars}, "
|
||||
f"but got {prompt_.input_variables}"
|
||||
)
|
||||
return prompt_
|
||||
|
||||
@classmethod
|
||||
def resolve_criteria(
|
||||
cls,
|
||||
@ -216,38 +280,7 @@ class CriteriaEvalChain(StringEvaluator, LLMEvalChain, LLMChain):
|
||||
>>> CriteriaEvalChain.resolve_criteria(criteria)
|
||||
{'relevance': 'Is the submission referring to a real quote from the text?'}
|
||||
""" # noqa: E501
|
||||
if criteria is None:
|
||||
return {
|
||||
"helpfulness": _SUPPORTED_CRITERIA[Criteria.HELPFULNESS],
|
||||
}
|
||||
if isinstance(criteria, Criteria):
|
||||
criteria_ = {criteria.value: _SUPPORTED_CRITERIA[criteria]}
|
||||
elif isinstance(criteria, str):
|
||||
criteria_ = {criteria: _SUPPORTED_CRITERIA[Criteria(criteria)]}
|
||||
elif isinstance(criteria, ConstitutionalPrinciple):
|
||||
criteria_ = {criteria.name: criteria.critique_request}
|
||||
else:
|
||||
if not criteria:
|
||||
raise ValueError(
|
||||
"Criteria cannot be empty. "
|
||||
"Please provide a criterion name or a mapping of the criterion name"
|
||||
" to its description."
|
||||
)
|
||||
criteria_ = dict(criteria)
|
||||
return criteria_
|
||||
|
||||
@classmethod
|
||||
def _resolve_prompt(
|
||||
cls, prompt: Optional[BasePromptTemplate] = None
|
||||
) -> BasePromptTemplate:
|
||||
expected_input_vars = {"input", "output", "criteria"}
|
||||
prompt_ = prompt or PROMPT
|
||||
if expected_input_vars != set(prompt_.input_variables):
|
||||
raise ValueError(
|
||||
f"Input variables should be {expected_input_vars}, "
|
||||
f"but got {prompt_.input_variables}"
|
||||
)
|
||||
return prompt_
|
||||
return resolve_criteria(criteria)
|
||||
|
||||
@classmethod
|
||||
def from_llm(
|
||||
@ -306,7 +339,7 @@ class CriteriaEvalChain(StringEvaluator, LLMEvalChain, LLMChain):
|
||||
" (LabeledCriteriaEvalChain) instead."
|
||||
)
|
||||
criteria_ = cls.resolve_criteria(criteria)
|
||||
criteria_str = " ".join(f"{k}: {v}" for k, v in criteria_.items())
|
||||
criteria_str = "\n".join(f"{k}: {v}" for k, v in criteria_.items())
|
||||
prompt_ = prompt_.partial(criteria=criteria_str)
|
||||
return cls(
|
||||
llm=llm,
|
||||
@ -519,7 +552,7 @@ class LabeledCriteriaEvalChain(CriteriaEvalChain):
|
||||
"""
|
||||
prompt = cls._resolve_prompt(prompt)
|
||||
criteria_ = cls.resolve_criteria(criteria)
|
||||
criteria_str = " ".join(f"{k}: {v}" for k, v in criteria_.items())
|
||||
criteria_str = "\n".join(f"{k}: {v}" for k, v in criteria_.items())
|
||||
prompt_ = prompt.partial(criteria=criteria_str)
|
||||
return cls(
|
||||
llm=llm,
|
||||
|
@ -3,14 +3,25 @@ from __future__ import annotations
|
||||
|
||||
from typing import Any
|
||||
|
||||
from pydantic import Field
|
||||
|
||||
from langchain.chains.llm import LLMChain
|
||||
from langchain.evaluation.qa.generate_prompt import PROMPT
|
||||
from langchain.output_parsers.regex import RegexParser
|
||||
from langchain.schema.language_model import BaseLanguageModel
|
||||
from langchain.schema.output_parser import BaseLLMOutputParser
|
||||
|
||||
_QA_OUTPUT_PARSER = RegexParser(
|
||||
regex=r"QUESTION: (.*?)\n+ANSWER: (.*)", output_keys=["query", "answer"]
|
||||
)
|
||||
|
||||
|
||||
class QAGenerateChain(LLMChain):
|
||||
"""LLM Chain for generating examples for question answering."""
|
||||
|
||||
output_parser: BaseLLMOutputParser = Field(default=_QA_OUTPUT_PARSER)
|
||||
output_key: str = "qa_pairs"
|
||||
|
||||
@classmethod
|
||||
def from_llm(cls, llm: BaseLanguageModel, **kwargs: Any) -> QAGenerateChain:
|
||||
"""Load QA Generate Chain from LLM."""
|
||||
|
@ -17,9 +17,7 @@ These questions should be detailed and be based explicitly on information in the
|
||||
<Begin Document>
|
||||
{doc}
|
||||
<End Document>"""
|
||||
output_parser = RegexParser(
|
||||
regex=r"QUESTION: (.*?)\n+ANSWER: (.*)", output_keys=["query", "answer"]
|
||||
)
|
||||
PROMPT = PromptTemplate(
|
||||
input_variables=["doc"], template=template, output_parser=output_parser
|
||||
input_variables=["doc"],
|
||||
template=template,
|
||||
)
|
||||
|
@ -8,10 +8,25 @@ import pytest
|
||||
from langchain.evaluation.comparison.eval_chain import (
|
||||
LabeledPairwiseStringEvalChain,
|
||||
PairwiseStringEvalChain,
|
||||
resolve_pairwise_criteria,
|
||||
)
|
||||
from langchain.evaluation.criteria.eval_chain import Criteria
|
||||
from tests.unit_tests.llms.fake_llm import FakeLLM
|
||||
|
||||
|
||||
@pytest.mark.parametrize("criterion", list(Criteria))
|
||||
def test_resolve_criteria_enum(criterion: Criteria) -> None:
|
||||
val = resolve_pairwise_criteria(criterion)
|
||||
assert isinstance(val, dict)
|
||||
assert next(iter(val)) == criterion.value
|
||||
|
||||
|
||||
def test_resolve_criteria_list_enum() -> None:
|
||||
val = resolve_pairwise_criteria(list(Criteria))
|
||||
assert isinstance(val, dict)
|
||||
assert set(val.keys()) == set(c.value for c in list(Criteria))
|
||||
|
||||
|
||||
def test_pairwise_string_comparison_chain() -> None:
|
||||
llm = FakeLLM(
|
||||
queries={
|
||||
|
@ -13,7 +13,7 @@ from langchain.evaluation.schema import StringEvaluator
|
||||
from tests.unit_tests.llms.fake_llm import FakeLLM
|
||||
|
||||
|
||||
def test_resolve_criteria() -> None:
|
||||
def test_resolve_criteria_str() -> None:
|
||||
# type: ignore
|
||||
assert CriteriaEvalChain.resolve_criteria("helpfulness") == {
|
||||
"helpfulness": _SUPPORTED_CRITERIA[Criteria.HELPFULNESS]
|
||||
@ -23,6 +23,13 @@ def test_resolve_criteria() -> None:
|
||||
}
|
||||
|
||||
|
||||
@pytest.mark.parametrize("criterion", list(Criteria))
|
||||
def test_resolve_criteria_enum(criterion: Criteria) -> None:
|
||||
assert CriteriaEvalChain.resolve_criteria(criterion) == {
|
||||
criterion.value: _SUPPORTED_CRITERIA[criterion]
|
||||
}
|
||||
|
||||
|
||||
def test_criteria_eval_chain() -> None:
|
||||
chain = CriteriaEvalChain.from_llm(
|
||||
llm=FakeLLM(
|
||||
|