{
 "cells": [
  {
   "cell_type": "markdown",
   "id": "480b7cf8",
   "metadata": {},
   "source": [
    "# Question Answering\n",
    "\n",
    "This notebook covers how to evaluate generic question answering problems. This is a situation where you have an example containing a question and its corresponding ground truth answer, and you want to measure how well the language model does at answering those questions."
   ]
  },
  {
   "cell_type": "markdown",
   "id": "78e3023b",
   "metadata": {},
   "source": [
    "## Setup\n",
    "\n",
    "For demonstration purposes, we will evaluate a simple question answering system that relies only on the model's internal knowledge. See other notebooks for examples of evaluating question answering over data the model was not trained on."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "id": "96710d50",
   "metadata": {},
   "outputs": [],
   "source": [
    "from langchain.prompts import PromptTemplate\n",
    "from langchain.chains import LLMChain\n",
    "from langchain.llms import OpenAI"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "id": "e33ccf00",
   "metadata": {},
   "outputs": [],
   "source": [
    "prompt = PromptTemplate(\n",
    "    template=\"Question: {question}\\nAnswer:\", input_variables=[\"question\"]\n",
    ")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "id": "172d993a",
   "metadata": {},
   "outputs": [],
   "source": [
    "llm = OpenAI(model_name=\"text-davinci-003\", temperature=0)\n",
    "chain = LLMChain(llm=llm, prompt=prompt)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "0c584440",
   "metadata": {},
   "source": [
    "## Examples\n",
    "For this purpose, we will just use two simple hardcoded examples, but see other notebooks for tips on how to get and/or generate these examples."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "id": "87de1d84",
   "metadata": {},
   "outputs": [],
   "source": [
    "examples = [\n",
    "    {\n",
    "        \"question\": \"Roger has 5 tennis balls. He buys 2 more cans of tennis balls. Each can has 3 tennis balls. How many tennis balls does he have now?\",\n",
    "        \"answer\": \"11\",\n",
    "    },\n",
    "    {\n",
    "        \"question\": 'Is the following sentence plausible? \"Joao Moutinho caught the screen pass in the NFC championship.\"',\n",
    "        \"answer\": \"No\",\n",
    "    },\n",
    "]"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "143b1155",
   "metadata": {},
   "source": [
    "## Predictions\n",
    "\n",
    "We can now make and inspect the predictions for these questions."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "id": "c7bd809c",
   "metadata": {},
   "outputs": [],
   "source": [
    "predictions = chain.apply(examples)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "id": "f06dceab",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "[{'text': ' 11 tennis balls'},\n",
       " {'text': ' No, this sentence is not plausible. Joao Moutinho is a professional soccer player, not an American football player, so it is not likely that he would be catching a screen pass in the NFC championship.'}]"
      ]
     },
     "execution_count": 6,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "predictions"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "45cc2f9d",
   "metadata": {},
   "source": [
    "## Evaluation\n",
    "\n",
    "We can see that if we tried to do an exact match against the ground truth answers (`11` and `No`), the predictions would not match, even though the language model is semantically correct in both cases. To account for this, we can use a language model itself to evaluate the answers."
   ]
  },
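  {
   "cell_type": "markdown",
   "id": "9f2c1a3e",
   "metadata": {},
   "source": [
    "To make this concrete, here is a quick check (a minimal illustrative sketch that only uses the `examples` and `predictions` defined above):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "4d0e7b21",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Naive exact match: compare each ground truth answer to the raw model output.\n",
    "# Both checks come back False even though the model is semantically correct.\n",
    "[eg[\"answer\"] == pred[\"text\"].strip() for eg, pred in zip(examples, predictions)]"
   ]
  },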
  {
   "cell_type": "code",
   "execution_count": 7,
   "id": "0cacc65a",
   "metadata": {},
   "outputs": [],
   "source": [
    "from langchain.evaluation.qa import QAEvalChain"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 8,
   "id": "5aa6cd65",
   "metadata": {},
   "outputs": [],
   "source": [
    "llm = OpenAI(temperature=0)\n",
    "eval_chain = QAEvalChain.from_llm(llm)\n",
    "graded_outputs = eval_chain.evaluate(\n",
    "    examples, predictions, question_key=\"question\", prediction_key=\"text\"\n",
    ")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 9,
   "id": "63780020",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Example 0:\n",
      "Question: Roger has 5 tennis balls. He buys 2 more cans of tennis balls. Each can has 3 tennis balls. How many tennis balls does he have now?\n",
      "Real Answer: 11\n",
      "Predicted Answer: 11 tennis balls\n",
      "Predicted Grade: CORRECT\n",
      "\n",
      "Example 1:\n",
      "Question: Is the following sentence plausible? \"Joao Moutinho caught the screen pass in the NFC championship.\"\n",
      "Real Answer: No\n",
      "Predicted Answer: No, this sentence is not plausible. Joao Moutinho is a professional soccer player, not an American football player, so it is not likely that he would be catching a screen pass in the NFC championship.\n",
      "Predicted Grade: CORRECT\n",
      "\n"
     ]
    }
   ],
   "source": [
    "for i, eg in enumerate(examples):\n",
    "    print(f\"Example {i}:\")\n",
    "    print(\"Question: \" + eg[\"question\"])\n",
    "    print(\"Real Answer: \" + eg[\"answer\"])\n",
    "    print(\"Predicted Answer: \" + predictions[i][\"text\"])\n",
    "    print(\"Predicted Grade: \" + graded_outputs[i][\"text\"])\n",
    "    print()"
   ]
  },
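  {
   "cell_type": "markdown",
   "id": "b7e4c9d2",
   "metadata": {},
   "source": [
    "If you want a single summary number, you can aggregate the per-example grades into an accuracy score. This is a minimal sketch that assumes the grader returns \"CORRECT\" or \"INCORRECT\", as in the output above:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "1c6f3a8d",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Count how many predictions were graded CORRECT and report the fraction.\n",
    "# A strict comparison is used (not a substring check), since \"CORRECT\" is\n",
    "# itself a substring of \"INCORRECT\".\n",
    "num_correct = sum(g[\"text\"].strip() == \"CORRECT\" for g in graded_outputs)\n",
    "print(f\"Accuracy: {num_correct / len(graded_outputs):.0%}\")"
   ]
  },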
  {
   "cell_type": "markdown",
   "id": "782ae8c8",
   "metadata": {},
   "source": [
    "## Customize Prompt\n",
    "\n",
    "You can also customize the prompt that is used. Here is an example that asks the model to grade on a scale from 0 to 10.\n",
    "The custom prompt requires three input variables: \"query\" (the question), \"answer\" (the ground truth answer), and \"result\" (the predicted answer)."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "153425c4",
   "metadata": {},
   "outputs": [],
   "source": [
    "from langchain.prompts.prompt import PromptTemplate\n",
    "\n",
    "_PROMPT_TEMPLATE = \"\"\"You are an expert professor specialized in grading students' answers to questions.\n",
    "You are grading the following question:\n",
    "{query}\n",
    "Here is the real answer:\n",
    "{answer}\n",
    "You are grading the following predicted answer:\n",
    "{result}\n",
    "What grade do you give from 0 to 10, where 0 is the lowest (very low similarity) and 10 is the highest (very high similarity)?\n",
    "\"\"\"\n",
    "\n",
    "PROMPT = PromptTemplate(\n",
    "    input_variables=[\"query\", \"answer\", \"result\"], template=_PROMPT_TEMPLATE\n",
    ")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "0a3b0fb7",
   "metadata": {},
   "outputs": [],
   "source": [
    "evalchain = QAEvalChain.from_llm(llm=llm, prompt=PROMPT)\n",
    "evalchain.evaluate(\n",
    "    examples,\n",
    "    predictions,\n",
    "    question_key=\"question\",\n",
    "    answer_key=\"answer\",\n",
    "    prediction_key=\"text\",\n",
    ")"
   ]
  },
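  {
   "cell_type": "markdown",
   "id": "e8a5d1f4",
   "metadata": {},
   "source": [
    "The custom-prompt evaluator returns the same list-of-dicts format keyed by `\"text\"`, so the 0 to 10 scores can be inspected much like the grades above. A minimal sketch (the exact wording of the grader's reply may vary):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "7b9c2e5a",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Re-run the custom-prompt evaluation, keeping the results this time, and\n",
    "# print the grader's raw reply (the 0-10 score) next to each question.\n",
    "custom_graded = evalchain.evaluate(\n",
    "    examples,\n",
    "    predictions,\n",
    "    question_key=\"question\",\n",
    "    answer_key=\"answer\",\n",
    "    prediction_key=\"text\",\n",
    ")\n",
    "for eg, grade in zip(examples, custom_graded):\n",
    "    print(eg[\"question\"])\n",
    "    print(\"Score:\", grade[\"text\"].strip())\n",
    "    print()"
   ]
  },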
  {
   "cell_type": "markdown",
   "id": "cb1cf335",
   "metadata": {},
   "source": [
    "## Evaluation without Ground Truth\n",
    "It's possible to evaluate question answering systems without ground truth. You would need a `\"context\"` input that reflects the information the LLM uses to answer the question. This context can be obtained by any retrieval system. Here's an example of how it works:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "6c59293f",
   "metadata": {},
   "outputs": [],
   "source": [
    "context_examples = [\n",
    "    {\n",
    "        \"question\": \"How old am I?\",\n",
    "        \"context\": \"I am 30 years old. I live in New York and take the train to work everyday.\",\n",
    "    },\n",
    "    {\n",
    "        \"question\": \"Who won the NFC championship game in 2023?\",\n",
    "        \"context\": \"NFC Championship Game 2023: Philadelphia Eagles 31, San Francisco 49ers 7\",\n",
    "    },\n",
    "]\n",
    "QA_PROMPT = \"Answer the question based on the context\\nContext:{context}\\nQuestion:{question}\\nAnswer:\"\n",
    "template = PromptTemplate(input_variables=[\"context\", \"question\"], template=QA_PROMPT)\n",
    "qa_chain = LLMChain(llm=llm, prompt=template)\n",
    "predictions = qa_chain.apply(context_examples)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 12,
   "id": "e500d0cc",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "[{'text': 'You are 30 years old.'},\n",
       " {'text': ' The Philadelphia Eagles won the NFC championship game in 2023.'}]"
      ]
     },
     "execution_count": 12,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "predictions"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 9,
   "id": "6d8cbc1d",
   "metadata": {},
   "outputs": [],
   "source": [
    "from langchain.evaluation.qa import ContextQAEvalChain\n",
    "\n",
    "eval_chain = ContextQAEvalChain.from_llm(llm)\n",
    "graded_outputs = eval_chain.evaluate(\n",
    "    context_examples, predictions, question_key=\"question\", prediction_key=\"text\"\n",
    ")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 13,
   "id": "6c5262d0",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "[{'text': ' CORRECT'}, {'text': ' CORRECT'}]"
      ]
     },
     "execution_count": 13,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "graded_outputs"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "aaa61f0c",
   "metadata": {},
   "source": [
    "## Comparing to other evaluation metrics\n",
    "We can compare the evaluation results we get to other common evaluation metrics. To do this, let's load some evaluation metrics from HuggingFace's `evaluate` package."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 10,
   "id": "d851453b",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Some data munging to get the examples in the right format\n",
    "for i, eg in enumerate(examples):\n",
    "    eg[\"id\"] = str(i)\n",
    "    eg[\"answers\"] = {\"text\": [eg[\"answer\"]], \"answer_start\": [0]}\n",
    "    predictions[i][\"id\"] = str(i)\n",
    "    predictions[i][\"prediction_text\"] = predictions[i][\"text\"]\n",
    "\n",
    "for p in predictions:\n",
    "    del p[\"text\"]\n",
    "\n",
    "new_examples = examples.copy()\n",
    "for eg in new_examples:\n",
    "    del eg[\"question\"]\n",
    "    del eg[\"answer\"]"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 11,
   "id": "c38eb3e9",
   "metadata": {
    "scrolled": true
   },
   "outputs": [],
   "source": [
    "from evaluate import load\n",
    "\n",
    "squad_metric = load(\"squad\")\n",
    "results = squad_metric.compute(\n",
    "    references=new_examples,\n",
    "    predictions=predictions,\n",
    ")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 12,
   "id": "07d68f85",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "{'exact_match': 0.0, 'f1': 28.125}"
      ]
     },
     "execution_count": 12,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "results"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "3b775150",
   "metadata": {},
   "outputs": [],
   "source": []
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3 (ipykernel)",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.9.16"
  },
  "vscode": {
   "interpreter": {
    "hash": "53f3bc57609c7a84333bb558594977aa5b4026b1d6070b93987956689e367341"
   }
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}