langchain/libs/langchain/langchain/agents/chat/output_parser.py

49 lines · 1.7 KiB · Python

Latest change (PR #5609, 1 year ago): raise an exception in the MRKL and Chat output parsers if the parsed text contains both an action and a final answer.

Previously, if an output parser received a response that included both an action and a final answer, it returned an AgentFinish. This allowed the parser to accept responses that propose an action and hallucinate an answer, without the action ever being parsed or taken by the agent. The PR changes the logic to:

1. Store a variable recording whether the response contains `FINAL_ANSWER_ACTION` (the easier condition to check).
2. Store a variable recording whether the response contains a valid action.
3. If both are present, raise a new exception stating that both are present.
4. If only an action is present, return an AgentAction.
5. If only an answer is present, return an AgentFinish.
6. If neither is present, raise the relevant exception based on the action format (kept consistent with the prior exception messages).

These cases are illustrated in the usage sketch after the listing.

Disclaimer:
* Existing mock data included strings containing both an action and an answer. This might indicate that prioritising the AgentFinish return was always correct and desired behaviour is being patched out; @hwchase17 to advise. It is unclear whether there are allowed cases where this is not hallucination and the LLM should be able to output an action that is not taken.
* `send_to_llm` is not passed through the new exception.

Fixes #5601.

import json
import re
from typing import Union

from langchain.agents.agent import AgentOutputParser
from langchain.agents.chat.prompt import FORMAT_INSTRUCTIONS
from langchain.schema import AgentAction, AgentFinish, OutputParserException

FINAL_ANSWER_ACTION = "Final Answer:"


class ChatOutputParser(AgentOutputParser):
    """Output parser for the chat agent."""

    pattern = re.compile(r"^.*?`{3}(?:json)?\n(.*?)`{3}.*?$", re.DOTALL)
    """Regex pattern to parse the output."""

    def get_format_instructions(self) -> str:
        return FORMAT_INSTRUCTIONS

    def parse(self, text: str) -> Union[AgentAction, AgentFinish]:
        includes_answer = FINAL_ANSWER_ACTION in text
        try:
            found = self.pattern.search(text)
            if not found:
                # Fast fail to parse Final Answer.
                raise ValueError("action not found")
            action = found.group(1)
            response = json.loads(action.strip())
            includes_action = "action" in response
            if includes_answer and includes_action:
                raise OutputParserException(
                    "Parsing LLM output produced a final answer "
                    f"and a parse-able action: {text}"
                )
            return AgentAction(
                response["action"], response.get("action_input", {}), text
            )

        except Exception:
            if not includes_answer:
                raise OutputParserException(f"Could not parse LLM output: {text}")
            output = text.split(FINAL_ANSWER_ACTION)[-1].strip()
            return AgentFinish({"output": output}, text)

    @property
    def _type(self) -> str:
        return "chat"
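
The `pattern` class attribute does the work of locating the action block. The following minimal sketch (not part of the file, standard-library `re` only) shows what it extracts from a typical chat-agent response: the body of the first triple-backtick block, with an optional `json` language tag, which parse() then feeds to json.loads().

import re

# Same regex as ChatOutputParser.pattern: lazily skip any preamble, then capture
# everything between the first ``` or ```json fence and the next closing fence.
pattern = re.compile(r"^.*?`{3}(?:json)?\n(.*?)`{3}.*?$", re.DOTALL)

llm_output = (
    "Thought: I need to look this up.\n"
    "```json\n"
    '{"action": "Search", "action_input": "LangChain"}\n'
    "```"
)

match = pattern.search(llm_output)
if match:
    # Prints the raw JSON payload of the action block.
    print(match.group(1).strip())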
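
And a hedged usage sketch of the parser itself, covering the cases listed in the change note above. It assumes a langchain version in which `ChatOutputParser` is importable from `langchain.agents.chat.output_parser` and `OutputParserException` from `langchain.schema`, matching this file; import paths and the handling of the mixed case may differ across releases, so treat it as illustrative rather than definitive.

from langchain.agents.chat.output_parser import ChatOutputParser
from langchain.schema import OutputParserException

parser = ChatOutputParser()

# 1. A lone ```json action block parses to an AgentAction for the "Search" tool.
action_text = '```json\n{"action": "Search", "action_input": "weather in SF"}\n```'
print(parser.parse(action_text))

# 2. A lone final answer parses to an AgentFinish whose output is the trailing text.
answer_text = "Final Answer: It is sunny."
print(parser.parse(answer_text))

# 3. Both an action block and a final answer: the change above targets this case
#    with an OutputParserException; guard for it, since other releases may instead
#    fall back to one of the two return values.
both_text = action_text + "\nFinal Answer: It is sunny."
try:
    print(parser.parse(both_text))
except OutputParserException as exc:
    print(f"rejected: {exc}")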