Mirror of https://github.com/hwchase17/langchain, synced 2024-11-06 03:20:49 +00:00
Add apredict_and_parse to LLM (#2164)
`predict_and_parse` exists, and it's a nice abstraction to allow for applying output parsers to LLM generations. And async is very useful.

As an aside, the difference between `call`/`acall`, `predict`/`apredict`, and `generate`/`agenerate` isn't entirely clear to me, other than that they all call into the LLM in slightly different ways. Is there some documentation or a good way to think about these differences?

One thought: output parsers should just work magically for all those LLM calls. If the `output_parser` arg is set on the prompt, the LLM has access, so it seems like extra work on the user's end to have to call `output_parser.parse`. If this sounds reasonable, happy to throw something together. @hwchase17
This commit is contained in: parent 3dc49a04a3, commit 6be67279fb
@@ -174,6 +174,16 @@ class LLMChain(Chain, BaseModel):
         else:
             return result

+    async def apredict_and_parse(
+        self, **kwargs: Any
+    ) -> Union[str, List[str], Dict[str, str]]:
+        """Call apredict and then parse the results."""
+        result = await self.apredict(**kwargs)
+        if self.prompt.output_parser is not None:
+            return self.prompt.output_parser.parse(result)
+        else:
+            return result
+
     def apply_and_parse(
         self, input_list: List[Dict[str, Any]]
     ) -> Sequence[Union[str, List[str], Dict[str, str]]]:
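The pattern added in the diff can be exercised with a minimal standalone sketch. Note this is an illustration only: `ToyChain` and `CommaSeparatedParser` are hypothetical stand-ins, not langchain's actual classes, but the `apredict_and_parse` body mirrors the method above.

```python
import asyncio
from typing import Any, List, Optional, Union


class CommaSeparatedParser:
    """Toy output parser: splits a comma-separated LLM string into a list."""

    def parse(self, text: str) -> List[str]:
        return [part.strip() for part in text.split(",")]


class ToyChain:
    """Hypothetical stand-in for LLMChain, holding an optional output parser."""

    def __init__(self, output_parser: Optional[CommaSeparatedParser] = None):
        self.output_parser = output_parser

    async def apredict(self, **kwargs: Any) -> str:
        # Stand-in for the real async LLM call.
        return "red, green, blue"

    async def apredict_and_parse(self, **kwargs: Any) -> Union[str, List[str]]:
        """Call apredict and then parse the results (mirrors the diff)."""
        result = await self.apredict(**kwargs)
        if self.output_parser is not None:
            return self.output_parser.parse(result)
        else:
            return result


# With a parser set, the raw string comes back parsed into a list.
parsed = asyncio.run(ToyChain(CommaSeparatedParser()).apredict_and_parse())
# Without a parser, the raw string is returned unchanged.
raw = asyncio.run(ToyChain().apredict_and_parse())
```

Here `parsed` is `["red", "green", "blue"]` while `raw` stays `"red, green, blue"`, which is exactly the branch on `output_parser is not None` in the diff.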