mirror of https://github.com/hwchase17/langchain
Add apredict_and_parse to LLM (#2164)
`predict_and_parse` exists, and it's a nice abstraction for applying output parsers to LLM generations; an async counterpart would be very useful.

As an aside, the difference between `call`/`acall`, `predict`/`apredict`, and `generate`/`agenerate` isn't entirely clear to me, other than that they all call into the LLM in slightly different ways. Is there documentation, or a good way to think about these differences?

One thought: output parsers should just work for all of those LLM calls. If the `output_parser` arg is set on the prompt, the LLM already has access to it, so it seems like extra work on the user's end to have to call `output_parser.parse` manually. If this sounds reasonable, happy to throw something together. @hwchase17
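To make the proposal concrete, here is a minimal sketch of what `apredict_and_parse` could look like: await the async prediction, then run the completion through the chain's output parser. The `FakeLLMChain` and `CommaSeparatedListParser` classes below are stand-ins invented for illustration, not LangChain's actual implementation.

```python
import asyncio


class CommaSeparatedListParser:
    """Toy output parser: split a completion on commas."""

    def parse(self, text: str) -> list[str]:
        return [part.strip() for part in text.split(",")]


class FakeLLMChain:
    """Stand-in for an LLM chain; a real chain would format the
    prompt and call the model inside predict()/apredict()."""

    def __init__(self, output_parser=None):
        self.output_parser = output_parser

    def predict(self, **kwargs) -> str:
        # Canned completion in place of a real LLM call.
        return "red, green, blue"

    async def apredict(self, **kwargs) -> str:
        # Async variant of predict(); a real chain would await the LLM here.
        return self.predict(**kwargs)

    def predict_and_parse(self, **kwargs):
        # Existing sync abstraction: run the model, then parse.
        completion = self.predict(**kwargs)
        if self.output_parser is not None:
            return self.output_parser.parse(completion)
        return completion

    async def apredict_and_parse(self, **kwargs):
        # The async counterpart this PR proposes.
        completion = await self.apredict(**kwargs)
        if self.output_parser is not None:
            return self.output_parser.parse(completion)
        return completion


chain = FakeLLMChain(output_parser=CommaSeparatedListParser())
result = asyncio.run(chain.apredict_and_parse())
print(result)  # ['red', 'green', 'blue']
```

The sync and async variants mirror each other, which is the pattern the existing `predict`/`apredict` pair already follows.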
parent: 3dc49a04a3
commit: 6be67279fb