Mirror of https://github.com/hwchase17/langchain
synced 2024-11-06 03:20:49 +00:00
aa439ac2ff
Right now, eval chains require a ground-truth answer for every question. That ground truth is cumbersome to collect, so this gets around the issue with two things:

* Adding a `context` param in `ContextQAEvalChain` and simply evaluating whether the question is answered accurately from the context
* Adding chain-of-thought explanation prompting to improve accuracy without ground truth

This also reaches feature parity with openai/evals, which has the same contextual eval without ground truth.

TODO in follow-up:

* Better prompt inheritance. There is no need for a separate prompt for CoT reasoning; how can we merge them together?

---------

Co-authored-by: Vashisht Madhavan <vashishtmadhavan@Vashs-MacBook-Pro.local>
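A minimal sketch of the idea described above: grade a model's answer against the retrieval context alone, with the grader asked to reason step by step before emitting a verdict. The prompt wording and the `build_grading_prompt`/`parse_grade` helpers are illustrative assumptions, not `ContextQAEvalChain`'s actual implementation.

```python
# Hypothetical sketch of ground-truth-free contextual QA evaluation.
# The grading prompt asks the grader LLM to think step by step
# (chain of thought) and finish with an explicit GRADE line, so no
# gold answer is needed -- only the question, context, and prediction.

COT_GRADING_PROMPT = """\
You are grading a student's answer using only the context below.
Think step by step, then finish with "GRADE: CORRECT" or "GRADE: INCORRECT".

QUESTION: {question}
CONTEXT: {context}
STUDENT ANSWER: {answer}
"""


def build_grading_prompt(question: str, context: str, answer: str) -> str:
    """Fill the CoT grading template; note there is no ground-truth slot."""
    return COT_GRADING_PROMPT.format(
        question=question, context=context, answer=answer
    )


def parse_grade(llm_output: str) -> str:
    """Pull the final CORRECT/INCORRECT verdict out of the CoT explanation."""
    for line in reversed(llm_output.strip().splitlines()):
        if line.strip().upper().startswith("GRADE:"):
            return line.split(":", 1)[1].strip().upper()
    return "INCORRECT"  # conservative default when no verdict is found
```

In the real chain, the filled prompt would be sent to an LLM and the returned explanation parsed for the verdict; the CoT reasoning stays available as the evaluation's explanation.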
Contents of `docs/`:

* `_static`
* `ecosystem`
* `getting_started`
* `modules`
* `reference`
* `tracing`
* `use_cases`
* `conf.py`
* `deployments.md`
* `ecosystem.rst`
* `gallery.rst`
* `glossary.md`
* `index.rst`
* `make.bat`
* `Makefile`
* `model_laboratory.ipynb`
* `reference.rst`
* `requirements.txt`
* `tracing.md`