langchain/tests/unit_tests/evaluation/qa
Vashisht Madhavan aa439ac2ff
Adding an in-context QA evaluation chain + chain of thought reasoning chain for improved accuracy (#2444)
Right now, eval chains require an answer for every question. It's
cumbersome to collect this ground truth, so this PR works around the
issue with two changes:

* Adding a `context` param in `ContextQAEvalChain` and simply evaluating
whether the question is answered accurately from the context (see the
sketch after this list)
* Adding chain-of-thought explanation prompting to improve accuracy
without ground truth (GT).
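
A minimal sketch of what the context-based evaluation could look like.
`ContextQAEvalChain` is named in this PR; the `from_llm` constructor, the
`evaluate()` keyword arguments (`question_key`, `context_key`,
`prediction_key`), and the sample data are assumptions for illustration,
modeled on the existing `QAEvalChain` interface.

```python
from langchain.llms import OpenAI
from langchain.evaluation.qa import ContextQAEvalChain

# Grader LLM; temperature 0 keeps the grading deterministic.
llm = OpenAI(temperature=0)
eval_chain = ContextQAEvalChain.from_llm(llm)

# No ground-truth answers needed: each example only carries the question
# and the context it should be answerable from.
examples = [
    {
        "query": "What does the sign at the park entrance say?",
        "context": "The sign at the park entrance reads 'No dogs allowed'.",
    }
]
# Model outputs to grade against the context.
predictions = [{"result": "It says that dogs are not allowed in the park."}]

graded = eval_chain.evaluate(
    examples,
    predictions,
    question_key="query",
    context_key="context",
    prediction_key="result",
)
print(graded)  # e.g. [{"text": "CORRECT"}] (exact output format may differ)
```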

This also reaches feature parity with openai/evals, which has the same
contextual eval without GT.

TODO in follow-up:
* Better prompt inheritance. There's no need for a separate prompt for
CoT reasoning; how can we merge them together?

---------

Co-authored-by: Vashisht Madhavan <vashishtmadhavan@Vashs-MacBook-Pro.local>
2023-04-06 22:32:41 -07:00
__init__.py
test_eval_chain.py