Mirror of https://github.com/hwchase17/langchain (synced 2024-11-06 03:20:49 +00:00)

Commit aa439ac2ff
Right now, eval chains require an answer for every question. Collecting this ground truth is cumbersome, so this change works around the issue in two ways:

* Adding a `context` param in `ContextQAEvalChain` and simply evaluating whether the question is answered accurately from the context
* Adding chain-of-thought explanation prompting to improve the accuracy of evaluation without ground truth

This also reaches feature parity with openai/evals, which has the same contextual eval without ground truth.

TODO in follow-up:

* Better prompt inheritance. There should be no need for a separate prompt for CoT reasoning; how can we merge them together?

Co-authored-by: Vashisht Madhavan <vashishtmadhavan@Vashs-MacBook-Pro.local>
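Below is a minimal sketch of how the contextual, no-ground-truth evaluation described above might be invoked. It is for illustration only and is not taken from this commit: the import path, `from_llm` constructor, `evaluate` method, and the `query`/`context`/`result` key names are assumptions based on LangChain's existing QA eval chains, and the example data is made up.

```python
from langchain.evaluation.qa import ContextQAEvalChain
from langchain.llms import OpenAI

# Build the eval chain from any LLM; temperature 0 keeps grading deterministic.
llm = OpenAI(temperature=0)
eval_chain = ContextQAEvalChain.from_llm(llm)

# Each example carries a question and a context passage instead of a gold answer.
examples = [
    {
        "query": "Who wrote 'Pride and Prejudice'?",
        "context": "Pride and Prejudice is an 1813 novel by Jane Austen.",
    }
]

# Predictions are the model outputs being graded.
predictions = [{"result": "Jane Austen"}]

# The chain asks the LLM whether each prediction is supported by its context,
# so no ground-truth answer is required.
graded = eval_chain.evaluate(
    examples,
    predictions,
    question_key="query",
    context_key="context",
    prediction_key="result",
)
print(graded)  # e.g. [{"text": "CORRECT"}], depending on the prompt and LLM output
```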
Directory contents:

* agents
* callbacks
* chains
* data
* docstore
* document_loader
* evaluation
* llms
* output_parsers
* prompts
* tools
* utilities
* __init__.py
* test_bash.py
* test_formatting.py
* test_python.py
* test_sql_database_schema.py
* test_sql_database.py
* test_text_splitter.py