mirror of
https://github.com/hwchase17/langchain
synced 2024-11-10 01:10:59 +00:00
76a193decc
LLMs struggle with Graph RAG because it differs from vector RAG: you don't provide the whole context, only the answer, which the LLM has to take on faith. That often doesn't work. If you instead wrap the context as a function response, accuracy is much better. btw... `Union[LLMChain, Runnable]` is linting fun, which is why there are so many `# type: ignore` comments.
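The idea above can be sketched with plain OpenAI-style chat message dicts (a hypothetical illustration, not the actual LangChain implementation; the `query_graph` tool name and the message shapes are assumptions): instead of pasting the graph query result into the prompt as a bare assertion, present it as a tool-call/tool-response pair, which models are trained to treat as ground truth.

```python
def messages_with_inline_context(question: str, graph_answer: str) -> list[dict]:
    """Naive approach: assert the graph result directly inside the prompt."""
    return [
        {"role": "system", "content": "Answer using the information provided."},
        {"role": "user", "content": f"{question}\nThe answer is: {graph_answer}"},
    ]


def messages_with_tool_response(question: str, graph_answer: str) -> list[dict]:
    """Wrap the graph result as a tool call plus tool response instead."""
    return [
        {"role": "system", "content": "Answer using the tool results."},
        {"role": "user", "content": question},
        {
            # Simulated assistant turn that "called" the graph tool.
            "role": "assistant",
            "content": None,
            "tool_calls": [
                {
                    "id": "call_1",
                    "type": "function",
                    "function": {"name": "query_graph", "arguments": "{}"},
                }
            ],
        },
        # The graph answer arrives as the tool's response, not as prompt text.
        {"role": "tool", "tool_call_id": "call_1", "content": graph_answer},
    ]


msgs = messages_with_tool_response("Who directed Casino?", "Martin Scorsese")
```

Sending `msgs` to a chat-completion endpoint then lets the model ground its final answer in the tool response rather than in an unattributed claim.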
__init__.py
test_api.py
test_graph_qa.py
test_llm.py
test_pebblo_retrieval.py