langchain/libs/community/langchain_community/chains/graph_qa
Tomaz Bratanic 76a193decc
community[patch]: Add function response to graph cypher qa chain (#22690)
LLMs struggle with graph RAG because, unlike vector RAG, the model isn't given the whole retrieved context, only the database answer, which it has to take on faith. In practice that often doesn't work well. If you instead wrap the retrieved context as a function response, accuracy is much better.
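The idea above can be sketched without LangChain: instead of pasting graph results into the prompt as plain text, present them as the response to a (synthetic) tool call so the model treats them as retrieved evidence. This is a minimal illustration using OpenAI-style chat message dicts; the function name `GraphResult` and the call id are hypothetical, not taken from the PR.

```python
import json


def build_function_response_messages(question, cypher, context):
    """Wrap graph query output as a tool-call / tool-response message pair.

    The assistant message claims a function call was made, and the tool
    message carries the raw graph context as that call's response.
    """
    tool_call_id = "call_graph_1"  # placeholder id for illustration
    return [
        {"role": "user", "content": question},
        {
            "role": "assistant",
            "content": None,
            "tool_calls": [
                {
                    "id": tool_call_id,
                    "type": "function",
                    "function": {
                        "name": "GraphResult",
                        "arguments": json.dumps({"query": cypher}),
                    },
                }
            ],
        },
        # The retrieved context arrives as the function's response,
        # rather than being inlined into the user prompt.
        {"role": "tool", "tool_call_id": tool_call_id, "content": json.dumps(context)},
    ]


messages = build_function_response_messages(
    "Who directed Inception?",
    "MATCH (m:Movie {title:'Inception'})<-[:DIRECTED]-(p) RETURN p.name",
    [{"p.name": "Christopher Nolan"}],
)
```

These messages would then be sent to a chat model along with the original question; the model grounds its answer in the tool response instead of a bare pasted answer.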

By the way, `Union[LLMChain, Runnable]` makes linting fun; that's why there are so many type ignores.
2024-06-10 13:52:17 -07:00
__init__.py multiple: langchain 0.2 in master (#21191) 2024-05-08 16:46:52 -04:00
arangodb.py multiple: langchain 0.2 in master (#21191) 2024-05-08 16:46:52 -04:00
base.py multiple: langchain 0.2 in master (#21191) 2024-05-08 16:46:52 -04:00
cypher_utils.py multiple: langchain 0.2 in master (#21191) 2024-05-08 16:46:52 -04:00
cypher.py community[patch]: Add function response to graph cypher qa chain (#22690) 2024-06-10 13:52:17 -07:00
falkordb.py multiple: langchain 0.2 in master (#21191) 2024-05-08 16:46:52 -04:00
gremlin.py multiple: langchain 0.2 in master (#21191) 2024-05-08 16:46:52 -04:00
hugegraph.py multiple: langchain 0.2 in master (#21191) 2024-05-08 16:46:52 -04:00
kuzu.py [community][graph]: Update KuzuQAChain and docs (#21218) 2024-05-13 17:17:14 -07:00
nebulagraph.py multiple: langchain 0.2 in master (#21191) 2024-05-08 16:46:52 -04:00
neptune_cypher.py multiple: langchain 0.2 in master (#21191) 2024-05-08 16:46:52 -04:00
neptune_sparql.py multiple: langchain 0.2 in master (#21191) 2024-05-08 16:46:52 -04:00
ontotext_graphdb.py multiple: langchain 0.2 in master (#21191) 2024-05-08 16:46:52 -04:00
prompts.py [community][graph]: Update KuzuQAChain and docs (#21218) 2024-05-13 17:17:14 -07:00
sparql.py multiple: langchain 0.2 in master (#21191) 2024-05-08 16:46:52 -04:00