Commit f1eaa9b626 in https://github.com/hwchase17/langchain
Motivation: when dealing with a long context and a large number of relevant documents, it seems we should avoid using the out-of-the-box score ordering from vector stores. See: https://arxiv.org/pdf/2306.01150.pdf

So I added an additional parameter that lets you reorder the retrieved documents to work around this performance degradation. The reordering respects the original search scores but places the least relevant documents in the middle of the context.

Extract from the paper (one image speaks 1000 tokens):

![image](https://github.com/hwchase17/langchain/assets/1821407/fafe4843-6e18-4fa6-9416-50cc1d32e811)

This degradation seems to be common across the different architectures, so I think we need a good, generic way to implement this reordering and run some tests on our already running retrievers. My approach may not be the best one from an architecture point of view; happy to have a discussion about that. For me this was the best place to introduce the change and start retesting the different implementations.

@rlancemartin, @eyurtsev

---------

Co-authored-by: Lance Martin <lance@langchain.dev>
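For illustration, here is a minimal sketch of the kind of reordering described above, not the exact code in this PR: it assumes the retriever already returns documents sorted from most to least relevant, and the function name and list-of-strings signature are placeholders, not LangChain's API.

```python
# Sketch: "lost in the middle" reordering.
# Assumption: `docs` is sorted most-relevant first by the vector store.
from typing import List


def reorder_for_long_context(docs: List[str]) -> List[str]:
    """Put the most relevant documents at the ends of the context and the
    least relevant ones in the middle, where attention degrades most."""
    front: List[str] = []
    back: List[str] = []
    for i, doc in enumerate(docs):
        if i % 2 == 0:
            front.append(doc)   # ranks 1, 3, 5, ... fill the head of the context
        else:
            back.append(doc)    # ranks 2, 4, 6, ... fill the tail of the context
    # Reverse the back half so relevance rises again toward the end.
    return front + back[::-1]


# Example: ["d1", "d2", "d3", "d4", "d5"] (d1 most relevant)
# becomes  ["d1", "d3", "d5", "d4", "d2"]: the best documents sit at both ends,
# the weakest one lands in the middle.
print(reorder_for_long_context(["d1", "d2", "d3", "d4", "d5"]))
```

The design choice is simply to preserve the original relevance ranking while alternating documents between the start and the end of the prompt, so no score information is discarded.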
- amazon_kendra_retriever.ipynb
- arxiv.ipynb
- azure_cognitive_search.ipynb
- bm25.ipynb
- chaindesk.ipynb
- chatgpt-plugin.ipynb
- cohere-reranker.ipynb
- docarray_retriever.ipynb
- elastic_search_bm25.ipynb
- knn.ipynb
- merger_retriever.ipynb
- metal.ipynb
- pinecone_hybrid_search.ipynb
- pubmed.ipynb
- svm.ipynb
- tf_idf.ipynb
- vespa.ipynb
- weaviate-hybrid.ipynb
- wikipedia.ipynb
- zep_memorystore.ipynb