Forked from Archives/langchain

Commit aaac7071a3:
Improvements to Deep Lake Vector Store:

- Much faster view loading of embeddings after filters with `fetch_chunks=True`
- 2x faster ingestion
- Use np.float32 for embeddings to save 2x storage, and LZ4 compression for text and metadata storage (saves up to 4x storage for text data)
- User-defined functions as filters (see the sketch below)

Docs:

- Added a full retriever example for analyzing Twitter's the-algorithm source code with GPT-4
- Added a use case for code analysis (please let us know your thoughts on how we can improve it)

Co-authored-by: Davit Buniatyan <d@activeloop.ai>
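The list above is easier to place next to a usage sketch of the Deep Lake vector store in LangChain. The snippet below is a hedged illustration, not code from this commit: the dataset path, example texts, metadata layout, and the exact form of the `filter` argument (a metadata dict here, with a callable also possible per the "user-defined functions as filters" item) are assumptions; consult the Deep Lake vector store documentation for the authoritative signatures.

```python
# Hedged usage sketch of the Deep Lake vector store wrapper in LangChain.
# Assumptions (not verified against this commit): an OpenAI API key is set,
# `dataset_path` points at a local Deep Lake dataset, and `similarity_search`
# accepts a metadata-dict `filter` as shown.
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores import DeepLake

embeddings = OpenAIEmbeddings()

# Ingestion: the commit reports ~2x faster ingestion, float32 embedding
# storage, and LZ4-compressed text/metadata, all handled inside the store.
db = DeepLake.from_texts(
    texts=[
        "def add(a, b): return a + b",
        "def ranker(tweets): return sorted(tweets, key=score)",
    ],
    metadatas=[{"source": "utils.py"}, {"source": "ranking.py"}],
    embedding=embeddings,
    dataset_path="./deeplake_example",  # assumed local path for illustration
)

# Retrieval with a metadata filter. Per the commit, a user-defined function can
# also act as the filter, and view loading after filtering is faster with
# `fetch_chunks=True` at the Deep Lake level.
docs = db.similarity_search("tweet ranking code", k=2, filter={"source": "ranking.py"})
for doc in docs:
    print(doc.metadata["source"], doc.page_content[:60])
```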
Directory contents:
- _static
- ecosystem
- getting_started
- modules
- reference
- tracing
- use_cases
- conf.py
- deployments.md
- ecosystem.rst
- gallery.rst
- glossary.md
- index.rst
- make.bat
- Makefile
- model_laboratory.ipynb
- reference.rst
- requirements.txt
- tracing.md