# Readme tests (draft)

## Integration Tests

### Prepare
This repository contains functional tests for several search engines and databases. The tests aim to verify the correct behavior of the engines and databases according to their specifications and requirements.
To run some integration tests, such as the tests located in `tests/integration_tests/vectorstores/`, you will need to install the following software:
- Docker
- Python 3.8.1 or later
We have an optional group `test_integration` in the `pyproject.toml` file. This group should contain dependencies for the integration tests and can be installed using the command:

```bash
poetry install --with test_integration
```
Any new dependencies should be added by running:

```bash
# add package and install it after adding:
poetry add tiktoken@latest --group "test_integration" && poetry install --with test_integration
```
Before running any tests, you should start a specific Docker container that has all the necessary dependencies installed. For instance, we use the `elasticsearch.yml` compose file for `test_elasticsearch.py`:

```bash
cd tests/integration_tests/vectorstores/docker-compose
docker-compose -f elasticsearch.yml up
```
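To confirm the service is reachable before running the tests, and to clean up afterwards, a quick check like the one below can help. This is only a sketch: it assumes the compose file publishes Elasticsearch on the default local port 9200, so adjust it if `elasticsearch.yml` maps a different port.

```bash
# Assumes Elasticsearch is published on the default port 9200 (check elasticsearch.yml)
curl -s http://localhost:9200

# Stop the containers when you are done (add -v to also remove volumes)
docker-compose -f elasticsearch.yml down
```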
Prepare environment variables for local testing:

- copy `tests/.env.example` to `tests/.env`
- set variables in the `tests/.env` file, e.g. `OPENAI_API_KEY`
Additionally, note that some integration tests may require certain environment variables to be set, such as `OPENAI_API_KEY`. Be sure to set any required environment variables before running the tests to ensure they run correctly.
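As a minimal sketch of that setup (the key value below is a placeholder, not a real credential):

```bash
# Copy the example file and fill in values locally; keep real keys out of version control
cp tests/.env.example tests/.env

# Alternatively, export required variables for the current shell session
export OPENAI_API_KEY="<your-api-key>"   # placeholder value
```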
### Recording HTTP interactions with pytest-vcr
Some of the integration tests in this repository involve making HTTP requests to external services. To prevent these requests from being made every time the tests are run, we use pytest-vcr to record and replay HTTP interactions.
When running tests in a CI/CD pipeline, you may not want to modify the existing cassettes. You can use the `--vcr-record=none` command-line option to disable recording new cassettes. Here's an example:

```bash
pytest --log-cli-level=10 tests/integration_tests/vectorstores/test_pinecone.py --vcr-record=none
pytest tests/integration_tests/vectorstores/test_elasticsearch.py --vcr-record=none
```
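Conversely, when an external API changes and a cassette goes stale, you can re-record it locally. The record modes below are the standard VCR.py modes that pytest-vcr passes through via `--vcr-record`; treat this as a sketch and check the pytest-vcr documentation for the exact behavior you want.

```bash
# Re-record every interaction for this test module (overwrites existing cassettes)
pytest tests/integration_tests/vectorstores/test_elasticsearch.py --vcr-record=all

# Record only interactions that are missing from the existing cassettes
pytest tests/integration_tests/vectorstores/test_elasticsearch.py --vcr-record=new_episodes
```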
Run some tests with coverage:

```bash
pytest tests/integration_tests/vectorstores/test_elasticsearch.py --cov=langchain --cov-report=html
start "" htmlcov/index.html || open htmlcov/index.html
```