Refactored `requests.py`. The same as
https://github.com/langchain-ai/langchain/pull/7961, #8098, and #8099.
`requests.py` is in the root code folder. This creates the
`langchain.requests: Requests` group on the API Reference navigation
ToC, at the same level as `Chains` and `Agents`, which is incorrect.
Refactoring:
- copied `requests.py` content into `utils/requests.py`
- added the backwards-compatibility ref in the original `requests.py` (see the sketch below)
- updated imports to the moved requests objects
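For reference, a minimal sketch (my own, not the PR's actual file contents) of what such a backwards-compatibility shim can look like; the re-exported names are assumptions based on the module's public classes:

```python
# langchain/requests.py (hypothetical shim): re-export the moved objects so
# that old imports such as `from langchain.requests import TextRequestsWrapper`
# keep working after the move to utils/requests.py.
from langchain.utils.requests import Requests, TextRequestsWrapper

__all__ = ["Requests", "TextRequestsWrapper"]
```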
@hwchase17, @baskaryan
Addresses #7578. `run()` can return dictionaries, Pydantic objects or
strings, so the type hints should reflect that. See the chain from
`create_structured_output_chain` for an example of a non-string return
type from `run()`.
I've updated the `BaseLLMChain` return type hint from `str` to `Any`,
although the differences between `run()` and `__call__()` now seem less
clear.
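For illustration, a minimal sketch (assuming an OpenAI chat model and a
Pydantic schema; not code from this PR) of a chain whose `run()` returns a
Pydantic object rather than a string:

```python
from pydantic import BaseModel

from langchain.chains.openai_functions import create_structured_output_chain
from langchain.chat_models import ChatOpenAI
from langchain.prompts import ChatPromptTemplate


class Person(BaseModel):
    name: str
    age: int


prompt = ChatPromptTemplate.from_template("Describe a fictional person: {hint}")
chain = create_structured_output_chain(Person, ChatOpenAI(), prompt)
result = chain.run(hint="a jazz musician")  # `result` is a Person, not a str
```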
CC: @baskaryan
Co-authored-by: Bagatur <baskaryan@gmail.com>
- Until now, hybrid search was limited to modules requiring external
services, such as Weaviate/Pinecone Hybrid Search. However, I have
developed a hybrid retriever that can merge a list of retrievers using
the [Reciprocal Rank
Fusion](https://plg.uwaterloo.ca/~gvcormac/cormacksigir09-rrf.pdf)
algorithm. This new approach, similar to Weaviate hybrid search, does
not require the initialization of any external service (a minimal
sketch of the fusion step is shown below).
- Dependencies: No
- Twitter handle: dayuanjian21687
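A minimal sketch of the Reciprocal Rank Fusion step (illustrative only;
`k=60` is the constant from the cited paper, and the function name is not
the retriever's actual API):

```python
def reciprocal_rank_fusion(ranked_lists, k=60):
    """Merge several ranked result lists into one, scoring each item 1/(k + rank)."""
    scores = {}
    for ranked in ranked_lists:
        for rank, item in enumerate(ranked, start=1):
            scores[item] = scores.get(item, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)


# Example: results from two retrievers for the same query.
print(reciprocal_rank_fusion([["doc_a", "doc_b", "doc_c"], ["doc_b", "doc_a", "doc_d"]]))
```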
---------
Co-authored-by: Bagatur <baskaryan@gmail.com>
- Description: Changed the "SELECT" and "UPDATE" intent check from "=" to
"in".
- Issue: Based on my own testing, most LLMs (StarCoder, NeoGPT3, etc.)
don't return a single-word response ("SELECT" / "UPDATE"); with this
modification, we can accomplish the same output without curated prompt
engineering (an illustrative sketch follows the list below).
- Dependencies: None
- Tag maintainer: @baskaryan
- Twitter handle: @aditya_0290
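An illustrative sketch of the change (not the exact diff): check whether
the intent keyword appears anywhere in the model's reply instead of
requiring an exact single-word match:

```python
response = "Sure! You should use a SELECT statement for this."

# before: only a bare single-word reply would match
is_select_old = response == "SELECT"          # False

# after: a longer, chatty reply still matches
is_select_new = "SELECT" in response.upper()  # True

print(is_select_old, is_select_new)
```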
Thank you for maintaining this library; keep up the good work.
---------
Co-authored-by: Bagatur <baskaryan@gmail.com>
Stop sequences are useful if you are doing long-running completions and
need to stop early rather than running for the full max_length: not
only does this save inference cost on Replicate, it is also much faster
if you are going to truncate the output later anyway.
Other LLMs (e.g. OpenAI) support stop sequences natively, but I didn't
see this for Replicate, so I am adding it via their prediction cancel
method (a usage sketch follows below).
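A hedged usage sketch (the model id is a placeholder; `stop` is passed at
call time the way other LangChain LLMs accept stop sequences):

```python
from langchain.llms import Replicate

# Placeholder model id; requires REPLICATE_API_TOKEN to be set.
llm = Replicate(model="<owner>/<model>:<version>")
text = llm(
    "Write a SQL query and then explain it.\nQuery:",
    stop=["\nExplanation:"],  # cancel the prediction once this sequence appears
)
```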
Housekeeping: I ran `make format` and `make lint`, no issues reported in
the files I touched.
I did update the replicate integration test and ran `poetry run pytest
tests/integration_tests/llms/test_replicate.py` successfully.
Finally, my Twitter handle is @tjaffri
(https://twitter.com/tjaffri) for feature announcement tweets... or if
you could please tag @docugami (https://twitter.com/docugami), we would
really appreciate that :-)
Co-authored-by: Taqi Jaffri <tjaffri@docugami.com>
@rlancemartin
The modification includes:
* etherscanLoader
* test_etherscan
* document ipynb
I have run the test, lint, format, and spell checks. I did encounter a
linting error on the ipynb that I am not sure how to address:
```
docs/extras/modules/data_connection/document_loaders/integrations/Etherscan.ipynb:55: error: Name "null" is not defined [name-defined]
docs/extras/modules/data_connection/document_loaders/integrations/Etherscan.ipynb:76: error: Name "null" is not defined [name-defined]
Found 2 errors in 1 file (checked 1 source file)
```
- Description: The Etherscan loader uses the Etherscan API to load
transaction histories for specific accounts on Ethereum Mainnet.
- No dependency is introduced by this PR.
- Twitter handle: glazecl
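A hypothetical usage sketch (the constructor arguments shown here are
assumptions, not the confirmed API):

```python
from langchain.document_loaders import EtherscanLoader

loader = EtherscanLoader(
    account_address="0x9dd134d14d1e65f84b706d6f205cd5b1cd03a46b",  # example account
    api_key="<ETHERSCAN_API_KEY>",
    filter="normal_transaction",
)
docs = loader.load()
```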
---------
Co-authored-by: Bagatur <baskaryan@gmail.com>
The ChatGLM LLM integration will by default accumulate conversation
history (with_history=True) and send it to the ChatGLM backend API, which
is not expected in most cases. This PR sets with_history=False by
default; users should explicitly set llm.with_history=True to turn this
feature on (a short sketch follows below). Related PRs: #8048, #7774
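A short sketch of the new default (attribute names follow the PR text;
the endpoint URL is a placeholder):

```python
from langchain.llms import ChatGLM

llm = ChatGLM(endpoint_url="http://127.0.0.1:8000")  # with_history is now False by default
llm.with_history = True  # opt back in to sending accumulated conversation history
```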
---------
Co-authored-by: mlot <limpo2000@gmail.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
My team recently faced an issue while using MSSQL and passing a schema
name.
We noticed that "SET search_path TO {self.schema}" was being issued for
us, which is not a valid MSSQL query and is specific to the PostgreSQL
dialect.
We were able to run it locally after this fix.
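A hedged sketch of the kind of dialect guard the fix implies (names and
structure are illustrative, not the actual patch):

```python
from typing import Optional


def schema_statement(dialect: str, schema: str) -> Optional[str]:
    """Return the schema-selection statement for a dialect, or None if not needed."""
    if dialect == "postgresql":
        return f"SET search_path TO {schema}"
    return None  # MSSQL (and other dialects) handle schemas differently


print(schema_statement("mssql", "dbo"))          # None
print(schema_statement("postgresql", "public"))  # SET search_path TO public
```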
---------
Co-authored-by: Bagatur <baskaryan@gmail.com>
Refactored `example_generator.py`. The same as #7961.
`example_generator.py` is in the root code folder. This creates the
`langchain.example_generator: Example Generator` group on the API
Reference navigation ToC, at the same level as `Chains` and `Agents`,
which is not correct.
Refactoring:
- moved `example_generator.py` content into
`chains/example_generator.py` (not into `utils`, because
`example_generator` depends on other LangChain classes; moving it into
`utilities/` doesn't work either)
- added the backwards compatibility ref in the original
`example_generator.py`
@hwchase17
Refactored `input.py`. The same as
https://github.com/langchain-ai/langchain/pull/7961, #8098, and #8099.
`input.py` is in the root code folder. This creates the `langchain.input:
Input` group on the API Reference navigation ToC, at the same level as
`Chains` and `Agents`, which is incorrect.
Refactoring:
- copied the input.py file into utils/input.py
- added the backwards-compatibility ref in the original input.py
- changed several imports to the new ref
@hwchase17, @baskaryan
Description:
This PR adds embeddings for LocalAI
(https://github.com/go-skynet/LocalAI), a self-hosted OpenAI drop-in
replacement. As LocalAI can re-use OpenAI clients, it mostly follows
the lines of the OpenAI embeddings; however, when embedding documents it
just sends strings instead of tokens, since sending tokens is
best-effort depending on the model being used in LocalAI. Sending tokens
is also tricky, as token IDs can mismatch with the model, so it's safer
to just send strings in this case.
Partly related to: https://github.com/hwchase17/langchain/issues/5256
Dependencies: No new dependencies
Twitter: @mudler_it
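A hedged usage sketch (the endpoint URL and model name are placeholders
for a locally running LocalAI instance):

```python
from langchain.embeddings import LocalAIEmbeddings

embeddings = LocalAIEmbeddings(
    openai_api_base="http://localhost:8080",  # LocalAI endpoint
    model="text-embedding-ada-002",
    openai_api_key="not-needed",  # LocalAI does not require a real key
)
vectors = embeddings.embed_documents(["LocalAI is a self-hosted OpenAI drop-in."])
```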
---------
Signed-off-by: mudler <mudler@localai.io>
Co-authored-by: Bagatur <baskaryan@gmail.com>
**PR Description:**
This pull request introduces several enhancements and new features to
the `CubeSemanticLoader`. The changes include the following:
1. Added imports for the `json` and `time` modules.
2. Added new constructor parameters: `load_dimension_values`,
`dimension_values_limit`, `dimension_values_max_retries`, and
`dimension_values_retry_delay`.
3. Updated the class documentation with descriptions for the new
constructor parameters.
4. Added a new private method `_get_dimension_values()` to retrieve
dimension values from Cube's REST API.
5. Modified the `load()` method to load dimension values for string
dimensions if `load_dimension_values` is set to `True`.
6. Updated the API endpoint in the `load()` method from the base URL to
the metadata endpoint.
7. Refactored the code to retrieve metadata from the response JSON.
8. Added the `column_member_type` field to the metadata dictionary to
indicate if a column is a measure or a dimension.
9. Added the `column_values` field to the metadata dictionary to store
the dimension values retrieved from Cube's API.
10. Modified the `page_content` construction to include the column title
and description instead of the table name, column name, data type,
title, and description.
These changes improve the functionality and flexibility of the
`CubeSemanticLoader` class by allowing the loading of dimension values
and providing more detailed metadata for each document.
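A hedged usage sketch combining the new constructor parameters listed
above (the Cube URL/token are placeholders, and the other argument names
are assumptions):

```python
from langchain.document_loaders import CubeSemanticLoader

loader = CubeSemanticLoader(
    cube_api_url="https://your-cube-deployment/cubejs-api/v1/meta",
    cube_api_token="<CUBE_API_TOKEN>",
    load_dimension_values=True,
    dimension_values_limit=10_000,
    dimension_values_max_retries=10,
    dimension_values_retry_delay=3,
)
docs = loader.load()
```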
---------
Co-authored-by: Bagatur <baskaryan@gmail.com>
Refactored `formatting.py`. The same as
https://github.com/langchain-ai/langchain/pull/7961, #8098, and #8099.
`formatting.py` is in the root code folder. This creates the
`langchain.formatting: Formatting` group on the API Reference navigation
ToC, at the same level as `Chains` and `Agents`, which is incorrect.
Refactoring:
- moved `formatting.py` content into `utils/formatting.py`
- did not add a backwards-compatibility ref in the original
`formatting.py`; it seems unnecessary
---------
Co-authored-by: Bagatur <baskaryan@gmail.com>
- Description: In `llms/__init__.py`, the key name for the MLflow AI
Gateway is wrong; it should be `mlflow-ai-gateway`.
- Issue: NA
- Dependencies: NA
- Tag maintainer: @hwchase17, @baskaryan
- Twitter handle: na
Without this fix, when we run the code for the MLflow AI Gateway, we get
the error below:
`ValueError: Loading mlflow-ai-gateway LLM not supported`
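A hedged sketch of why the key matters (the registry name below is as I
recall it for this version of the codebase): loading looks the serialized
`_type` up in the LLM registry, so the key must be `mlflow-ai-gateway`:

```python
from langchain.llms import type_to_cls_dict

# Before this fix the registry used "mlflowaigateway", so lookups of the
# serialized type name failed with the ValueError shown above.
assert "mlflow-ai-gateway" in type_to_cls_dict
```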
---------
Co-authored-by: rajib76 <rajib76@yahoo.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
Fixes an issue with the GitHub tool where the API returned special
objects but the tool was expecting dictionaries.
Also added proper docstrings to the GitHubAPIWrapper methods and a (very
basic) integration test.
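An illustrative sketch of the mismatch (not the actual wrapper code):
PyGithub returns `Issue` objects, so fields have to be read as attributes
and converted to dictionaries before the tool formats them:

```python
from github import Github  # PyGithub

repo = Github("<GITHUB_TOKEN>").get_repo("langchain-ai/langchain")
issues = [
    {"title": issue.title, "number": issue.number}
    for issue in repo.get_issues(state="open")
]
```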
Maintainer responsibilities:
- Agents / Tools / Toolkits: @hinthornw
---------
Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
# What
- Add a FAISS vector search test for score threshold (a usage sketch of
such a search follows the list below)
- Fix the failing FAISS vector search test; the filtering with a list
value was wrong.
- Issue: None
- Dependencies: None
- Tag maintainer: @rlancemartin, @eyurtsev
- Twitter handle: @MlopsJ
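A hedged sketch of a score-threshold search like the one the new test
covers (the embedding model, texts, and threshold are illustrative, not
the test's actual fixtures):

```python
from langchain.embeddings import FakeEmbeddings
from langchain.vectorstores import FAISS

db = FAISS.from_texts(["foo", "bar", "baz"], FakeEmbeddings(size=8))
retriever = db.as_retriever(
    search_type="similarity_score_threshold",
    search_kwargs={"score_threshold": 0.5},
)
docs = retriever.get_relevant_documents("foo")
```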
Codespaces and the devcontainer were broken by the [repo
restructure](https://github.com/langchain-ai/langchain/discussions/8043).
- Description: Add libs/langchain to container so it can be built
without error.
- Issue: -
- Dependencies: -
- Tag maintainer: @hwchase17 @baskaryan
- Twitter handle: @finnless
The failed build log says:
```
#10 [langchain-dev-dependencies 2/2] RUN poetry install --no-interaction --no-ansi --with dev,test,docs
#10 sha256:e850ee99fc966158bfd2d85e82b7c57244f47ecbb1462e75bd83b981a56a1929
2023-07-23 23:30:33.692Z: #10 0.827
#10 0.827 Directory libs/langchain does not exist
2023-07-23 23:30:33.738Z: #10 ERROR: executor failed running [/bin/sh -c poetry install --no-interaction --no-ansi --with dev,test,docs]: exit code: 1
```
The new pyproject.toml imports from libs/langchain:
77bf75c236/pyproject.toml (L14-L16)
But libs/langchain is never added to the dev.Dockerfile:
77bf75c236/libs/langchain/dev.Dockerfile (L37-L39)
This bugfix PR adds kwargs support to Baseten model invocations so that
e.g. the following script works properly:
```python
from langchain.chains import LLMChain
from langchain.llms import Baseten
from langchain.memory import ConversationBufferWindowMemory

# `prompt` is a PromptTemplate defined elsewhere
chatgpt_chain = LLMChain(
    llm=Baseten(model="MODEL_ID"),
    prompt=prompt,
    verbose=False,
    memory=ConversationBufferWindowMemory(k=2),
    llm_kwargs={"max_length": 4096},
)
```