Commit Graph

3137 Commits

Author SHA1 Message Date
morgana
722aae4fd1
community: add delete method to rocksetdb vectorstore to support recordmanager (#17030)
- **Description:** This adds a delete method so that rocksetdb can be
used with `RecordManager`.
  - **Issue:** N/A
  - **Dependencies:** N/A
  - **Twitter handle:** `@_morgan_adams_`
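A minimal sketch of what this enables (an indexing round trip where `index` can clean up stale records because the vectorstore now implements `delete`); the client setup, collection name, and field keys below are placeholders, not values from this PR:

```python
from langchain.indexes import SQLRecordManager, index
from langchain_community.vectorstores import Rockset
from langchain_core.documents import Document
from langchain_openai import OpenAIEmbeddings
from rockset import RocksetClient

client = RocksetClient(api_key="YOUR_API_KEY")  # placeholder client setup
vectorstore = Rockset(
    client=client,
    embeddings=OpenAIEmbeddings(),
    collection_name="langchain_demo",
    text_key="text",
    embedding_key="embedding",
)

record_manager = SQLRecordManager(
    "rockset/langchain_demo", db_url="sqlite:///record_manager_cache.sql"
)
record_manager.create_schema()

# Incremental cleanup relies on the vectorstore implementing `delete`
index(
    [Document(page_content="hello", metadata={"source": "hello.txt"})],
    record_manager,
    vectorstore,
    cleanup="incremental",
    source_id_key="source",
)
```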

---------

Co-authored-by: Rockset API Bot <admin@rockset.io>
2024-02-12 19:50:20 -08:00
Max Jakob
604e117411
docs: another auth method for ElasticsearchStore (#16831)
Users can also use their own Elasticsearch client object to configure
the connection.
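For example (a sketch; the endpoint, API key, and index name are placeholders):

```python
from elasticsearch import Elasticsearch
from langchain_community.vectorstores import ElasticsearchStore
from langchain_openai import OpenAIEmbeddings

# Configure the client yourself with whatever auth you need (API key, basic auth, certs, ...)
es_client = Elasticsearch("https://localhost:9200", api_key="YOUR_API_KEY")

vectorstore = ElasticsearchStore(
    es_connection=es_client,  # pass the pre-configured client instead of URL/credentials
    index_name="langchain_demo",
    embedding=OpenAIEmbeddings(),
)
```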
2024-02-12 19:29:54 -08:00
Zeeland
4986e7227e
docs: rm unnecessary imports (#16876)
- **Description:** Optimize the memory usage documentation
  - **Issue:** It was missing some of the install guide
2024-02-12 19:25:54 -08:00
Leonid Ganeline
b87d6f9f48
docs: Redis page update (#16906)
- Reordered sections
- Applied consistent formatting
- Fixed headers (there were 2 H1 headers; this breaks the ToC)
- Added `Settings` header and moved all related sections under it
2024-02-12 18:23:35 -08:00
Naveenkhasyap
841e5f514e
docs: Updated doc for integrations/chat/anthropic_functions #15664 (#17226)
Description: Updated doc for integrations/chat/anthropic_functions with
new functions: invoke. Changed structure of the document to match the
required one.
Issue: https://github.com/langchain-ai/langchain/issues/15664
Dependencies: None
Twitter handle: None

---------

Co-authored-by: NaveenMaltesh <naveen@onmeta.in>
2024-02-12 17:09:38 -08:00
Min-Seong Lee
ce9a68791b
docs: fix typo in question_answering quickstart.ipynb (#17393)
- **Description:** typo in docs (facillitate -> facilitate)
  - **Issue:** Typo
  - **Dependencies:** Nope
  - **Twitter handle:** None
2024-02-12 16:33:47 -08:00
Pennlaine
e1bc623f8f
docs: Updated docs for sitemap loader to use correct URL (#17395)
- **Description:** 
Updated URL for sitemap loader from
"https://langchain.readthedocs.io/sitemap.xml" to
"https://api.python.langchain.com/sitemap.xml"
  - **Issue:** Fixes #17236
2024-02-12 16:20:32 -08:00
Ikko Eltociear Ashimine
b48fa8b695
docs: fix typo in vikingdb.ipynb (#17429)
retreival -> retrieval
2024-02-12 15:51:12 -08:00
Abhijeeth Padarthi
584b647b96
community[minor]: AWS Athena Document Loader (#15625)
- **Description:** Adds the document loader for [AWS
Athena](https://aws.amazon.com/athena/), a serverless and interactive
analytics service.
  - **Dependencies:** Added boto3 as a dependency
2024-02-12 12:53:40 -08:00
ByeongUk Choi
ac970c9497
Update Docs for TFIDFRetriever Import Path (#17322)
This PR updates the `TF-IDF.ipynb` documentation to reflect the new
import path for TFIDFRetriever in the langchain-community package. The
previous path, `from langchain.retrievers import TFIDFRetriever`, has
been updated to `from langchain_community.retrievers import
TFIDFRetriever` to align with the latest changes in the langchain
library.
2024-02-11 21:26:08 -08:00
Michael Hunger
1c902ce3d1
tools:docs: update google_search.ipynb - change tool name (#17354)
according to https://youtu.be/rZus0JtRqXE?si=aFo1JTDnu5kSEiEN&t=678 by
@efriis

- **Description:** Seems the requirements for tool names have changed
and spaces are no longer allowed. Changed the tool name from Google
Search to google_search in the notebook
  - **Issue:** n/a
  - **Dependencies:** none
  - **Twitter handle:** @mesirii
2024-02-11 21:25:19 -08:00
jiangzf93
d6a1c88ca7
docs: update documentation for file system tool integration (#17377)
- **Description:** Update the docs for the tool integration module `file
system`
- **Issue:** [For New Contributors: Update Integration Documentation
#15664](https://github.com/langchain-ai/langchain/issues/15664#top)
  - **Dependencies:** N/A
2024-02-11 21:19:40 -08:00
Pennlaine
2384267900
Updated doc for tools/pubmed with new functions: invoke. (#17378)
Updated doc for integrations/chat/anthropic_functions #15664 

  - **Description:**
Adds `pip install` instructions
Update `run` with `invoke`

  - **Issue:** 
Fixes #15664
2024-02-11 21:19:31 -08:00
Erick Friis
3a2eb6e12b
infra: add print rule to ruff (#16221)
Added noqa for existing prints. These can be removed gradually; the rule will
prevent more from being introduced.
2024-02-09 16:13:30 -08:00
Quang Hoa
54c1fb3f25
community[patch]: Make some functions work with Milvus (#10695)
**Description**
Make some functions work with Milvus:
1. get_ids: Get primary keys by field in the metadata
2. delete: Delete one or more entities by ids
3. upsert: Update/Insert one or more entities
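A rough usage sketch based on the description above (method signatures are best-effort assumptions, not taken from the final code):

```python
from langchain_community.vectorstores import Milvus
from langchain_core.documents import Document
from langchain_openai import OpenAIEmbeddings

vectorstore = Milvus(embedding_function=OpenAIEmbeddings(), collection_name="demo")

# upsert: update existing entities (matched by ids) or insert new ones
vectorstore.upsert(documents=[Document(page_content="kitty", metadata={"source": "kitty.txt"})])

# delete: remove one or more entities by primary key
vectorstore.delete(ids=["449281835035545843"])
```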

**Issue**
None
**Dependencies**
None
**Tag maintainer:**
@hwchase17 
**Twitter handle:**
None

---------

Co-authored-by: HoaNQ9 <hoanq.1811@gmail.com>
Co-authored-by: Erick Friis <erick@langchain.dev>
2024-02-09 15:21:31 -08:00
Charlie Marsh
24c0bab57b
infra, multiple: Upgrade configuration for Ruff v0.2.0 (#16905)
## Summary

This PR upgrades LangChain's Ruff configuration in preparation for
Ruff's v0.2.0 release. (The changes are compatible with Ruff v0.1.5,
which LangChain uses today.) Specifically, we're now warning when
linter-only options are specified under `[tool.ruff]` instead of
`[tool.ruff.lint]`.

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-02-09 14:28:02 -08:00
Vadim Kudlay
5f9ac6986e
nvidia-ai-endpoints[patch]: model arguments (e.g. temperature) on construction bug (#17290)
- **Issue:** Issue with model argument support (been there for a while
actually):
- Non-specially-handled arguments like temperature don't work when
passed through constructor.
- Such arguments DO work quite well with `bind`, but also do not abide
by field requirements.
- Since initial push, server-side error messages have gotten better and
v0.0.2 raises better exceptions. So maybe it's better to let server-side
handle such issues?
- **Description:**
- Removed ChatNVIDIA's argument fields in favor of
`model_kwargs`/`model_kws` arguments which aggregates constructor kwargs
(from constructor pathway) and merges them with call kwargs (bind
pathway).
- Shuffled a few functions from `_NVIDIAClient` to `ChatNVIDIA` to
streamline construction for future integrations.
- Minor/Optional: Old services didn't have stop support, so client-side
stopping was implemented. Now do both.
- **Any Breaking Changes:** Minor breaking changes if you strongly rely
on chat_model.temperature, etc. This is captured by
chat_model.model_kwargs.

PR passes tests and example notebooks and example testing. Still gonna
chat with some people, so leaving as draft for now.

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-02-09 13:46:02 -08:00
Leonid Ganeline
389b055bd6
docs: Toolkits menu (#16217)
The Integrations `Toolkits` menu was named as [`Agents and
toolkits`](https://python.langchain.com/docs/integrations/toolkits).
This name had a historical reason that no longer applies. Now this
menu is all about community `Toolkits`. There is a separate menu for
[Agents](https://python.langchain.com/docs/modules/agents/). Also, Agents
are officially not part of Integrations (the Community package) but part of the
LangChain package.
2024-02-08 14:52:26 -08:00
Scott Nath
a32798abd7
community: Add you.com utility, update you retriever integration docs (#17014)

- **Description: changes to you.com files** 
    - general cleanup
- adds community/utilities/you.py, moving bulk of code from retriever ->
utility
    - removes `snippet` as endpoint
    - adds `news` as endpoint
    - adds more tests

<s>**Description: update community MAKE file** 
    - adds `integration_tests`
    - adds `coverage`</s>

- **Issue:** [For New Contributors: Update Integration
Documentation](https://github.com/langchain-ai/langchain/issues/15664#issuecomment-1920099868)
- **Dependencies:** n/a
- **Twitter handle:** @scottnath
- **Mastodon handle:** scottnath@mastodon.social

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-02-08 13:47:50 -08:00
joelsprunger
3984f6604f
langchain: adds recursive json splitter (#17144)
- **Description:** This adds a recursive json splitter class to the
existing text_splitters as well as unit tests
- **Issue:** Splitting structured data as regular text can cause issues: if you
have a large nested JSON object and split it as regular text, you may
end up losing the structure of the JSON. To mitigate against this you
can split the nested JSON into large chunks and overlap them, but this
causes unnecessary text processing, and there will still be times when
the nested JSON is so big that the chunks get separated from the parent
keys.

As an example you wouldn't want the following to be split in half:
```shell
{'val0': 'DFWeNdWhapbR',
 'val1': {'val10': 'QdJo',
          'val11': 'FWSDVFHClW',
          'val12': 'bkVnXMMlTiQh',
          'val13': 'tdDMKRrOY',
          'val14': 'zybPALvL',
          'val15': 'JMzGMNH',
          'val16': {'val160': 'qLuLKusFw',
                    'val161': 'DGuotLh',
                    'val162': 'KztlcSBropT',
-----------------------------------------------------------------------split-----
                    'val163': 'YlHHDrN',
                    'val164': 'CtzsxlGBZKf',
                    'val165': 'bXzhcrWLmBFp',
                    'val166': 'zZAqC',
                    'val167': 'ZtyWno',
                    'val168': 'nQQZRsLnaBhb',
                    'val169': 'gSpMbJwA'},
          'val17': 'JhgiyF',
          'val18': 'aJaqjUSFFrI',
          'val19': 'glqNSvoyxdg'}}
```
Any LLM processing the second chunk of text may not have the context of
val1 and val16, reducing accuracy. Embeddings will also lack this
context, which makes retrieval less accurate.

Instead you want it to be split into chunks that retain the json
structure.
```shell
{'val0': 'DFWeNdWhapbR',
 'val1': {'val10': 'QdJo',
          'val11': 'FWSDVFHClW',
          'val12': 'bkVnXMMlTiQh',
          'val13': 'tdDMKRrOY',
          'val14': 'zybPALvL',
          'val15': 'JMzGMNH',
          'val16': {'val160': 'qLuLKusFw',
                    'val161': 'DGuotLh',
                    'val162': 'KztlcSBropT',
                    'val163': 'YlHHDrN',
                    'val164': 'CtzsxlGBZKf'}}}
```
and
```shell
{'val1':{'val16':{
                    'val165': 'bXzhcrWLmBFp',
                    'val166': 'zZAqC',
                    'val167': 'ZtyWno',
                    'val168': 'nQQZRsLnaBhb',
                    'val169': 'gSpMbJwA'},
          'val17': 'JhgiyF',
          'val18': 'aJaqjUSFFrI',
          'val19': 'glqNSvoyxdg'}}
```
This recursive JSON text splitter does this. Values that contain a list
can be converted to a dict first by using split(... convert_lists=True);
otherwise long lists will not be split and you may end up with chunks
larger than the max chunk size.

In my testing, large JSON objects could be split into small chunks with:
- increased question-answering accuracy
- the ability to split into smaller chunks, meaning retrieval queries can
use fewer tokens


- **Dependencies:** json import added to text_splitter.py, and random
added to the unit test
  - **Twitter handle:** @joelsprunger
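A short sketch of the splitter as described above (class location and parameter names are assumptions based on this description, not the merged code):

```python
from langchain.text_splitter import RecursiveJsonSplitter

json_data = {
    "val0": "DFWeNdWhapbR",
    "val1": {"val10": "QdJo", "val16": {"val160": "qLuLKusFw", "val169": "gSpMbJwA"}},
}

splitter = RecursiveJsonSplitter(max_chunk_size=300)

# Each chunk is itself valid JSON that keeps the path back to the root keys
json_chunks = splitter.split_json(json_data=json_data, convert_lists=True)

# Or produce Document objects directly for downstream indexing
docs = splitter.create_documents(texts=[json_data])
```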

---------

Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
2024-02-08 13:45:34 -08:00
Schalkje
f0ada1a396
docs: Update quickstart.mdx - Fix 422 error in example with LangServe client code (#17163)
**Description:** Fix 422 error in example with LangServe client code

httpx.HTTPStatusError: Client error '422 Unprocessable Entity' for url
'http://localhost:8000/agent/invoke'
2024-02-08 13:35:39 -08:00
Kartheek Yakkala
3a22157d92
docs: Added LCEL for alibabacloud and anyscale (#17252)
---------

Co-authored-by: KARTHEEK YAKKALA <kartheekyakkala@KARTHEEKs-Air.lan>
Co-authored-by: KARTHEEK YAKKALA <kartheekyakkala.se@gmail.com>
2024-02-08 13:18:09 -08:00
Neli Hateva
9bb5157a3d
langchain[patch], community[patch]: Fixes in the Ontotext GraphDB Graph and QA Chain (#17239)
- **Description:** Fixes in the Ontotext GraphDB Graph and QA Chain
related to the error handling in case of invalid SPARQL queries, for
which `prepareQuery` doesn't throw an exception, but the server returns
400 and the query is indeed invalid
  - **Issue:** N/A
  - **Dependencies:** N/A
  - **Twitter handle:** @OntotextGraphDB
2024-02-08 12:05:43 -08:00
Jorge Campo
88609565a3
docs: Fix typo in github.ipynb (#17259)
'agiven' -> 'a given'
2024-02-08 12:03:00 -08:00
Bagatur
00a09e1b71
docs: use PromptTemplate.from_template (#17218)
Ran
```python
import glob
import re

def update_prompt(x):
    return re.sub(
        r"(?P<start>\b)PromptTemplate\(template=(?P<template>.*), input_variables=(?:.*)\)",
        r"\g<start>PromptTemplate.from_template(\g<template>)",
        x,
    )


for fn in glob.glob("docs/**/*", recursive=True):
    try:
        content = open(fn).readlines()
    except Exception:
        continue
    content = [update_prompt(l) for l in content]
    with open(fn, "w") as f:
        f.write("".join(content))
```
2024-02-07 19:52:42 -08:00
sana-google
7f55c95790
docs: add missing link to Quickstart (#17085)
- **Description:** Added missing link for Quickstart in Model IO
documentation,
  - **Issue:** N/A,
  - **Dependencies:** N/A,
  - **Twitter handle:** N/A


Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2024-02-07 22:26:10 -05:00
Eugene Yurtsev
780e84ae79
community[minor]: SQLDatabase Add fetch mode cursor, query parameters, query by selectable, expose execution options, and documentation (#17191)
- **Description:** Improve `SQLDatabase` adapter component to promote
code re-use, see
[suggestion](https://github.com/langchain-ai/langchain/pull/16246#pullrequestreview-1846590962).
  - **Needed by:** GH-16246
  - **Addressed to:** @baskaryan, @cbornet 

## Details
- Add `cursor` fetch mode
- Accept SQL query parameters
- Accept both `str` and SQLAlchemy selectables as query expression
- Expose `execution_options`
- Documentation page (notebook) about `SQLDatabase` [^1]
See [About
SQLDatabase](https://github.com/langchain-ai/langchain/blob/c1c7b763/docs/docs/integrations/tools/sql_database.ipynb).

[^1]: Apparently there hasn't been any yet?
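A quick sketch of the new surface (the database URI and query are placeholders):

```python
from langchain_community.utilities import SQLDatabase

db = SQLDatabase.from_uri("sqlite:///Chinook.db")

# Bound query parameters plus the new "cursor" fetch mode, which returns a SQLAlchemy result
result = db.run(
    "SELECT * FROM Artist WHERE Name = :name;",
    fetch="cursor",
    parameters={"name": "AC/DC"},
)
for row in result.mappings():
    print(row)
```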

---------

Co-authored-by: Andreas Motl <andreas.motl@crate.io>
2024-02-07 22:23:43 -05:00
Leonid Ganeline
d903fa313e
docs: titles fix (#17206)
Several notebooks have Title != file name. That results in corrupted
sorting in Navbar (ToC).
- Fixed titles and file names.
- Changed text formats to the consistent form
- Redirected renamed files in the `Vercel.json`
2024-02-07 22:09:34 -05:00
Bagatur
aeb6b38901
docs: cleanup fleet integration (#17214)
Causing search issues
2024-02-07 17:18:48 -08:00
Leonid Ganeline
5ceaf784f3
docs Integrations/Components menu reordered (#17151)
This PR is opinionated.
- Moved `Embedding models` item to place after `LLMs` and `Chat model`,
so all items with models are together.
- Renamed `Text embedding models` to `Embedding models`. Now, it is
shorter and easier to read. `Text` is obvious from context. The same as
the `Text LLMs` vs. `LLMs` (we also have multi-modal LLMs).
2024-02-06 20:33:41 -08:00
Leonid Ganeline
0af0fc5d25
docs integrations/providers nav fix (#17148)
Issue: the `Providers` page is presented both as the index page (on the
`Providers` item) and as the `Providers/Providers` item. The latter
should not be in the menu. See the picture.

![image](https://github.com/langchain-ai/langchain/assets/2256422/6894023f-f13a-4f0d-8fe2-ed5b0ae2bdd2)
This PR fixes this.
2024-02-06 20:33:14 -08:00
Leonid Ganeline
bf55279d39
docs: tutorials update (#17132)
Added the course and the one-pager links
2024-02-06 20:30:30 -08:00
Erick Friis
d397721a34
docs: format (#17143) 2024-02-06 16:32:53 -08:00
Arno Schutijzer
863f96b2e0
docs: fix typo in ollama notebook (#17127)
- **Description:** typo fix in ollama notebook
2024-02-06 16:54:40 -05:00
Leonid Ganeline
42c812a549
API References sorted Partner libs menu (#17130)
The `Partner libs` menu is not sorted. Now it is long enough, and items
should be sorted to simplify a package search.
- Sorted items in the `Partner libs` menu
2024-02-06 16:49:23 -05:00
Junyoung Park
1ed73f1992
community[minor]: Add SelfQueryRetriever support to PGVector (#16991)
- **Description:** Add SelfQueryRetriever support to PGVector
  - **Issue:** -
  - **Dependencies:** -
  - **Twitter handle:** -
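Roughly, this enables the following (connection string and metadata schema are placeholders):

```python
from langchain.chains.query_constructor.base import AttributeInfo
from langchain.retrievers.self_query.base import SelfQueryRetriever
from langchain_community.vectorstores import PGVector
from langchain_openai import ChatOpenAI, OpenAIEmbeddings

vectorstore = PGVector(
    connection_string="postgresql+psycopg2://user:pass@localhost:5432/vectordb",
    embedding_function=OpenAIEmbeddings(),
    collection_name="demo",
)

metadata_field_info = [
    AttributeInfo(name="genre", description="The genre of the movie", type="string"),
]

retriever = SelfQueryRetriever.from_llm(
    llm=ChatOpenAI(temperature=0),
    vectorstore=vectorstore,
    document_contents="Brief summary of a movie",
    metadata_field_info=metadata_field_info,
)
retriever.invoke("What are some sci-fi movies?")
```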

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-02-06 10:50:50 -08:00
Frank
ef082c77b1
community[minor]: add github file loader to load any github file content b… (#15305)
### Description
Support loading any GitHub file content based on file extension.

Why not use the [git
loader](https://python.langchain.com/docs/integrations/document_loaders/git#load-existing-repository-from-disk)?
The git loader clones the whole repo even when you are only interested in part
of the files, which is too heavy. This GithubFileLoader only downloads the
files you are interested in.
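A minimal usage sketch (repo, token, and filter are placeholders; parameter names are assumptions):

```python
from langchain_community.document_loaders.github import GithubFileLoader

loader = GithubFileLoader(
    repo="langchain-ai/langchain",
    access_token="YOUR_GITHUB_TOKEN",
    github_api_url="https://api.github.com",
    file_filter=lambda file_path: file_path.endswith(".md"),  # only fetch markdown files
)
docs = loader.load()
```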

### Twitter handle
my twitter: @shufanhaotop

---------

Co-authored-by: Hao Fan <h_fan@apple.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-02-06 09:42:33 -08:00
老阿張
ac662b3698
docs: Fix typo in amadeus.ipynb (#16916)
Description: "enviornment should be  environment"? 🤔
Issue: Typo
Dependencies: Nope
Twitter handle: laoazhang
2024-02-06 09:42:05 -08:00
Jan de Boer
2d8015554c
docs: Link to Brave Website added (#16958)
**Description:** Link to the Brave Website added to the
`brave-search.ipynb` notebook.
This notebook is shown in the docs as an example for the brave tool.

**Issue:** There was no reference on where / how to get an API key
 
**Dependencies:** none
 
**Twitter handle:** not for this one :)
2024-02-05 18:29:16 -08:00
os1ma
fd88e0f800
docs: update StreamlitCallbackHandler example (#16970)
- **Description:** docs: update StreamlitCallbackHandler example.
  - **Issue:** None
  - **Dependencies:** None

I have updated the example for StreamlitCallbackHandler in the
documentation below.
https://python.langchain.com/docs/integrations/callbacks/streamlit

Previously, the example used `initialize_agent`, which has been
deprecated, so I've updated it to use `create_react_agent` instead. Many
langchain users are likely searching for examples of combining
`create_react_agent` or `openai_tools_agent_chain` with
StreamlitCallbackHandler. I'm sure this update will be really helpful
for them!

Unfortunately, writing unit tests for this example is difficult, so I
have not written any tests. I have run this code in a standalone Python
script file and ensured it runs correctly.
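For reference, the updated pattern looks roughly like this (model, tool, and prompt choices are placeholders, not the exact notebook contents):

```python
import streamlit as st
from langchain import hub
from langchain.agents import AgentExecutor, create_react_agent
from langchain_community.callbacks import StreamlitCallbackHandler
from langchain_community.tools import DuckDuckGoSearchRun
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-3.5-turbo", streaming=True)
tools = [DuckDuckGoSearchRun(name="Search")]
agent = create_react_agent(llm, tools, hub.pull("hwchase17/react"))
agent_executor = AgentExecutor(agent=agent, tools=tools)

if prompt := st.chat_input():
    st.chat_message("user").write(prompt)
    with st.chat_message("assistant"):
        st_callback = StreamlitCallbackHandler(st.container())
        response = agent_executor.invoke({"input": prompt}, {"callbacks": [st_callback]})
        st.write(response["output"])
```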
2024-02-05 18:20:59 -08:00
Marc Mahe
f08a9139d2
docs: update mistral docs for version 0.1+ (#17011)
**Description:**
Updated integration page for mistralai.
2024-02-05 18:03:12 -08:00
Ikko Eltociear Ashimine
5f5f5acbc5
docs: fix typo in dspy.ipynb (#16996)
langugage -> language
2024-02-05 17:31:06 -08:00
Eugene Yurtsev
609ea019b2
docs: Update streaming documentation (#17066)
Updating streaming documentation following fix of JSON parser for
streaming json.
2024-02-05 17:24:46 -08:00
Bagatur
d8f41d0521
docs: add youtube link (#17065) 2024-02-05 16:12:56 -08:00
Harrison Chase
83fbf0e11a
docs: add structured tools howto to agents (#15772)
Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-02-05 15:53:01 -08:00
Alex Boury
334b6ebdf3
community[minor]: Breebs docs retriever (#16578)
- **Description:** Implementation of breeb retriever with integration
tests ->
libs/community/tests/integration_tests/retrievers/test_breebs.py and
documentation (notebook) ->
docs/docs/integrations/retrievers/breebs.ipynb.
  - **Dependencies:** None
2024-02-05 15:51:08 -08:00
Nova Kwok
eb7b05885f
docs: Fix typo in quickstart.ipynb (#16859)
- **Description:** "load HTML **form** web URLs" should be "load HTML
**from** web URLs"? 🤔
  - **Issue:** Typo
  - **Dependencies:** Nope
  - **Twitter handle:** n0vad3v
2024-02-05 15:50:11 -08:00
Shorthills AI
cf0b29b6d2
docs: fixing a minor grammatical mistake (#16931) 2024-02-05 15:49:47 -08:00
Shivani Modi
fcb875629d
docs: Updating documentation for Konko provider (#16953)
- **Description:** A small update to the Konko provider documentation.

---------

Co-authored-by: Shivani Modi <shivanimodi@Shivanis-MacBook-Pro.local>
2024-02-05 15:49:13 -08:00
Benjamin Muskalla
973ba0d84b
docs: Fix Copilot name (#16956)
The official name is "GitHub Copilot"
2024-02-05 15:48:47 -08:00
IMRAN KHAN
4b17699818
docs: add 2 more tutorials to the list in youtube.mdx (#16998)
- **Description:** add 2 more tutorials to the list in youtube.mdx, 
  - **Twitter handle:** EhThing
2024-02-05 15:48:34 -08:00
Supreet Takkar
ae33979813
community[patch]: Allow adding ARNs as model_id to support Amazon Bedrock custom models (#16800)
- **Description:** Adds an additional class variable to `BedrockBase`
called `provider` that allows sending a model provider such as amazon,
cohere, ai21, etc.
Up until now, the model provider is extracted from the `model_id` using
the first part before the `.`, such as `amazon` for
`amazon.titan-text-express-v1` (see [supported list of Bedrock model IDs
here](https://docs.aws.amazon.com/bedrock/latest/userguide/model-ids-arns.html)).
But for custom Bedrock models where the ARN of the provisioned
throughput must be supplied, the `model_id` is like
`arn:aws:bedrock:...` so the `model_id` cannot be extracted from this. A
model `provider` is required by the LangChain Bedrock class to perform
model-based processing. To allow the same processing to be performed for
custom-models of a specific base model type, passing this `provider`
argument can help solve the issues.
The alternative considered here was the use of
`provider.arn:aws:bedrock:...` which then requires ARN to be extracted
and passed separately when invoking the model. The proposed solution
here is simpler and also does not cause issues for current models
already using the Bedrock class.
  - **Issue:** N/A
  - **Dependencies:** N/A
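Sketch of the intended usage (the ARN and provider value below are placeholders):

```python
import boto3
from langchain_community.llms import Bedrock

bedrock_client = boto3.client("bedrock-runtime", region_name="us-east-1")

llm = Bedrock(
    client=bedrock_client,
    model_id="arn:aws:bedrock:us-east-1:123456789012:provisioned-model/abc123",  # placeholder ARN
    provider="cohere",  # which provider format to use, since it cannot be parsed from the ARN
    model_kwargs={"temperature": 0},
)
```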

---------

Co-authored-by: Piyush Jain <piyushjain@duck.com>
2024-02-05 14:28:03 -08:00
Vadim Kudlay
75b6fa1134
nvidia-ai-endpoints[patch]: Support User-Agent metadata and minor fixes. (#16942)
- **Description:** Several meta/usability updates, including User-Agent.
  - **Issue:** 
- User-Agent metadata for tracking connector engagement. @milesial
please check and advise.
- Better error messages. Tries harder to find a request ID. @milesial
requested.
- Client-side image resizing for multimodal models. Hope to upgrade to
Assets API solution in around a month.
- `client.payload_fn` allows you to modify payload before network
request. Use-case shown in doc notebook for kosmos_2.
- `client.last_inputs` put back in to allow for advanced
support/debugging.
  - **Dependencies:** 
- Attempts to pull in PIL for image resizing. If not installed, prints
out "please install" message, warns it might fail, and then tries
without resizing. We are waiting on a more permanent solution.

For LC viz: @hinthornw 
For NV viz: @fciannella @milesial @vinaybagade

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-02-05 12:24:53 -08:00
Erick Friis
6ffd5b15bc
pinecone: init pkg (#16556)
2024-02-05 11:55:01 -08:00
Erick Friis
db6af21395
docs: exa contents (#16555) 2024-02-05 11:15:06 -08:00
Nicolas Grenié
54fcd476bb
docs: Update ollama examples with new community libraries (#17007)
- **Description:** Updating one line code sample for Ollama with new
**langchain_community** package
  - **Issue:**
  - **Dependencies:** none
  - **Twitter handle:**  @picsoung
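For reference, the one-liner now looks roughly like this (model name is a placeholder):

```python
from langchain_community.llms import Ollama

llm = Ollama(model="llama2")
print(llm.invoke("Tell me a joke about llamas"))
```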
2024-02-04 15:13:29 -08:00
Erick Friis
afdd636999
docs: partner packages (#16960) 2024-02-02 15:12:21 -08:00
Ashley Xu
66adb95284
docs: BigQuery Vector Search went public review and updated docs (#16896)
Update the docs for BigQuery Vector Search
2024-02-02 10:26:44 -08:00
Massimiliano Pronesti
71f9ea33b6
docs: add quantization to vllm and update API (#16950)
- **Description:** Update vLLM docs to include instructions on how to
use quantized models, as well as to replace the deprecated methods.
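A sketch of the quantization usage (model and kwargs are illustrative, not taken verbatim from the docs):

```python
from langchain_community.llms import VLLM

llm = VLLM(
    model="TheBloke/Llama-2-7b-Chat-AWQ",
    trust_remote_code=True,
    max_new_tokens=128,
    vllm_kwargs={"quantization": "awq"},  # forwarded to the underlying vLLM engine
)
print(llm.invoke("What is the capital of France?"))
```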
2024-02-02 10:24:49 -08:00
Radhakrishnan
3b0fa9079d
docs: Updated integration doc for aleph alpha (#16844)
Description: Updated doc for llm/aleph_alpha with new functions: invoke.
Changed structure of the document to match the required one.
Issue: https://github.com/langchain-ai/langchain/issues/15664
Dependencies: None
Twitter handle: None

---------

Co-authored-by: Radhakrishnan Iyer <radhakrishnan.iyer@ibm.com>
2024-02-02 09:28:06 -08:00
Erick Friis
6fc2835255
docs: fix broken links (#16855) 2024-02-01 17:29:38 -08:00
Erick Friis
b1a847366c
community: revert SQL Stores (#16912)
This reverts commit cfc225ecb3.


https://github.com/langchain-ai/langchain/pull/15909#issuecomment-1922418097

These will have existed in langchain-community 0.0.16 and 0.0.17.
2024-02-01 16:37:40 -08:00
akira wu
f7c709b40e
doc: fix typo in message_history.ipynb (#16877)
- **Description:** just fixed a small typo in the documentation in the
`expression_language/how_to/message_history` section
[here](https://python.langchain.com/docs/expression_language/how_to/message_history)
2024-02-01 13:30:29 -08:00
Shorthills AI
0bca0f4c24
Docs: Fixed grammatical mistake (#16858)
Co-authored-by: Vishal <141389263+VishalYadavShorthillsAI@users.noreply.github.com>
Co-authored-by: Sanskar Tanwar <142409040+SanskarTanwarShorthillsAI@users.noreply.github.com>
Co-authored-by: UpneetShorthillsAI <144228282+UpneetShorthillsAI@users.noreply.github.com>
Co-authored-by: HarshGuptaShorthillsAI <144897987+HarshGuptaShorthillsAI@users.noreply.github.com>
Co-authored-by: AdityaKalraShorthillsAI <143726711+AdityaKalraShorthillsAI@users.noreply.github.com>
Co-authored-by: SakshiShorthillsAI <144228183+SakshiShorthillsAI@users.noreply.github.com>
Co-authored-by: AashiGuptaShorthillsAI <144897730+AashiGuptaShorthillsAI@users.noreply.github.com>
Co-authored-by: ShamshadAhmedShorthillsAI <144897733+ShamshadAhmedShorthillsAI@users.noreply.github.com>
Co-authored-by: ManpreetShorthillsAI <142380984+ManpreetShorthillsAI@users.noreply.github.com>
Co-authored-by: Aayush <142384656+AayushShorthillsAI@users.noreply.github.com>
Co-authored-by: BajrangBishnoiShorthillsAi <148060486+BajrangBishnoiShorthillsAi@users.noreply.github.com>
2024-02-01 11:28:15 -08:00
Harel Gal
93366861c7
docs: Indicated Guardrails for Amazon Bedrock preview status (#16769)
Added notification about limited preview status of Guardrails for Amazon
Bedrock feature to code example.

---------

Co-authored-by: Piyush Jain <piyushjain@duck.com>
2024-02-01 10:41:48 -08:00
Erick Friis
17e886388b
nomic: init pkg (#16853)
Co-authored-by: Lance Martin <lance@langchain.dev>
2024-01-31 16:46:35 -08:00
Bagatur
b0347f3e2b
docs: add csv use case (#16756) 2024-01-30 09:39:46 -08:00
Jacob Lee
c6724a39f4
Fix rephrase step in chatbot use case (#16763) 2024-01-29 23:25:25 -08:00
Bob Lin
546b757303
community: Add ChatGLM3 (#15265)
Add [ChatGLM3](https://github.com/THUDM/ChatGLM3) and updated
[chatglm.ipynb](https://python.langchain.com/docs/integrations/llms/chatglm)

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
2024-01-29 20:30:52 -08:00
Marina Pliusnina
a1ce7ab672
adding parameter for changing the language in SpacyEmbeddings (#15743)
Description: Added a parameter that makes it possible to change the language
model in SpacyEmbeddings. The default value is still the same:
"en_core_web_sm", so it shouldn't affect code that previously did not
specify this parameter, but it is no longer hard-coded and is easy to
change in case you want to use it with other languages or models.
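A sketch of the intended usage; the parameter name (`model_name`) and the Catalan model below are assumptions for illustration:

```python
from langchain_community.embeddings import SpacyEmbeddings

# Default stays "en_core_web_sm"; pass another spaCy model to switch language
embeddings = SpacyEmbeddings(model_name="ca_core_news_sm")
vectors = embeddings.embed_documents(["Bon dia!", "Com estàs?"])
```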

Issue: At Barcelona Supercomputing Center in Aina project
(https://github.com/projecte-aina), a project for Catalan Language
Models and Resources, we would like to use Langchain for one of our
current projects and we would like to comment that Langchain, while
being a very powerful and useful open-source tool, is pretty much
focused on English language. We would like to contribute to make it a
bit more adaptable for using with other languages.

Dependencies: This change requires the Spacy library and a language
model, specified in the model parameter.

Tag maintainer: @dev2049

Twitter handle: @projecte_aina

---------

Co-authored-by: Marina Pliusnina <marina.pliusnina@bsc.es>
Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
2024-01-29 20:30:34 -08:00
baichuan-assistant
f8f2649f12
community: Add Baichuan LLM to community (#16724)
- **Description:** Add Baichuan LLM to integration/llm, also updated
related docs.

Co-authored-by: BaiChuanHelper <wintergyc@WinterGYCs-MacBook-Pro.local>
2024-01-29 20:08:24 -08:00
thiswillbeyourgithub
1d082359ee
community: add support for callable filters in FAISS (#16190)
- **Description:**
Filtering in a FAISS vectorstore is very inflexible and doesn't allow
that many use cases. I think supporting a callable like this enables a lot:
regular expressions, conditions on multiple keys, etc. **Note:** I had to
manually alter a test. I don't understand if it was faulty to begin with
or if there is something funky going on.
- **Issue:** None
- **Dependencies:** None
- **Twitter handle:** None
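A minimal sketch of a callable filter over the metadata dict (documents and predicate are illustrative):

```python
from langchain_community.vectorstores import FAISS
from langchain_core.documents import Document
from langchain_openai import OpenAIEmbeddings

docs = [
    Document(page_content="foo", metadata={"source": "a.txt", "page": 1}),
    Document(page_content="foo", metadata={"source": "b.txt", "page": 2}),
]
db = FAISS.from_documents(docs, OpenAIEmbeddings())

# The filter can now be any callable that takes the metadata dict and returns a bool
results = db.similarity_search(
    "foo",
    filter=lambda metadata: metadata["page"] > 1 and metadata["source"].endswith(".txt"),
)
```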

Signed-off-by: thiswillbeyourgithub <26625900+thiswillbeyourgithub@users.noreply.github.com>
2024-01-29 20:05:56 -08:00
Jacob Lee
4a027e622f
docs[patch]: Lower temperature in chatbot usecase notebooks for consistency (#16750)
CC @baskaryan
2024-01-29 17:27:13 -08:00
Jacob Lee
12d2b2ebcf
docs[minor]: LCEL rewrite of chatbot use-case (#16414)
CC @baskaryan @hwchase17

TODO:
- [x] Draft of main quickstart
- [x] Index intro page
- [x] Add subpage guide for Memory management
- [x] Add subpage guide for Retrieval
- [x] Add subpage guide for Tool usage
- [x] Add LangSmith traces illustrating query transformation
2024-01-29 17:08:54 -08:00
Bassem Yacoube
85e93e05ed
community[minor]: Update OctoAI LLM, Embedding and documentation (#16710)
This PR includes updates for OctoAI integrations:
- The LLM class was updated to fix a bug that occurs with multiple
sequential calls
- The Embedding class was updated to support the new GTE-Large endpoint
released on OctoAI lately
- The documentation jupyter notebook was updated to reflect using the
new LLM sdk
Thank you!
2024-01-29 13:57:17 -08:00
Hank
6d6226d96d
docs: Remove accidental extra ``` in QuickStart doc. (#16740)
Description: One too many set of triple-ticks in a sample code block in
the QuickStart doc was causing "\`\`\`shell" to appear in the shell
command that was being demonstrated. I just deleted the extra "```".
Issue: Didn't see one
Dependencies: None
2024-01-29 13:55:26 -08:00
Volodymyr Machula
32c5be8b73
community[minor]: Connery Tool and Toolkit (#14506)
## Summary

This PR implements the "Connery Action Tool" and "Connery Toolkit".
Using them, you can integrate Connery actions into your LangChain agents
and chains.

Connery is an open-source plugin infrastructure for AI.

With Connery, you can easily create a custom plugin with a set of
actions and seamlessly integrate them into your LangChain agents and
chains. Connery will handle the rest: runtime, authorization, secret
management, access management, audit logs, and other vital features.
Additionally, Connery and our community offer a wide range of
ready-to-use open-source plugins for your convenience.

Learn more about Connery:

- GitHub: https://github.com/connery-io/connery-platform
- Documentation: https://docs.connery.io
- Twitter: https://twitter.com/connery_io

## TODOs

- [x] API wrapper
   - [x] Integration tests
- [x] Connery Action Tool
   - [x] Docs
   - [x] Example
   - [x] Integration tests
- [x] Connery Toolkit
  - [x] Docs
  - [x] Example
- [x] Formatting (`make format`)
- [x] Linting (`make lint`)
- [x] Testing (`make test`)
2024-01-29 12:45:03 -08:00
Neli Hateva
c95facc293
langchain[minor], community[minor]: Implement Ontotext GraphDB QA Chain (#16019)
- **Description:** Implement Ontotext GraphDB QA Chain
  - **Issue:** N/A
  - **Dependencies:** N/A
  - **Twitter handle:** @OntotextGraphDB
2024-01-29 12:25:53 -08:00
Kirushikesh DB
47bd58dc11
docs: Added illustration of using RetryOutputParser with LLMChain (#16722)
**Description:**
Updated the retry.ipynb notebook, which contains the illustrations of
RetryOutputParser in LangChain. The notebook lacked an explanation of how
RetryOutputParser works with existing chains. This change
adds some code to illustrate the workflow of using RetryOutputParser
with a user chain.

Changes:
1. Replaced RetryWithErrorOutputParser with RetryOutputParser, as the
markdown text says.
2. Added code at the end of the notebook to define a chain which passes
the LLM completions to the retry parser, and which can be customised for
user needs.

**Issue:** 
Since RetryOutputParser/RetryWithErrorOutputParser does not implement
the parse function, it cannot be used with LLMChain directly like
[this](https://python.langchain.com/docs/expression_language/cookbook/prompt_llm_parser#prompttemplate-llm-outputparser).
This has also raised various issues (#15133, #12175, #11719, still open); instead
of adding new features/code changes, it's best to explain "how to
integrate LLMChain with retry parsers" clearly with an example in the
corresponding notebook.

Inspired from:
https://github.com/langchain-ai/langchain/issues/15133#issuecomment-1868972580
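The added chain looks roughly like this (a sketch, not the exact notebook cell; the Pydantic model and query are illustrative):

```python
from langchain.output_parsers import PydanticOutputParser, RetryOutputParser
from langchain.prompts import PromptTemplate
from langchain_core.pydantic_v1 import BaseModel, Field
from langchain_core.runnables import RunnableLambda, RunnableParallel
from langchain_openai import OpenAI

class Action(BaseModel):
    action: str = Field(description="action to take")
    action_input: str = Field(description="input to the action")

parser = PydanticOutputParser(pydantic_object=Action)
retry_parser = RetryOutputParser.from_llm(parser=parser, llm=OpenAI(temperature=0))

prompt = PromptTemplate(
    template="Answer the user query.\n{format_instructions}\n{query}\n",
    input_variables=["query"],
    partial_variables={"format_instructions": parser.get_format_instructions()},
)

# Pass both the raw completion and the prompt value to the retry parser
completion_chain = prompt | OpenAI(temperature=0)
main_chain = RunnableParallel(
    completion=completion_chain, prompt_value=prompt
) | RunnableLambda(lambda x: retry_parser.parse_with_prompt(**x))

main_chain.invoke({"query": "who is leo di caprios gf?"})
```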

---------

Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
2024-01-29 11:24:52 -08:00
Jael Gu
a1aa3a657c
community[patch]: Milvus supports add & delete texts by ids (#16256)
# Description

To support [langchain
indexing](https://python.langchain.com/docs/modules/data_connection/indexing)
as requested by users, vectorstore Milvus needs to support:
- document addition by id (`add_documents` method with `ids` argument)
- delete by id (`delete` method with `ids` argument)

Example usage:

```python
from langchain.indexes import SQLRecordManager, index
from langchain.schema import Document
from langchain_community.vectorstores import Milvus
from langchain_openai import OpenAIEmbeddings

collection_name = "test_index"
embedding = OpenAIEmbeddings()
vectorstore = Milvus(embedding_function=embedding, collection_name=collection_name)

namespace = f"milvus/{collection_name}"
record_manager = SQLRecordManager(
    namespace, db_url="sqlite:///record_manager_cache.sql"
)
record_manager.create_schema()

doc1 = Document(page_content="kitty", metadata={"source": "kitty.txt"})
doc2 = Document(page_content="doggy", metadata={"source": "doggy.txt"})

index(
    [doc1, doc1, doc2],
    record_manager,
    vectorstore,
    cleanup="incremental",  # None, "incremental", or "full"
    source_id_key="source",
)
```

# Fix issues

Fix https://github.com/milvus-io/milvus/issues/30112

---------

Signed-off-by: Jael Gu <mengjia.gu@zilliz.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-01-29 11:19:50 -08:00
Abhinav
8e44363ec9
langchain_community: Update documentation for installing llama-cpp-python on windows (#16666)
**Description** : This PR updates the documentation for installing
llama-cpp-python on Windows.

- Updates install command to support pyproject.toml
- Makes CPU/GPU install instructions clearer
- Adds reinstall with GPU support command

**Issue**: Existing
[documentation](https://python.langchain.com/docs/integrations/llms/llamacpp#compiling-and-installing)
lists the following commands for installing llama-cpp-python
```
python setup.py clean
python setup.py install
```
The current version of the repo does not include a `setup.py` and uses a
`pyproject.toml` instead.
This can be replaced with
```
python -m pip install -e .
```
As explained in
https://github.com/abetlen/llama-cpp-python/issues/965#issuecomment-1837268339
**Dependencies**: None
**Twitter handle**: None

---------

Co-authored-by: blacksmithop <angstycoder101@gmaii.com>
2024-01-29 08:41:29 -08:00
Benito Geordie
f3fdc5c5da
community: Added integrations for ThirdAI's NeuralDB with Retriever and VectorStore frameworks (#15280)
**Description:** Adds ThirdAI NeuralDB retriever and vectorstore
integration. NeuralDB is a CPU-friendly and fine-tunable text retrieval
engine.
2024-01-29 08:35:42 -08:00
Jonathan Bennion
815896ff13
langchain: pubmed tool path update in doc (#16716)
- **Description:** The current pubmed tool documentation is referencing
the path to langchain core, not the path to the tool in community. The
old path redirects anyway, but for the efficiency of using the more direct
path, this updates the documentation so it references the new path.
  - **Dependencies:** no dependencies
  - **Twitter handle:** rooftopzen
2024-01-29 08:25:29 -08:00
Lance Martin
1bfadecdd2
Update Slack agent toolkit (#16732)
Co-authored-by: taimoOptTech <132860814+taimo3810@users.noreply.github.com>
2024-01-29 08:03:44 -08:00
Choi JaeHun
ba70630829
docs: Syntax correction according to langchain version update in 'Retry Parser' tutorial example (#16699)
- **Description:** Syntax correction according to langchain version
update in 'Retry Parser' tutorial example,
- **Issue:** #16698

---------

Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
2024-01-28 16:53:04 -08:00
Bob Lin
0866a984fe
Update n_gpu_layers"s description (#16685)
The `n_gpu_layers` parameter in `llama.cpp` supports the use of `-1`,
which means to offload all layers to the GPU, so the document has been
updated.

Ref:
35918873b4/llama_cpp/server/settings.py (L29C22-L29C117)


35918873b4/llama_cpp/llama.py (L125)
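For example (model path is a placeholder):

```python
from langchain_community.llms import LlamaCpp

llm = LlamaCpp(
    model_path="/path/to/model.gguf",
    n_gpu_layers=-1,  # -1 offloads all layers to the GPU
    n_batch=512,
)
```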
2024-01-28 16:46:50 -08:00
Daniel Erenrich
0600998f38
community: Wikidata tool support (#16691)
- **Description:** Adds Wikidata support to langchain. Can read out
documents from Wikidata.
  - **Issue:** N/A
- **Dependencies:** Adds implicit dependencies for
`wikibase-rest-api-client` (for turning items into docs) and
`mediawikiapi` (for hitting the search endpoint)
  - **Twitter handle:** @derenrich

You can see an example of this tool used in a chain
[here](https://nbviewer.org/urls/d.erenrich.net/upload/Wikidata_Langchain.ipynb)
or
[here](https://nbviewer.org/urls/d.erenrich.net/upload/Wikidata_Lars_Kai_Hansen.ipynb)
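A minimal sketch of the tool (import paths are best-effort assumptions):

```python
from langchain_community.tools.wikidata.tool import WikidataQueryRun
from langchain_community.utilities.wikidata import WikidataAPIWrapper

wikidata = WikidataQueryRun(api_wrapper=WikidataAPIWrapper())
print(wikidata.run("Alan Turing"))
```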

2024-01-28 16:45:21 -08:00
Owen Sims
e451c8adc1
Community: Update Ionic Shopping Docs (#16700)
- **Description:** Update to docs as originally introduced in
https://github.com/langchain-ai/langchain/pull/16649 (reviewed by
@baskaryan),
- **Twitter handle:**
[@ioniccommerce](https://twitter.com/ioniccommerce)
2024-01-28 16:39:49 -08:00
Yelin Zhang
bc7607a4e9
docs: remove iprogress warnings (#16697)
- **Description:** removes iprogress warning texts from notebooks,
resulting in documentation that is a little nicer to read
2024-01-28 16:38:14 -08:00
ARKA1112
3c387bc12d
docs: Error when importing packages from pydantic [docs] (#16564)
URL : https://python.langchain.com/docs/use_cases/extraction

Desc:
While the following statement executes successfully, using the imported
packages throws the error described below.
 ```py 
from pydantic import BaseModel, Field, validator
```
Code: 
```python
from langchain.output_parsers import PydanticOutputParser
from langchain.prompts import (
    PromptTemplate,
)
from langchain_openai import OpenAI
from pydantic import BaseModel, Field, validator

# Define your desired data structure.
class Joke(BaseModel):
    setup: str = Field(description="question to set up a joke")
    punchline: str = Field(description="answer to resolve the joke")

    # You can add custom validation logic easily with Pydantic.
    @validator("setup")
    def question_ends_with_question_mark(cls, field):
        if field[-1] != "?":
            raise ValueError("Badly formed question!")
        return field
```

Error:
```md
PydanticUserError: The `field` and `config` parameters are not available
in Pydantic V2, please use the `info` parameter instead.

For further information visit
https://errors.pydantic.dev/2.5/u/validator-field-config-info
```

Solution:
Instead of doing:
```py
from pydantic import BaseModel, Field, validator
```
We should do:
```py
from langchain_core.pydantic_v1 import BaseModel, Field, validator
```
Thanks.

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-01-27 16:46:48 -08:00
Leonid Ganeline
5e73603e8a
docs: DeepInfra provider page update (#16665)
- added description, links
- consistent formatting
- added links to the example pages
2024-01-27 16:05:29 -08:00
Jarod Stewart
0bc397957b
docs: document Ionic Tool (#16649)
- **Description:** Documentation for the Ionic Tool. A shopping
assistant tool that effortlessly adds e-commerce capabilities to your
Agent.
2024-01-26 16:02:07 -08:00
Seungwoo Ryu
570b4f8e66
docs: Update openai_tools.ipynb (#16618)
typo
2024-01-26 15:26:27 -08:00
Callum
6a75ef74ca
docs: Fix typo in XML agent documentation (#16645)
This is a tiny PR that just replaces "moduels" with "modules" in the
documentation for XML agents.
2024-01-26 14:59:46 -08:00
baichuan-assistant
70ff54eace
community[minor]: Add Baichuan Text Embedding Model and Baichuan Inc introduction (#16568)
- **Description:** Adding Baichuan Text Embedding Model and Baichuan Inc
introduction.

Baichuan Text Embedding ranks #1 in C-MTEB leaderboard:
https://huggingface.co/spaces/mteb/leaderboard

Co-authored-by: BaiChuanHelper <wintergyc@WinterGYCs-MacBook-Pro.local>
2024-01-26 12:57:26 -08:00
Ghani
e30c6662df
Langchain-community : EdenAI chat integration. (#16377)
- **Description:** This PR adds [EdenAI](https://edenai.co/) for the
chat model (already available in LLM & Embeddings). It supports all
[ChatModel] functionality: generate, async generate, stream, astream and
batch. A detailed notebook was added.

  - **Dependencies**: No dependencies are added as we call a rest API.

---------

Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2024-01-26 09:56:43 -05:00
Bagatur
61e876aad8
openai[patch]: Explicitly support embedding dimensions (#16596) 2024-01-25 15:16:04 -08:00
Bagatur
6c89507988
docs: add rag citations page (#16549) 2024-01-25 13:51:41 -08:00
Bagatur
db80832e4f
docs: output parser nits (#16588) 2024-01-25 13:20:48 -08:00
Bagatur
ef42d9d559
core[patch], community[patch], openai[patch]: consolidate openai tool… (#16485)
… converters

One way to convert anything to an OAI function:
convert_to_openai_function
One way to convert anything to an OAI tool: convert_to_openai_tool
Corresponding bind functions on OAI models: bind_functions, bind_tools
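A small sketch of the consolidated converters (the example schema is illustrative):

```python
from langchain_core.pydantic_v1 import BaseModel, Field
from langchain_core.utils.function_calling import (
    convert_to_openai_function,
    convert_to_openai_tool,
)
from langchain_openai import ChatOpenAI

class GetWeather(BaseModel):
    """Get the current weather in a given location."""
    location: str = Field(description="City and state, e.g. San Francisco, CA")

openai_function = convert_to_openai_function(GetWeather)
openai_tool = convert_to_openai_tool(GetWeather)

# Or bind directly on the model
llm_with_tools = ChatOpenAI(model="gpt-3.5-turbo").bind_tools([GetWeather])
```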
2024-01-25 13:18:46 -08:00
Brian Burgin
148347e858
community[minor]: Add LiteLLM Router Integration (#15588)
community:

  - **Description:**
- Add new ChatLiteLLMRouter class that allows a client to use a LiteLLM
Router as a LangChain chat model.
- Note: The existing ChatLiteLLM integration did not cover the LiteLLM
Router class.
    - Add tests and Jupyter notebook.
  - **Issue:** None
  - **Dependencies:** Relies on existing ChatLiteLLM integration
  - **Twitter handle:** @bburgin_0
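Sketch of the intended usage (the router model list below is a placeholder):

```python
from litellm import Router
from langchain_community.chat_models import ChatLiteLLMRouter

model_list = [
    {
        "model_name": "gpt-4",
        "litellm_params": {"model": "azure/gpt-4-deployment", "api_key": "...", "api_base": "..."},
    },
]
litellm_router = Router(model_list=model_list)

chat = ChatLiteLLMRouter(router=litellm_router)
chat.invoke("Hello!")
```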

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-01-25 11:03:05 -08:00
Bob Lin
35e60728b7
docs: Fix broken urls (#16559) 2024-01-25 09:20:05 -08:00
Bob Lin
6023953ea7
docs: Fix github link (#16560) 2024-01-25 09:19:09 -08:00
Erick Friis
adc008407e
exa: init pkg (#16553) 2024-01-24 20:57:17 -07:00
Rave Harpaz
c4e9c9ca29
community[minor]: Add OCI Generative AI integration (#16548)
- **Description:** Adding Oracle Cloud Infrastructure Generative AI
integration. Oracle Cloud Infrastructure (OCI) Generative AI is a fully
managed service that provides a set of state-of-the-art, customizable
large language models (LLMs) that cover a wide range of use cases, and
which is available through a single API. Using the OCI Generative AI
service you can access ready-to-use pretrained models, or create and
host your own fine-tuned custom models based on your own data on
dedicated AI clusters.
https://docs.oracle.com/en-us/iaas/Content/generative-ai/home.htm
  - **Issue:** None
  - **Dependencies:** OCI Python SDK

Linting and testing passed.

We provide unit tests. However, we cannot provide integration tests due
to Oracle policies that prohibit public sharing of API keys.

---------

Co-authored-by: Arthur Cheng <arthur.cheng@oracle.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-01-24 18:23:50 -08:00
Leonid Ganeline
f6a05e964b
docs: Hugging Face update (#16490)
- added missed integrations to the platform page
- updated integration examples: added links and fixed formats
2024-01-24 16:59:00 -08:00
Harel Gal
a91181fe6d
community[minor]: add support for Guardrails for Amazon Bedrock (#15099)
Added support for optionally supplying 'Guardrails for Amazon Bedrock'
on both types of model invocations (batch/regular and streaming) and for
all models supported by the Amazon Bedrock service.

@baskaryan  @hwchase17

```python 
llm = Bedrock(model_id="<model_id>", client=bedrock,
                  model_kwargs={},
                  guardrails={"id": " <guardrail_id>",
                              "version": "<guardrail_version>",
                               "trace": True}, callbacks=[BedrockAsyncCallbackHandler()])

class BedrockAsyncCallbackHandler(AsyncCallbackHandler):
    """Async callback handler that can be used to handle callbacks from langchain."""

    async def on_llm_error(
            self,
            error: BaseException,
            **kwargs: Any,
    ) -> Any:
        reason = kwargs.get("reason")
        if reason == "GUARDRAIL_INTERVENED":
           # kwargs contains additional trace information sent by 'Guardrails for Bedrock' service.
            print(f"""Guardrails: {kwargs}""")


# streaming 
llm = Bedrock(model_id="<model_id>", client=bedrock,
                  model_kwargs={},
                  streaming=True,
                  guardrails={"id": "<guardrail_id>",
                              "version": "<guardrail_version>"})
```

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-01-24 14:44:19 -08:00
Martin Kolb
04651f0248
community[minor]: VectorStore integration for SAP HANA Cloud Vector Engine (#16514)
- **Description:**
This PR adds a VectorStore integration for SAP HANA Cloud Vector Engine,
which is an upcoming feature in the SAP HANA Cloud database
(https://blogs.sap.com/2023/11/02/sap-hana-clouds-vector-engine-announcement/).

  - **Issue:** N/A
- **Dependencies:** [SAP HANA Python
Client](https://pypi.org/project/hdbcli/)
  - **Twitter handle:** @sapopensource

Implementation of the integration:
`libs/community/langchain_community/vectorstores/hanavector.py`

Unit tests:
`libs/community/tests/unit_tests/vectorstores/test_hanavector.py`

Integration tests:
`libs/community/tests/integration_tests/vectorstores/test_hanavector.py`

Example notebook:
`docs/docs/integrations/vectorstores/hanavector.ipynb`

Access credentials for execution of the integration tests can be
provided to the maintainers.

---------

Co-authored-by: sascha <sascha.stoll@sap.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-01-24 14:05:07 -08:00
Bob Lin
54dd8e52a8
docs: Updated comments about n_gpu_layers in the Metal section (#16501)
Ref: https://github.com/langchain-ai/langchain/issues/16502
2024-01-24 13:38:48 -08:00
Anastasiia Manokhina
ce595f0203
docs:Updated integration docs structure for chat/google_vertex_ai_palm (#16201)
Description: 

- checked that the doc chat/google_vertex_ai_palm is using new
functions: invoke, stream etc.
- added Gemini example
- fixed wrong output in Sanskrit example

Issue: https://github.com/langchain-ai/langchain/issues/15664
Dependencies: None
Twitter handle: None
2024-01-24 10:21:32 -08:00
Erick Friis
8d299645f9
docs: rm output (#16519) 2024-01-24 10:19:34 -07:00
Lance Martin
0b740ebd49
Update SQL agent toolkit docs (#16409) 2024-01-24 09:03:17 -08:00
Francisco Ingham
13cf4594f4
docs: added a few suggestions for sql docs (#16508) 2024-01-24 08:48:41 -08:00
Eugene Yurtsev
6004e9706f
Docs: Add streaming section (#16468)
Adds a streaming section to LangChain documentation, explaining
`stream`/`astream` API and `astream_events` API.
2024-01-24 10:38:39 -05:00
Tipwheal
66aafc0573
Docs: typo in tool use quick start page (#16494)
Minor typo fix
2024-01-24 10:37:12 -05:00
BeatrixCohere
2b2285dac0
docs: Update cohere rerank and comparison docs (#16198)
- **Description:** Update the cohere rerank docs to use cohere
embeddings
  - **Issue:** n/a
  - **Dependencies:** n/a
  - **Twitter handle:** n/a
2024-01-23 19:39:42 -08:00
Raunak
476bf8b763
community[patch]: Load list of files using UnstructuredFileLoader (#16216)
- **Description:** Updated the `_get_elements()` function of the
`UnstructuredFileLoader` class to check whether the argument self.file_path
is a file or a list of files. If it is a list of files, it iterates
over the list of file paths, calls the partition function for each one,
and appends the results to the elements list. If self.file_path is not a
list, it calls the partition function as before.
  
  - **Issue:** Fixed #15607,
  - **Dependencies:** NA
  - **Twitter handle:** NA
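In practice this allows something like the following (file paths are placeholders):

```python
from langchain_community.document_loaders import UnstructuredFileLoader

# The loader now accepts a list of paths as well as a single path
loader = UnstructuredFileLoader(
    ["./example_data/whatsapp_chat.txt", "./example_data/layout-parser-paper.pdf"]
)
docs = loader.load()
```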

Co-authored-by: H161961 <Raunak.Raunak@Honeywell.com>
2024-01-23 19:37:37 -08:00
Xudong Sun
019b6ebe8d
community[minor]: Add iFlyTek Spark LLM chat model support (#13389)
- **Description:** This PR enables LangChain to access the iFlyTek's
Spark LLM via the chat_models wrapper.
  - **Dependencies:** websocket-client ^1.6.1
  - **Tag maintainer:** @baskaryan 

### SparkLLM chat model usage

Get SparkLLM's app_id, api_key and api_secret from [iFlyTek SparkLLM API
Console](https://console.xfyun.cn/services/bm3) (for more info, see
[iFlyTek SparkLLM Intro](https://xinghuo.xfyun.cn/sparkapi) ), then set
environment variables `IFLYTEK_SPARK_APP_ID`, `IFLYTEK_SPARK_API_KEY`
and `IFLYTEK_SPARK_API_SECRET` or pass parameters when using it like the
demo below:

```python3
from langchain.chat_models.sparkllm import ChatSparkLLM

client = ChatSparkLLM(
    spark_app_id="<app_id>",
    spark_api_key="<api_key>",
    spark_api_secret="<api_secret>"
)
```
2024-01-23 19:23:46 -08:00
Eugene Yurtsev
d898d2f07b
docs: Fix version in which astream_events was released (#16481)
Fix typo in version
2024-01-23 18:41:44 -08:00
bu2kx
ff3163297b
community[minor]: Add KDBAI vector store (#12797)
Addition of KDBAI vector store (https://kdb.ai).

Dependencies: `kdbai_client` v0.1.2 Python package.

Sample notebook: `docs/docs/integrations/vectorstores/kdbai.ipynb`

Tag maintainer: @bu2kx
Twitter handle: @kxsystems
2024-01-23 18:37:01 -08:00
JongRok BAEK
4ec3fe4680
docs: Updated integration docs structure for chat/anthropic (#16268)
Description: 
- Added output and environment variables
- Updated the documentation for chat/anthropic, changing references from
`langchain.schema` to `langchain_core.prompts`.

Issue: https://github.com/langchain-ai/langchain/issues/15664
Dependencies: None
Twitter handle: None

Since this is my first open-source PR, please feel free to point out any
mistakes, and I'll be eager to make corrections.
2024-01-23 18:36:28 -08:00
Shivani Modi
4e160540ff
community[minor]: Adding Konko Completion endpoint (#15570)
This PR introduces update to Konko Integration with LangChain.

1. **New Endpoint Addition**: Integration of a new endpoint to utilize
completion models hosted on Konko.

2. **Chat Model Updates for Backward Compatibility**: We have updated
the chat models to ensure backward compatibility with previous OpenAI
versions.

3. **Updated Documentation**: Comprehensive documentation has been
updated to reflect these new changes, providing clear guidance on
utilizing the new features and ensuring seamless integration.

Thank you to the LangChain team for their exceptional work and for
considering this PR. Please let me know if any additional information is
needed.

---------

Co-authored-by: Shivani Modi <shivanimodi@Shivanis-MacBook-Pro.local>
Co-authored-by: Shivani Modi <shivanimodi@Shivanis-MBP.lan>
2024-01-23 18:22:32 -08:00
Facundo Santiago
92e6a641fd
feat: adding paygo api support for Azure ML / Azure AI Studio (#14560)
- **Description:** Introducing support for LLMs and Chat models running
in Azure AI studio and Azure ML using the new deployment mode
pay-as-you-go (model as a service).
- **Issue:** NA
- **Dependencies:** None.
- **Tag maintainer:** @prakharg-msft @gdyre 
- **Twitter handle:** @santiagofacundo

Examples added:
*
[docs/docs/integrations/llms/azure_ml.ipynb](https://github.com/santiagxf/langchain/blob/santiagxf/azureml-endpoints-paygo-community/docs/docs/integrations/chat/azureml_endpoint.ipynb)
*
[docs/docs/integrations/chat/azureml_chat_endpoint.ipynb](https://github.com/santiagxf/langchain/blob/santiagxf/azureml-endpoints-paygo-community/docs/docs/integrations/chat/azureml_chat_endpoint.ipynb)

---------

Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
2024-01-23 17:08:51 -08:00
baichuan-assistant
20fcd49348
community: Fix Baichuan Chat. (#15207)
- **Description:** Baichuan Chat (with both Baichuan-Turbo and
Baichuan-Turbo-192K models) has updated its APIs, and there are breaking
changes. For example, BAICHUAN_SECRET_KEY is removed in the latest API
but is still required in LangChain. Baichuan's LangChain integration
needs to be updated to the latest version (see the sketch below).
  - **Issue:** #15206
  - **Dependencies:** None,
  - **Twitter handle:** None

@hwchase17.
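
A minimal sketch of the updated usage (the parameter name and the message are illustrative assumptions, not taken from this PR):

```python
from langchain_community.chat_models import ChatBaichuan

# After this fix only the API key should be needed; BAICHUAN_SECRET_KEY is no longer part of the API.
chat = ChatBaichuan(baichuan_api_key="<api_key>")  # parameter name is an assumption
chat.invoke("你好")
```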

Co-authored-by: BaiChuanHelper <wintergyc@WinterGYCs-MacBook-Pro.local>
2024-01-23 17:01:57 -08:00
gcheron
cfc225ecb3
community: SQLStrStore/SQLDocStore provide an easy SQL alternative to InMemoryStore to persist data remotely in a SQL storage (#15909)
**Description:**

- Implement `SQLStrStore` and `SQLDocStore` classes that inherit from
`BaseStore` to allow data to be persisted remotely on a SQL server.
- SQL is widely used, and sometimes we do not want to install a caching
solution like Redis.
- Multiple issues/comments complain that there is no easy remote,
persistent solution that is not in-memory (users want to replace
InMemoryStore), e.g.,
https://github.com/langchain-ai/langchain/issues/14267,
https://github.com/langchain-ai/langchain/issues/15633,
https://github.com/langchain-ai/langchain/issues/14643,
https://stackoverflow.com/questions/77385587/persist-parentdocumentretriever-of-langchain
- This is particularly painful when wanting to use
`ParentDocumentRetriever`
- This implementation is particularly useful when:
     * it's expensive to construct an InMemoryDocstore/dict
     * you want to retrieve documents from remote sources
     * you just want to reuse existing objects
- This implementation integrates well with PGVector, indeed, when using
PGVector, you already have a SQL instance running. `SQLDocStore` is a
convenient way of using this instance to store documents associated to
vectors. An integration example with ParentDocumentRetriever and
PGVector is provided in docs/docs/integrations/stores/sql.ipynb or
[here](https://github.com/gcheron/langchain/blob/sql-store/docs/docs/integrations/stores/sql.ipynb).
- It persists `str` and `Document` objects but can easily be extended; a
minimal usage sketch is shown below.
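
The sketch assumes the store is importable from `langchain_community.storage` and accepts a `connection_string`/`collection_name` pair (all of these names are assumptions based on the description above); `mset`/`mget` come from the `BaseStore` interface:

```python
from langchain_community.storage import SQLDocStore  # import path is an assumption
from langchain_core.documents import Document

# Reuse the SQL instance that already backs PGVector; the DSN is a placeholder.
docstore = SQLDocStore(
    connection_string="postgresql+psycopg2://user:pass@localhost:5432/db",
    collection_name="parent_documents",  # assumed constructor argument
)

docstore.mset([("doc-1", Document(page_content="Parent document persisted in SQL"))])
parents = docstore.mget(["doc-1"])
```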

 **Issue:**

Provide an easy SQL alternative to `InMemoryStore`.

---------

Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
2024-01-23 16:50:48 -08:00
dudgeon
26b2ad6d5b
Fixed typo on quickstart.ipynb (#16482)
- **Description:** Quick typo fix: `inpect` >> `inspect`
  - **Issue:** N/A
  - **Dependencies:** any dependencies required for this change,
  - **Twitter handle:** @geoffdudgeon
2024-01-23 16:50:13 -08:00
Eugene Yurtsev
39d1cbfecf
Docs: Document astream_events API (#16300)
Document astream events API
2024-01-23 12:32:45 -05:00
Florian MOREL
4b7969efc5
community[minor]: New documents loader for visio files (with extension .vsdx) (#16171)
**Description** : New documents loader for visio files (with extension
.vsdx)

A [visio file](https://fr.wikipedia.org/wiki/Microsoft_Visio) (with
extension .vsdx) is associated with Microsoft Visio, a diagram creation
software. It stores information about the structure, layout, and
graphical elements of a diagram. This format facilitates the creation
and sharing of visualizations in areas such as business, engineering,
and computer science.

A Visio file can contain multiple pages. Some of them may serve as the
background for others, and this can occur across multiple layers. This
loader extracts the textual content from each page and its associated
pages, enabling the extraction of all visible text from each page,
similar to what an OCR algorithm would do.

**Dependencies** : xmltodict package
2024-01-22 22:07:03 -08:00
KhoPhi
fb41b68ea1
docs: Update with LCEL examples to Ollama & ChatOllama Integration notebook (#16194)
- **Description:** Updated the Chat/Ollama docs notebook with LCEL chain
examples

- **Issue:**  #15664 I'm a new contributor 😊

- **Dependencies:** No dependencies

- **Twitter handle:** 

Comments:

- How do I truncate the output of the stream in the notebook if and/or
when it goes on and on, even for the most basic prompts?

Edit:

Looking forward to feedback @baskaryan

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-01-22 22:05:59 -08:00
Michael Gorham
3b0226b2c6
docs: Update redis_chat_message_history.ipynb (#16344)
## Problem
Spent several hours trying to figure out how to pass
`RedisChatMessageHistory` as a `GetSessionHistoryCallable` with a
different REDIS hostname. This example kept connecting to
`redis://localhost:6379`, but I wanted to connect to a server not hosted
locally.

## Cause
Assumption that the user knows how to implement `BaseChatMessageHistory` and
`GetSessionHistoryCallable`.

## Solution
Update documentation to show how to explicitly set the REDIS hostname
using a lambda function much like the MongoDB and SQLite examples.
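
A sketch of the documented approach (the Redis URL is a placeholder and the stand-in chain is illustrative):

```python
from langchain_community.chat_message_histories import RedisChatMessageHistory
from langchain_core.messages import AIMessage
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain_core.runnables import RunnableLambda
from langchain_core.runnables.history import RunnableWithMessageHistory

REDIS_URL = "redis://my-remote-redis:6379/0"  # point at the non-local server

# Stand-in chain; in the notebook this is a prompt | chat-model pipeline.
prompt = ChatPromptTemplate.from_messages(
    [MessagesPlaceholder(variable_name="history"), ("human", "{input}")]
)
chain = prompt | RunnableLambda(lambda _: AIMessage(content="ok"))

chain_with_history = RunnableWithMessageHistory(
    chain,
    lambda session_id: RedisChatMessageHistory(session_id, url=REDIS_URL),
    input_messages_key="input",
    history_messages_key="history",
)
chain_with_history.invoke(
    {"input": "hello"}, config={"configurable": {"session_id": "user-123"}}
)
```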
2024-01-22 21:59:59 -08:00
Ian
c98994c3c9
docs: Improve notebook to show how to use tidb to store history messages (#16420)
After merging [PR
#16304](https://github.com/langchain-ai/langchain/pull/16304), I
realized that our notebook example for integrating TiDB with LangChain
was too basic. To make it more useful and user-friendly, I plan to
create a detailed example. This will show how to use TiDB for saving
history messages in LangChain, offering a clearer, more practical guide
for our users.
2024-01-22 21:58:37 -08:00
Eugene Yurtsev
c88750d54b
Docs: Agent streaming notebooks (#15858)
Update information about streaming in the agents section. Show how to
use astream_events to get token by token streaming.
2024-01-22 21:54:55 -05:00
Eugene Yurtsev
e5672bc944
docs: Re-write custom agent to show to write a tools agent (#15907)
Shows how to write a tools agent rather than a functions agent.
2024-01-22 17:28:31 -08:00
Boris Feld
404abf139a
community: Add CometLLM tracing context var (#15765)
I also added LANGCHAIN_COMET_TRACING to enable the CometLLM tracing
integration, similar to other tracing integrations. This makes it easier
for end users to enable tracing than importing the callback and passing it
manually.
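
A sketch of the environment-variable switch (the model and prompt are illustrative; Comet credentials still need to be configured separately):

```python
import os

os.environ["LANGCHAIN_COMET_TRACING"] = "true"  # enable CometLLM tracing globally

from langchain_openai import ChatOpenAI  # any chat model works; this choice is an assumption

llm = ChatOpenAI()
llm.invoke("Hello, traced world!")  # the run is traced without manually passing a callback
```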

(This is the same content as
https://github.com/langchain-ai/langchain/pull/14650 but rebased and
squashed as something seems to confuse Github Action).
2024-01-22 15:17:16 -08:00
Jennifer Melot
d6275e47f2
docs: Updated integration docs structure for tools/arxiv (#16091) (#16250)
- **Description:** Updated docs for tools/arxiv to use `AgentExecutor`
and `invoke`
  - **Issue:** #15664
  - **Dependencies:** None
  - **Twitter handle:** None
2024-01-22 14:34:22 -08:00
ChengZi
a950fa0487
docs: add milvus multitenancy doc (#16177)
- **Description:** add milvus multitenancy doc, it is an example for
this [pr](https://github.com/langchain-ai/langchain/pull/15740) .
  - **Issue:** No,
  - **Dependencies:** No,
  - **Twitter handle:** No

Signed-off-by: ChengZi <chen.zhang@zilliz.com>
2024-01-22 14:25:26 -08:00
parkererickson-tg
b26a22f307
community[minor]: add TigerGraph support (#16280)
**Description:** Add support for querying TigerGraph databases through
the InquiryAI service.
**Issue**: N/A
**Dependencies:** N/A
**Twitter handle:** @TigerGraphDB
2024-01-22 14:07:44 -08:00
Christophe Bornet
8da34118bc
docs: Add documentation for Cassandra Document Loader (#16282) 2024-01-22 14:06:21 -08:00
Jonathan Algar
774e543e1f
docs: fix formatting issue in rockset.ipynb (#16328)
**Description:** randomly discovered while working on another PR
https://github.com/quarto-dev/quarto-cli/discussions/8131#discussioncomment-8027706

@anubhav94N ICYI
2024-01-22 13:59:45 -08:00
Ian
b9f5104e6c
community[minor]: Store Message History to TiDB Database (#16304)
This pull request integrates the TiDB database into LangChain for
storing message history, marking one of several steps towards a
comprehensive integration of TiDB with LangChain.


A simple usage
```python
from datetime import datetime
from langchain_community.chat_message_histories import TiDBChatMessageHistory

history = TiDBChatMessageHistory(
    connection_string="mysql+pymysql://<user>:<password>@<host>:4000/<db>?ssl_ca=/etc/ssl/cert.pem&ssl_verify_cert=true&ssl_verify_identity=true",
    session_id="code_gen",
    earliest_time=datetime.utcnow(),  # Optional to set earliest_time to load messages after this time point.
)

history.add_user_message("hi! How's feature going?")
history.add_ai_message("It's almot done")
```
2024-01-22 13:56:56 -08:00
Sarthak Chaure
dd5b8107b1
Docs: Updated callbacks/index.mdx (#16404)
The callbacks getting-started demo code was updated, replacing the
chain.run() command (which is now deprecated) with the updated
chain.invoke() command.
Solves the following issue: #16379
Twitter/X: @Hazxhx
2024-01-22 16:10:19 -05:00
Omar-aly
873de14cd8
docs: update vectorstores/llm_rails integration doc (#16199)
Description:
- Updated the docs for the vectorstores integration module
llm_rails.ipynb

Issue:
- [Connected to Issue
#15664](https://github.com/langchain-ai/langchain/issues/15664)
 
Dependencies:
- N/A

Co-authored-by: omaraly23 <112936089+omaraly22@users.noreply.github.com>
2024-01-22 11:40:08 -08:00
Lance Martin
369e90d427
docs: Minor update to Robocorp toolkit docs (#16399) 2024-01-22 11:33:13 -08:00
Hadi
a1c0cf21c9
docs: Update import library for StreamlitCallbackHandler (#16401)
- **Description:** Some code has been moved from `langchain` to
`langchain_community`, and so the documentation is not yet up to date.
This is specifically true for `StreamlitCallbackHandler`, which emits a
`warning` message if not loaded from `langchain_community` (see the
import sketch below).
- **Issue:** I don't see a # issue that could address this problem but
perhaps #10744,
- **Dependencies:** Since it's a documentation change no dependencies
are required
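
A minimal sketch of the updated import (the Streamlit container and the commented agent call are illustrative):

```python
import streamlit as st
from langchain_community.callbacks import StreamlitCallbackHandler

st_callback = StreamlitCallbackHandler(st.container())
# e.g. agent_executor.invoke({"input": prompt}, {"callbacks": [st_callback]})
```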
2024-01-22 11:33:00 -08:00
JaguarDB
7ecd2f22ac
community[patch]: update documentation on jaguar vector store (#16346)
- **Description:** update documentation on jaguar vector store:
Instruction for setting up jaguar server and usage of text_tag.
  - **Issue:** 
  - **Dependencies:** 
  - **Twitter handle:**

---------

Co-authored-by: JY <jyjy@jaguardb>
2024-01-22 11:28:38 -08:00
Iskren Ivov Chernev
fc196cab12
community[minor]: DeepInfra support for chat models (#16380)
Add deepinfra chat models support.

This is https://github.com/langchain-ai/langchain/pull/14234 re-opened
from my branch (so maintainers can edit).
2024-01-22 11:22:17 -08:00
Bagatur
eac91b60c9
docs: qa rag nit (#16400) 2024-01-22 11:17:32 -08:00
Bagatur
1dc6c1ce06
core[patch], community[patch], langchain[patch], docs: Update SQL chains/agents/docs (#16168)
Revamp SQL use cases docs. In the process update SQL chains and agents.
2024-01-22 08:19:08 -08:00
Jatin Chawda
05162928c0
Docs: Fixed Urls of AsyncHtmlLoader, AsyncChromiumLoader and HTML2Text links in Web scraping Docs (#16365)
Fixing links in documentation.
2024-01-22 11:03:03 -05:00
Christophe Bornet
f9be877ed7
Docs: Add self-querying retriever and store to AstraDB provider doc (#16362)
Add self-querying retriever and store to AstraDB provider doc
2024-01-22 10:24:28 -05:00
Mateusz Szewczyk
076dbb1a8f
docs: IBM watsonx.ai Use invoke instead of __call__ (#16371)
- **Description:** Updating documentation of the IBM
[watsonx.ai](https://www.ibm.com/products/watsonx-ai) LLM to use
`invoke` instead of `__call__`
- **Dependencies:**
[ibm-watsonx-ai](https://pypi.org/project/ibm-watsonx-ai/),
  - **Tag maintainer:**

The following warning is shown when I use the `run` and `__call__`
methods:
```
LangChainDeprecationWarning: The function `__call__` was deprecated in LangChain 0.1.7 and will be removed in 0.2.0. Use invoke instead.
  warn_deprecated(
```

We need to update the documentation to use the `invoke` method.
2024-01-22 10:22:03 -05:00
Bob Lin
c6bd7778b0
Use invoke instead of __call__ (#16369)
The following warning will be displayed when I use
`llm(PROMPT)`:

```python
/Users/169/llama.cpp/venv/lib/python3.11/site-packages/langchain_core/_api/deprecation.py:117: LangChainDeprecationWarning: The function `__call__` was deprecated in LangChain 0.1.7 and will be removed in 0.2.0. Use invoke instead.
  warn_deprecated(
```

So I changed to standard usage.
2024-01-22 10:18:43 -05:00
Virat Singh
c2a614eddc
community: Add PolygonLastQuote Tool and Toolkit (#15990)
**Description:** 
In this PR, I am adding a `PolygonLastQuote` Tool, which can be used to
get the latest price quote for a given ticker / stock.

Additionally, I've added a Polygon Toolkit, which we can use to
encapsulate future tools that we build for Polygon.

**Twitter handle:** [@virattt](https://twitter.com/virattt)

---------

Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
2024-01-21 15:08:55 -08:00
Bagatur
1e29b676d5
core[patch]: simple fallback streaming (#16055) 2024-01-19 16:31:54 -08:00
Hamza Kyamanywa
39b3c6d94c
langchain[patch]: Add konlpy based text splitting for Korean (#16003)
- **Description:** Adds a text splitter based on
[Konlpy](https://konlpy.org/en/latest/#start), a Python package
for natural language processing (NLP) of the Korean language (it is
like spaCy or NLTK for Korean). See the sketch below.
- **Dependencies:** Konlpy would have to be installed before this
splitter is used.
  - **Twitter handle:** @untilhamza
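
A minimal sketch, assuming the splitter is exposed as `KonlpyTextSplitter` and that `konlpy` is installed; the sample text is illustrative:

```python
from langchain.text_splitter import KonlpyTextSplitter  # class name is an assumption

text_splitter = KonlpyTextSplitter(chunk_size=100, chunk_overlap=0)
docs = text_splitter.create_documents(["안녕하세요. 한국어 문장을 자연스럽게 분할합니다."])
print(docs)
```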
2024-01-19 09:44:56 -08:00
Hongyu Lin
9b0a531aa2
doc: Fix small typo in quickstart (#16164)
- **Description:** fix small typo in quickstart

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-01-19 09:44:22 -08:00
Lance Martin
881d1c3ec5
Update MultiON toolkit docs (#16286) 2024-01-19 09:37:20 -08:00
Bagatur
6f7a414955
docs: fix links (#16284) 2024-01-19 08:51:12 -08:00
Lance Martin
f63906a9c2
Test and update MultiON agent toolkit docs (#16235) 2024-01-18 20:24:35 -08:00
Ashley Xu
0f99646ca6
docs: add the enrollment form for BigQueryVectorSearch (#16240)
This PR adds the enrollment form for BigQueryVectorSearch.
2024-01-18 18:34:06 -08:00
Eugene Yurtsev
177af65dc4
core[minor]: RFC Add astream_events to Runnables (#16172)
This PR adds `astream_events` method to Runnables to make it easier to
stream data from arbitrary chains.

* Streaming only works properly in async right now
* One should use `astream()` if mixing in imperative code, as might
be done with tool implementations
* Astream_log has been modified with minimal additive changes, so no
breaking changes are expected
* Underlying callback code / tracing code should be refactored at some
point to handle things more consistently (OK for now)

- ~~[ ] verify event for on_retry~~ does not work until we implement
streaming for retry
- ~~[ ] Any renaming? Should we rename "event" to "hook"?~~
- [ ] Any other feedback from community?
- [x] throw NotImplementedError for `RunnableEach` for now

## Example

See this [Example
Notebook](dbbc7fa0d6/docs/docs/modules/agents/how_to/streaming_events.ipynb)
for an example of streaming in the context of an Agent.
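
A minimal sketch of consuming the API outside an agent (the prompt and model choice are illustrative, not part of this PR):

```python
import asyncio

from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI  # any chat model; this choice is an assumption

chain = ChatPromptTemplate.from_messages([("human", "{question}")]) | ChatOpenAI()


async def main() -> None:
    async for event in chain.astream_events({"question": "hello"}, version="v1"):
        if event["event"] == "on_chat_model_stream":
            # Token-by-token chunks, per the reference table below.
            print(event["data"]["chunk"].content, end="", flush=True)


asyncio.run(main())
```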

## Event Hooks Reference

Here is a reference table that shows some events that might be emitted
by the various Runnable objects.
Definitions for some of the Runnable are included after the table.


| event | name | chunk | input | output |
|----------------------|------------------|---------------------------------|-----------------------------------------------|-------------------------------------------------|
| on_chat_model_start | [model name] | | {"messages": [[SystemMessage, HumanMessage]]} | |
| on_chat_model_stream | [model name] | AIMessageChunk(content="hello") | | |
| on_chat_model_end | [model name] | | {"messages": [[SystemMessage, HumanMessage]]} | {"generations": [...], "llm_output": None, ...} |
| on_llm_start | [model name] | | {'input': 'hello'} | |
| on_llm_stream | [model name] | 'Hello' | | |
| on_llm_end | [model name] | | 'Hello human!' | |
| on_chain_start | format_docs | | | |
| on_chain_stream | format_docs | "hello world!, goodbye world!" | | |
| on_chain_end | format_docs | | [Document(...)] | "hello world!, goodbye world!" |
| on_tool_start | some_tool | | {"x": 1, "y": "2"} | |
| on_tool_stream | some_tool | {"x": 1, "y": "2"} | | |
| on_tool_end | some_tool | | | {"x": 1, "y": "2"} |
| on_retriever_start | [retriever name] | | {"query": "hello"} | |
| on_retriever_chunk | [retriever name] | {documents: [...]} | | |
| on_retriever_end | [retriever name] | | {"query": "hello"} | {documents: [...]} |
| on_prompt_start | [template_name] | | {"question": "hello"} | |
| on_prompt_end | [template_name] | | {"question": "hello"} | ChatPromptValue(messages: [SystemMessage, ...]) |


Here are declarations associated with the events shown above:

`format_docs`:

```python
from typing import List

from langchain_core.documents import Document
from langchain_core.runnables import RunnableLambda


def format_docs(docs: List[Document]) -> str:
    '''Format the docs.'''
    return ", ".join([doc.page_content for doc in docs])

format_docs = RunnableLambda(format_docs)
```

`some_tool`:

```python
from langchain_core.tools import tool


@tool
def some_tool(x: int, y: str) -> dict:
    '''Some_tool.'''
    return {"x": x, "y": y}
```

`prompt`:

```python
from langchain_core.prompts import ChatPromptTemplate

template = ChatPromptTemplate.from_messages(
    [("system", "You are Cat Agent 007"), ("human", "{question}")]
).with_config({"run_name": "my_template", "tags": ["my_template"]})
```
2024-01-18 21:27:01 -05:00
Erick Friis
aa35b43bcd
docs, google-vertex[patch]: function docs (#16231) 2024-01-18 13:15:09 -08:00
Erick Friis
f2b2d59e82
docs: transport and client options docs (#16226)
2024-01-18 12:23:04 -08:00
Rajesh Thallam
6bc6d64a12
langchain_google_vertexai[patch]: Add support for SystemMessage for Gemini chat model (#15933)
- **Description:** In Google Vertex AI, Gemini chat models currently
don't support SystemMessage. This PR adds support for it,
but only if a user provides an additional convert_system_message_to_human flag
during model initialization (in this case, the SystemMessage would be
prepended to the first HumanMessage; see the sketch below). **NOTE:** The implementation is
similar to #14824


- **Twitter handle:** rajesh_thallam
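
A minimal sketch of the new flag (model name and messages are illustrative):

```python
from langchain_core.messages import HumanMessage, SystemMessage
from langchain_google_vertexai import ChatVertexAI

chat = ChatVertexAI(
    model_name="gemini-pro",
    convert_system_message_to_human=True,  # SystemMessage is folded into the first HumanMessage
)
chat.invoke(
    [
        SystemMessage(content="Answer in one short sentence."),
        HumanMessage(content="What is Vertex AI?"),
    ]
)
```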

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-01-18 10:22:07 -08:00
jzaldi
ed118950fe
docs: Updated integration docs structure for llm/google_vertex_ai_palm (#16091)
- **Description**: Updated doc for llm/google_vertex_ai_palm with new
functions: `invoke`, `stream`... Changed structure of the document to
match the required one.
- **Issue**: #15664 
- **Dependencies**: None
- **Twitter handle**: None

---------

Co-authored-by: Jorge Zaldívar <jzaldivar@google.com>
2024-01-18 09:45:27 -08:00
Bagatur
aa2e642ce3
docs: tool use nits (#16211) 2024-01-18 09:17:53 -08:00
Eugene Zapolsky
6b9e3ed9e9
google-vertexai[minor]: added safety_settings property to gemini wrapper (#15344)
**Description:** The Gemini model has quite annoying default safety_settings.
In addition, the current VertexAI class doesn't provide a property
to override such settings.
So, this PR aims to 
 - add safety_settings property to VertexAI
- fix an issue with incorrect LLM output parsing when the LLM responds with
an appropriate 'blocked' response
- fix an issue with incorrect parsing of LLM output when the Gemini API blocks
the prompt itself as inappropriate
- add safety_settings related tests

I'm not familiar enough with the LangChain code base and guidelines, so any
comments and/or suggestions are very welcome.
 
**Issue:** it will likely fix #14841

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-01-18 08:54:30 -08:00
Bagatur
27ad65cc68
docs: add tool use diagrams (#16207) 2024-01-18 07:59:54 -08:00
Bagatur
27ed2673da
docs: model io order (#16163) 2024-01-17 13:13:31 -08:00
Bagatur
2af813c7eb
docs: bump sphinx>=5 (#16162) 2024-01-17 12:57:34 -08:00
David DeCaprio
ec9642d667
docs: Updated MongoDB Chat history example notebook to use LCEL format. (#15750)
- **Description:** Updated the MongoDB example integration notebook to
latest standards
- **Issue:**
[15664](https://github.com/langchain-ai/langchain/issues/15664)
  - **Dependencies:** None
  - **Twitter handle:** @davedecaprio

---------

Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
2024-01-17 12:07:17 -08:00
Bagatur
e7ddec1f2c
docs: change parallel doc name (#16152) 2024-01-17 10:04:34 -08:00
Joshua Carroll
bc0cb1148a
docs: Fix StreamlitChatMessageHistory docs to latest API (#16072)
- **Description:** Update [this
page](https://python.langchain.com/docs/integrations/memory/streamlit_chat_message_history)
to use the latest API
  - **Issue:** https://github.com/langchain-ai/langchain/issues/13995
  - **Dependencies:** None
  - **Twitter handle:** @OhSynap
2024-01-17 09:42:10 -08:00
David DeCaprio
9c2f1f07a0
docs: Updated SQLite example to use LCEL and SQLChatMessageHistory (#16094)
- **Description:** Updated the SQLite example integration notebook to
latest standards
- **Issue:**
[15664](https://github.com/langchain-ai/langchain/issues/15664)
  - **Dependencies:** None
  - **Twitter handle:** @davedecaprio
2024-01-17 09:39:44 -08:00
Abhinav
da96c511d1
docs: Replace azure_cosmos_db_vector_search with azure_cosmos_db in Cosmos DB Documentation (#16122)
**Description**: This PR fixes an error in the documentation for Azure
Cosmos DB Integration.
**Issue**: The correct way to import `AzureCosmosDBVectorSearch` is
```python
from langchain_community.vectorstores.azure_cosmos_db import (
    AzureCosmosDBVectorSearch,
)
```
While the
[documentation](https://python.langchain.com/docs/integrations/vectorstores/azure_cosmos_db)
states it to be
```python
from langchain_community.vectorstores.azure_cosmos_db_vector_search import (
    AzureCosmosDBVectorSearch,
    CosmosDBSimilarityType,
)
```
As you can see in
[azure_cosmos_db.py](c323742f4f/libs/langchain/langchain/vectorstores/azure_cosmos_db.py (L1C45-L2))
**Dependencies:**: None
**Twitter handle**: None
2024-01-17 09:11:16 -08:00
purificant
3606c5d5e9
infra: update poetry 1.6.1 -> 1.7.1 (#15027) 2024-01-17 08:51:20 -08:00
Ikko Eltociear Ashimine
a35e5f19a8
docs: Update gradient.ipynb (#16149)
Enviroment -> Environment
2024-01-17 08:48:24 -08:00
David
c323742f4f
mistralai[minor]: Add embeddings (#15282)
- **Description:** Adds MistralAIEmbeddings class for embeddings, using
the new official API.
- **Dependencies:** mistralai
- **Tag maintainer**: @efriis, @hwchase17
- **Twitter handle:** @LMS_David_RS

Create `integrations/text_embedding/mistralai.ipynb`: an example
notebook for MistralAIEmbeddings class
Modify `embeddings/__init__.py`: Import the class
Create `embeddings/mistralai.py`: The embedding class
Create `integration_tests/embeddings/test_mistralai.py`: The test file.
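
A minimal usage sketch (assumes the key is supplied via the MISTRAL_API_KEY environment variable; the texts are illustrative):

```python
from langchain_mistralai import MistralAIEmbeddings

embeddings = MistralAIEmbeddings()  # picks up MISTRAL_API_KEY from the environment
query_vector = embeddings.embed_query("What is LangChain?")
doc_vectors = embeddings.embed_documents(["First document", "Second document"])
```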

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-01-16 17:48:37 -08:00
Leonid Ganeline
f974eb5b8b
docs: updated Anyscale page (#16107)
- added description
- fixed broken links
- added setting instructions
- added the Chat model reference
2024-01-16 17:13:51 -08:00
Bagatur
8840a8cc95
docs: tool-use use case (#15783)
Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
2024-01-16 10:41:14 -08:00
Christophe Bornet
6b6269441c
docs: Add page for AstraDB self retriever (#16077)
Preview:
https://langchain-git-fork-cbornet-astra-self-retriever-docs-langchain.vercel.app/docs/integrations/retrievers/self_query/astradb
2024-01-16 09:50:30 -08:00
Juan Bustos
5f057f24ac
docs: Update elasticsearch.ipynb (#16090)
Fixed a typo: the parameter used for the Elasticsearch API key was
written as api_key, but the correct parameter is es_api_key.
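
For reference, a sketch with the corrected parameter name (the cloud id, key, and embedding model are placeholders):

```python
from langchain_community.vectorstores import ElasticsearchStore
from langchain_openai import OpenAIEmbeddings

store = ElasticsearchStore(
    es_cloud_id="<cloud_id>",
    es_api_key="<api_key>",  # es_api_key, not api_key
    index_name="test_index",
    embedding=OpenAIEmbeddings(),
)
```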
2024-01-16 09:49:42 -08:00
高远
061e63eef2
community[minor]: add vikingdb vecstore (#15155)
---------

Co-authored-by: gaoyuan <gaoyuan.20001218@bytedance.com>
2024-01-15 12:34:01 -08:00
Zhichao HAN
5cf06db3b3
community[minor]: add JsonRequestsWrapper tool (#15374)
**Description:** This new feature enhances the flexibility of pipeline
integration, particularly when working with RESTful APIs.
``JsonRequestsWrapper`` allows the JSON output to be decoded, instead
of text output being the only option.
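
A minimal sketch, assuming the wrapper lives alongside the existing requests wrappers in `langchain_community.utilities.requests` (the URL is a placeholder):

```python
from langchain_community.utilities.requests import JsonRequestsWrapper  # import path is an assumption

requests_wrapper = JsonRequestsWrapper()
data = requests_wrapper.get("https://api.example.com/items")  # parsed JSON, not raw text
print(data)
```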

---------

Co-authored-by: Zhichao HAN <hanzhichao2000@hotmail.com>
2024-01-15 12:27:19 -08:00
Averi Kitsch
ee378a0f40
docs: add page for Firestore Chat Message History integration (#15554)
- **Description:** Adds documentation for the
`FirestoreChatMessageHistory` integration and lists integration in
Google's documentation
  - **Issue:** NA
  - **Dependencies:** No

---------

Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
2024-01-15 11:42:33 -08:00
盐粒 Yanli
ddf4e7c633
community[minor]: Update pgvecto_rs to use its high level sdk (#15574)
- **Description:** Update pgvecto_rs to use its high level sdk, 
  - **Issue:** fix #15173
2024-01-15 11:41:59 -08:00
axiangcoding
daa9ccae52
community[patch]: deprecate ErnieBotChat and ErnieEmbeddings classes (#15862)
- **Description:** add a deprecation warning for ErnieBotChat and
ErnieEmbeddings.
- These two classes **lack maintenance** and do not use the SDK provided
by qianfan, which makes it hard to implement some key features like
streaming.
- The alternatives `langchain_community.chat_models.QianfanChatEndpoint`
and `langchain_community.embeddings.QianfanEmbeddingsEndpoint` can
completely replace these two classes; only configuration
items need to change (see the sketch below).
  - **Issue:** None,
  - **Dependencies:** None,
  - **Twitter handle:** None
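
A sketch of swapping in the replacements (the credential variable names follow the Qianfan integration's convention and are assumptions here; values are placeholders):

```python
import os

os.environ["QIANFAN_AK"] = "<access_key>"
os.environ["QIANFAN_SK"] = "<secret_key>"

from langchain_community.chat_models import QianfanChatEndpoint
from langchain_community.embeddings import QianfanEmbeddingsEndpoint

chat = QianfanChatEndpoint()              # replaces ErnieBotChat
chat.invoke("你好")

embeddings = QianfanEmbeddingsEndpoint()  # replaces ErnieEmbeddings
embeddings.embed_query("你好")
```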

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-01-15 11:14:44 -08:00
Eugene Yurtsev
7c57cfd8f0
docs: Update OpenAI functions agent (#15894)
Add info and a tip explaining when to use this agent.
2024-01-15 11:14:29 -08:00
Eugene Yurtsev
beec7259c8
docs: Add info admonitions to a few agents (#15899)
Add admonitions directly in the agent page to explain constraints and
include a
link to agent types.
2024-01-15 11:14:11 -08:00
Bigtable123
7306032dcf
docs: update baidu_qianfan_endpoint.ipynb doc (#15940)
- **Description:** Updated the docs for the chat integration module
baidu_qianfan_endpoint.ipynb
  - **Issue:**  #15664 
  - **Dependencies:**N/A
2024-01-15 11:13:21 -08:00
Bhadresh Savani
6137c7608d
docs: Integration Documentation updated run to invoke for llms/ai21.ipynb (#15889)
- **Description:** Updated Integration Documentation for
[llms/ai21.ipynb](https://github.com/langchain-ai/langchain/blob/master/docs/docs/integrations/llms/ai21.ipynb)
  - **Issue:** #15664,
  - **Dependencies:** NA,
  - **Twitter handle:** @BhadreshSavani
2024-01-15 10:53:22 -08:00
Massimiliano Pronesti
e80aab2275
docs(community): update Amadeus toolkit to langchain v0.1 (#15976)
- **Description:** docs update following the changes introduced in
#15879

2024-01-15 10:50:47 -08:00
Ashley Xu
ce7723c1e5
community[minor]: add additional support for BigQueryVectorSearch (#15904)
BigQuery vector search lets you use GoogleSQL to do semantic search,
using vector indexes for fast but approximate results, or using brute
force for exact results.

This PR:
1. Add `metadata[_job_ib]` in Document returned by any similarity search
2. Add `explore_job_stats` to enable users to explore job statistics and
improve debuggability
3. Set the minimum row limit for running create vector index.
2024-01-15 10:45:15 -08:00
Antonio Mindov
fb7e66b809
docs: fix typo in inspect runnables docs (#15994)
- **Description:** Fixing a typo related to prompts in the inspecting
runnables docs
2024-01-15 10:35:26 -08:00
Karim Lalani
14244bd7e5
community[minor]: Added document loader for SurrealDB (#15995)
Added a simple document loader to work with SurrealDB.
2024-01-15 10:32:42 -08:00
Karim Lalani
768e5e33bc
community[minor]: Fix to match SurrealDB 0.3.2 SDK (#15996)
New version of SurrealDB python sdk was causing the integration to
break.
This fix addresses that change.
2024-01-15 10:31:59 -08:00
Bagatur
60d6a416e6
docs: fix self query diagram (#16043) 2024-01-15 10:09:20 -08:00
Mahad
f7706637a8
docs: fix documentation broken link in integrations chroma (#16041)
- **Description:** Fixed a broken link in the documentation for Chroma.
  - **Issue:** 
  - **Dependencies:**
2024-01-15 08:37:03 -08:00
Erick Friis
7b084b4cc7
docs: more pip installs (#15771)
- vertex chat
- google
- some pip openai
- percent and openai
- all percent
- more
- pip
- fmt
- docs: google vertex partner docs
- fmt
- docs: more pip installs
2024-01-12 18:16:00 -08:00
Bagatur
3f75fd41cc
docs: agent table fix (#15964) 2024-01-12 17:54:55 -08:00