Commit Graph

3013 Commits (29e04454903a9a0f1210ac6e78ee8b8e0aa2ac45)

Author SHA1 Message Date
Henry eaeb8a5f71
langchain[patch]: `output_parser.py` in conversation_chat is customizable (#16945)
**Description:**
With this modification, users can customize the `FORMAT_INSTRUCTIONS`
template, allowing them to create their own prompts.

As described in
[this](https://github.com/langchain-ai/langchain/issues/10721) issue,
`FORMAT_INSTRUCTIONS` is not customizable for the output parser
unless you create your own `ConvoOutputParser` subclass. To avoid that,
this change introduces a `format_instructions` variable that users can
easily customize after initializing the agent.

For example:
```python
agent = initialize_agent(
    agent = AgentType.CHAT_CONVERSATIONAL_REACT_DESCRIPTION,
    tools = tools,
    llm = llm_agent,
    verbose = True,
    max_iterations = 3,
    early_stopping_method = 'generate',
    memory = b_w_memory,
    handle_parsing_errors = True,
    agent_kwargs={
        'system_message':PREFIX,
        'human_message':SUFFIX,
        'template_tool_response':TEMPLATE_TOOL_RESPONSE,
        }
)
agent.agent.output_parser.format_instructions = "MY CUSTOM FORMAT INSTRUCTIONS"
print(agent.agent.output_parser.get_format_instructions())
# Output: MY CUSTOM FORMAT INSTRUCTIONS
```

Other parameters like `system_message`, `human_message`, or
`template_tool_response` are already customizable; with this PR, the
remaining parameter, `FORMAT_INSTRUCTIONS` in
`langchain.agents.conversational_chat.prompt`, can be modified as well.


**Issue:**
https://github.com/langchain-ai/langchain/issues/10721

**Dependencies:**
No new dependencies required for this change

**Twitter handle:**
My GitHub username is enough. Thanks.

I hope you accept my PR.

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
5 months ago
Ryan Kraus f027696b5f
community: Added new Utility runnables for NVIDIA Riva. (#15966)
**Please tag this issue with `nvidia_genai`**

- **Description:** Added new Runnables for integrating NVIDIA Riva into
LCEL chains for Automatic Speech Recognition (ASR) and Text To Speech
(TTS).
- **Issue:** N/A
- **Dependencies:** To use these runnables, the NVIDIA Riva client
libraries are required. If they are not installed, an error will be
raised explaining how to install them. The Runnables can be safely
imported without the Riva client libraries.
- **Twitter handle:** N/A

All of the Riva Runnables are inside a single folder in the Utilities
module. In this folder are four files:
- common.py - Contains all code that is common to both TTS and ASR
- stream.py - Contains a class representing an audio stream that allows
the end user to put data into the stream like a queue.
- asr.py - Contains the RivaASR runnable
- tts.py - Contains the RivaTTS runnable

The following Python function is an example of creating a chain that
makes use of both of these Runnables:

```python
def create(
    config: Configuration,
    audio_encoding: RivaAudioEncoding,
    sample_rate: int,
    audio_channels: int = 1,
) -> Runnable[ASRInputType, TTSOutputType]:
    """Create a new instance of the chain."""
    _LOGGER.info("Instantiating the chain.")

    # create the riva asr client
    riva_asr = RivaASR(
        url=str(config.riva_asr.service.url),
        ssl_cert=config.riva_asr.service.ssl_cert,
        encoding=audio_encoding,
        audio_channel_count=audio_channels,
        sample_rate_hertz=sample_rate,
        profanity_filter=config.riva_asr.profanity_filter,
        enable_automatic_punctuation=config.riva_asr.enable_automatic_punctuation,
        language_code=config.riva_asr.language_code,
    )

    # create the prompt template
    prompt = PromptTemplate.from_template("{user_input}")

    # model = ChatOpenAI()
    model = ChatNVIDIA(model="mixtral_8x7b")  # type: ignore

    # create the riva tts client
    riva_tts = RivaTTS(
        url=str(config.riva_asr.service.url),
        ssl_cert=config.riva_asr.service.ssl_cert,
        output_directory=config.riva_tts.output_directory,
        language_code=config.riva_tts.language_code,
        voice_name=config.riva_tts.voice_name,
    )

    # construct and return the chain
    return {"user_input": riva_asr} | prompt | model | riva_tts  # type: ignore
```

The following code is an example of creating a new audio stream for
Riva:

```python
input_stream = AudioStream(maxsize=1000)
# Send bytes into the stream
for chunk in audio_chunks:
    await input_stream.aput(chunk)
input_stream.close()
```

The following code is an example of how to execute the chain with
RivaASR and RivaTTS:

```python
output_stream = asyncio.Queue()
while not input_stream.complete:
    async for chunk in chain.astream(input_stream):
        await output_stream.put(chunk)
```

Everything should be async safe and thread safe. Audio data can be put
into the input stream while the chain is running without interruptions.

---------

Co-authored-by: Hayden Wolff <hwolff@nvidia.com>
Co-authored-by: Hayden Wolff <hwolff@Haydens-Laptop.local>
Co-authored-by: Hayden Wolff <haydenwolff99@gmail.com>
Co-authored-by: Erick Friis <erick@langchain.dev>
5 months ago
François Paupier 929f071513
community[patch]: Fix error in `LlamaCpp` community LLM with Configurable Fields, 'grammar' custom type not available (#16995)
- **Description:** Ensure the `LlamaGrammar` custom type is always
available when instantiating a `LlamaCpp` LLM
  - **Issue:** #16994 
  - **Dependencies:** None
  - **Twitter handle:** @fpaupier

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
5 months ago
Leonid Ganeline 563f325034
experimental[patch]: fixed import in `experimental` (#17078) 5 months ago
Eugene Yurtsev fbab8baac5
core[patch]: Add astream events config test (#17055)
Verify that astream events propagates config correctly

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
5 months ago
Scott Nath 10bd901139
infra: add integration_tests and coverage to MAKEFILE (#17053)
- **Description:** update the community Makefile
    - adds `integration_tests`
    - adds `coverage`

- **Issue:** moving out of https://github.com/langchain-ai/langchain/pull/17014
- **Dependencies:** n/a
- **Twitter handle:** @scottnath
- **Mastodon handle:** scottnath@mastodon.social

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
5 months ago
Giulio Zani 9f0b63dba0
experimental[patch]: Fixes issue #17060 (#17062)
As described in issue #17060, the following function fails when the text
contains only one sentence. Checking for that and adding an early return
fixed the issue (a sketch of the guard is shown inline below).

```python
    def split_text(self, text: str) -> List[str]:
        """Split text into multiple components."""
        # Splitting the essay on '.', '?', and '!'
        single_sentences_list = re.split(r"(?<=[.?!])\s+", text)
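        # Sketch of the fix described above (placement assumed): with a single
        # sentence there are no inter-sentence distances to compute, so return
        # it as the only chunk.
        if len(single_sentences_list) == 1:
            return single_sentences_list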
        sentences = [
            {"sentence": x, "index": i} for i, x in enumerate(single_sentences_list)
        ]
        sentences = combine_sentences(sentences)
        embeddings = self.embeddings.embed_documents(
            [x["combined_sentence"] for x in sentences]
        )
        for i, sentence in enumerate(sentences):
            sentence["combined_sentence_embedding"] = embeddings[i]
        distances, sentences = calculate_cosine_distances(sentences)
        start_index = 0

        # Create a list to hold the grouped sentences
        chunks = []
        breakpoint_percentile_threshold = 95
        breakpoint_distance_threshold = np.percentile(
            distances, breakpoint_percentile_threshold
        )  # If you want more chunks, lower the percentile cutoff

        indices_above_thresh = [
            i for i, x in enumerate(distances) if x > breakpoint_distance_threshold
        ]  # The indices of those breakpoints on your list

        # Iterate through the breakpoints to slice the sentences
        for index in indices_above_thresh:
            # The end index is the current breakpoint
            end_index = index

            # Slice the sentence_dicts from the current start index to the end index
            group = sentences[start_index : end_index + 1]
            combined_text = " ".join([d["sentence"] for d in group])
            chunks.append(combined_text)

            # Update the start index for the next group
            start_index = index + 1

        # The last group, if any sentences remain
        if start_index < len(sentences):
            combined_text = " ".join([d["sentence"] for d in sentences[start_index:]])
            chunks.append(combined_text)
        return chunks
```

Co-authored-by: Giulio Zani <salamanderxing@Giulios-MBP.homenet.telecomitalia.it>
5 months ago
Jimmy Moore 912210ac19
core[patch]: fix _sql_record_manager mypy for #17048 (#17073)
- **Description:** Add relevant type annotations for relevant session
and query objects to resolve mypy errors when `# type: ignore` comments
are removed.
  - **Issue:** #17048
  - **Dependencies:** None,
  - **Twitter handle:** [clesiemo3](https://twitter.com/clesiemo3)
 
I attempted to solve the `UpsertionRecord` ignore, but it would require
adding a deprecated plugin or moving completely to SQLAlchemy 2.0+, from
my understanding. I'm assuming this is not something desired at this
point in time.
5 months ago
William FH 3d5e988c55
Add prompt metadata + tags (#17054) 5 months ago
Bagatur 6e2ed9671f
infra: fix breebs test lint (#17075) 5 months ago
T Cramer cf01fc3790
docs: update parse_partial_json source info (#17036)
- **Description:** Update source-link following recent license update at
open-interpreter project
  - **Issue:** N/A
  - **Dependencies:** None
5 months ago
Alex Boury 334b6ebdf3
community[minor]: Breebs docs retriever (#16578)
- **Description:** Implementation of the Breebs retriever, with integration
tests (libs/community/tests/integration_tests/retrievers/test_breebs.py) and
a documentation notebook (docs/docs/integrations/retrievers/breebs.ipynb).
  - **Dependencies:** None
5 months ago
Serena Ruan 9b279ac127
community[patch]: MLflow callback update (#16687)
Signed-off-by: Serena Ruan <serena.rxy@gmail.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
5 months ago
Mohammad Mohtashim 3c4b24b69a
community[patch]: Fix the _call of HuggingFaceHub (#16891)
Fixed the following identified issue: #16849

@baskaryan
5 months ago
Tyler Titsworth 304f3f5fc1
community[patch]: Add Progress bar to HuggingFaceEmbeddings (#16758)
- **Description:** Adds a parameter to HuggingFaceEmbeddings
called `show_progress` that enables a `tqdm` progress bar. It has no
effect if `multi_process = True`. See the sketch after this list.
  - **Issue:** n/a
  - **Dependencies:** n/a
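
A minimal, hedged sketch of the new flag (the model name is just an
illustration; `sentence-transformers` and `tqdm` must be installed):

```python
from langchain_community.embeddings import HuggingFaceEmbeddings

# show_progress=True displays a tqdm bar while embedding documents;
# it has no effect when multi_process=True.
embeddings = HuggingFaceEmbeddings(
    model_name="sentence-transformers/all-MiniLM-L6-v2",
    show_progress=True,
)
vectors = embeddings.embed_documents(["first document", "second document"])
```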
5 months ago
Supreet Takkar ae33979813
community[patch]: Allow adding ARNs as model_id to support Amazon Bedrock custom models (#16800)
- **Description:** Adds an additional class variable to `BedrockBase`
called `provider` that allows specifying a model provider such as amazon,
cohere, ai21, etc.
Up until now, the model provider is extracted from the `model_id` by taking
the first part before the `.`, such as `amazon` for
`amazon.titan-text-express-v1` (see the [supported list of Bedrock model IDs
here](https://docs.aws.amazon.com/bedrock/latest/userguide/model-ids-arns.html)).
But for custom Bedrock models where the ARN of the provisioned
throughput must be supplied, the `model_id` looks like
`arn:aws:bedrock:...`, so the provider cannot be extracted from it. A
model `provider` is required by the LangChain Bedrock class to perform
model-specific processing. Passing this `provider` argument allows the same
processing to be performed for custom models of a given base model type
(see the sketch after this list).
The alternative considered was using
`provider.arn:aws:bedrock:...`, which would require the ARN to be extracted
and passed separately when invoking the model. The proposed solution
is simpler and does not cause issues for models
already using the Bedrock class.
  - **Issue:** N/A
  - **Dependencies:** N/A
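
A hedged sketch of the new argument (the ARN below is a placeholder, and
AWS credentials/boto3 are assumed to be configured; not taken verbatim from
this PR):

```python
from langchain_community.llms import Bedrock

# For a provisioned-throughput ARN the provider cannot be inferred from
# model_id, so pass it explicitly.
llm = Bedrock(
    model_id="arn:aws:bedrock:us-east-1:123456789012:provisioned-model/EXAMPLE",
    provider="amazon",
    region_name="us-east-1",
)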

---------

Co-authored-by: Piyush Jain <piyushjain@duck.com>
5 months ago
T Cramer e022bfaa7d
langchain: add partial parsing support to JsonOutputToolsParser (#17035)
- **Description:** Add partial parsing support to JsonOutputToolsParser
- **Issue:**
[16736](https://github.com/langchain-ai/langchain/issues/16736)
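
A hedged sketch of what partial parsing enables when streaming tool calls
(the import path, model, and tool schema are assumptions for illustration,
not taken from this PR):

```python
from langchain.output_parsers.openai_tools import JsonOutputToolsParser
from langchain_openai import ChatOpenAI

add_tool = {
    "type": "function",
    "function": {
        "name": "add",
        "description": "Add two integers.",
        "parameters": {
            "type": "object",
            "properties": {"a": {"type": "integer"}, "b": {"type": "integer"}},
            "required": ["a", "b"],
        },
    },
}
chain = ChatOpenAI(temperature=0).bind(tools=[add_tool]) | JsonOutputToolsParser()
# With partial parsing, each streamed chunk is a progressively more complete
# list of tool calls rather than only a final parse.
for partial in chain.stream("What is 2 + 3? Use the add tool."):
    print(partial)
```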

---------

Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
5 months ago
calvinweb dcf973c22c
Langchain: `json_chat` don't need stop sequenes (#16335)
This is a PR about #16334.
Stop sequences aren't meaningful in `json_chat` because it relies on JSON
to work, not on completions.

---------

Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
5 months ago
Bagatur 66e45e8ab7
community[patch]: chat model mypy fixes (#17061)
Related to #17048
5 months ago
Bagatur d93de71d08
community[patch]: chat message history mypy fixes (#17059)
Related to #17048
5 months ago
Bagatur af5ae24af2
community[patch]: callbacks mypy fixes (#17058)
Related to #17048
5 months ago
Vadim Kudlay 75b6fa1134
nvidia-ai-endpoints[patch]: Support User-Agent metadata and minor fixes. (#16942)
- **Description:** Several meta/usability updates, including User-Agent.
  - **Issue:** 
- User-Agent metadata for tracking connector engagement. @milesial
please check and advise.
- Better error messages. Tries harder to find a request ID. @milesial
requested.
- Client-side image resizing for multimodal models. Hope to upgrade to
Assets API solution in around a month.
- `client.payload_fn` allows you to modify payload before network
request. Use-case shown in doc notebook for kosmos_2.
- `client.last_inputs` put back in to allow for advanced
support/debugging.
  - **Dependencies:** 
- Attempts to pull in PIL for image resizing. If not installed, prints
out "please install" message, warns it might fail, and then tries
without resizing. We are waiting on a more permanent solution.

For LC viz: @hinthornw 
For NV viz: @fciannella @milesial @vinaybagade

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
5 months ago
Nuno Campos ae56fd020a
Fix condition on custom root type in runnable history (#17017)
5 months ago
Nuno Campos f0ffebb944
Shield callback methods from cancellation: Fix interrupted runs marked as pending forever (#17010)
5 months ago
Bagatur e7b3290d30
community[patch]: fix agent_toolkits mypy (#17050)
Related to #17048
5 months ago
Erick Friis 6ffd5b15bc
pinecone: init pkg (#16556)
5 months ago
Harrison Chase 4eda647fdd
infra: add -p to mkdir in lint steps (#17013)
Previously, if this did not find a mypy cache then it wouldn't run.

This makes it always run.

Adds mypy ignore comments for existing uncaught issues to unblock other PRs.

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
5 months ago
Eugene Yurtsev fb245451d2
core[patch]: Add langsmith to printed sys information (#16899) 5 months ago
Mikhail Khludnev 2145636f1d
Nvidia trt model name for stop_stream() (#16997)
Just removing some legacy leftovers.
5 months ago
Christophe Bornet 2ef69fe11b
Add async methods to BaseChatMessageHistory and BaseMemory (#16728)
Adds:
   * async methods to BaseChatMessageHistory
   * async methods to ChatMessageHistory
   * async methods to BaseMemory
   * async methods to BaseChatMemory
   * async methods to ConversationBufferMemory
   * tests of ConversationBufferMemory's async methods
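
A hedged sketch of the new async surface (method names assume the usual
`a`-prefix convention used elsewhere in the codebase):

```python
import asyncio

from langchain.memory import ConversationBufferMemory


async def main() -> None:
    memory = ConversationBufferMemory()
    # Async counterparts of save_context / load_memory_variables.
    await memory.asave_context({"input": "hi"}, {"output": "hello!"})
    print(await memory.aload_memory_variables({}))


asyncio.run(main())
```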

  **Twitter handle:** cbornet_
5 months ago
Ryan Kraus b3c3b58f2c
core[patch]: Fixed bug in dict to message conversion. (#17023)
- **Description**: We discovered a bug converting dictionaries to
messages where the ChatMessageChunk message type isn't handled. This PR
adds support for that message type.
- **Issue**: #17022 
- **Dependencies**: None
- **Twitter handle**: None
5 months ago
Killinsun - Ryota Takeuchi bcfce146d8
community[patch]: Correct the calling to collection_name in qdrant (#16920)
## Description

In #16608, the call to `collection_name` was wrong.
This fixes it.
Sorry for the inconvenience!

## Issue

https://github.com/langchain-ai/langchain/issues/16962

## Dependencies

N/A




---------

Co-authored-by: Kumar Shivendu <kshivendu1@gmail.com>
Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
5 months ago
Erick Friis 849051102a
google-genai[patch]: fix new core typing (#16988) 5 months ago
Bagatur 35446c814e
openai[patch]: rm tiktoken model warning (#16964) 5 months ago
ccurme 0826d87ecd
langchain_mistralai[patch]: Invoke callback prior to yielding token (#16986)
- **Description:** Invoke callback prior to yielding token in stream and
astream methods for ChatMistralAI.
- **Issue:** https://github.com/langchain-ai/langchain/issues/16913
5 months ago
Erick Friis afdd636999
docs: partner packages (#16960) 6 months ago
Erick Friis 06660bc78c
core[patch]: handle some optional cases in tools (#16954)
The primary problem in pydantic still exists, where `Optional[str]` gets
turned into `string` in the jsonschema `.schema()`.

Also fixes the `SchemaSchema` naming issue.

---------

Co-authored-by: William Fu-Hinthorn <13333726+hinthornw@users.noreply.github.com>
6 months ago
Mohammad Mohtashim f8943e8739
core[patch]: Add doc-string to RunnableEach (#16892)
Add doc-string to Runnable Each
---------

Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
Co-authored-by: Eugene Yurtsev <eugene@langchain.dev>
6 months ago
Bagatur 2a510c71a0
core[patch]: doc init positional args (#16854) 6 months ago
Bagatur d80c612c92
core[patch]: Message content as positional arg (#16921) 6 months ago
Bagatur c29e9b6412
core[patch]: fix chat prompt partial messages placeholder var (#16918) 6 months ago
hmasdev cc17334473
core[minor]: add validation error handler to `BaseTool` (#14007)
- **Description:** add a ValidationError handler as a field of
[`BaseTool`](https://github.com/langchain-ai/langchain/blob/master/libs/core/langchain_core/tools.py#L101)
and add unit tests for the code change.
- **Issue:** #12721 #13662
- **Dependencies:** None
- **Tag maintainer:** 
- **Twitter handle:** @hmdev3
- **NOTE:**
  - I'm wondering if a documentation update is required.

---------

Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
6 months ago
William FH bdacfafa05
core[patch]: Remove deep copying of run prior to submitting it to LangChain Tracing (#16904) 6 months ago
William FH e02efd513f
core[patch]: Hide aliases when serializing (#16888)
Currently, if you dump an object initialized with an alias, we'll still
dump the secret values since they're retained in the kwargs
6 months ago
William FH 131c043864
Fix loading of ImagePromptTemplate (#16868)
We didn't override the namespace of the ImagePromptTemplate, so it is
listed as being in langchain.schema.

This updates the mapping to let the loader deserialize.

Alternatively, we could make a slight breaking change and update the
namespace of the ImagePromptTemplate, since we haven't broadly
publicized/documented it yet.
6 months ago
Eugene Yurtsev a265878d71
langchain_openai[patch]: Invoke callback prior to yielding token (#16909)
All models should be calling the callback for new token prior to
yielding the token.

Not doing this can cause callbacks for downstream steps to be called
prior to the callback for the new token; causing issues in
astream_events APIs and other things that depend in callback ordering
being correct.

We need to make this change for all chat models.
6 months ago
Erick Friis b1a847366c
community: revert SQL Stores (#16912)
This reverts commit cfc225ecb3.


https://github.com/langchain-ai/langchain/pull/15909#issuecomment-1922418097

These will have existed in langchain-community 0.0.16 and 0.0.17.
6 months ago
Leonid Ganeline c2ca6612fe
refactor `langchain.prompts.example_selector` (#15369)
The `langchain.prompts.example_selector` namespace [still holds several
artifacts](https://api.python.langchain.com/en/latest/langchain_api_reference.html#module-langchain.prompts)
that belong to `community`. If they moved to
`langchain_community.example_selectors`, the `langchain.prompts`
namespace would be effectively removed, which is great.
- moved a class and a function to `langchain_community`

Note:
- Previously, the `langchain.prompts.example_selector` artifacts were
moved into `langchain_core.example_selectors`. See the flattened
namespace (`.prompts` was removed)!
Similar flattening is used here, with
`langchain_community.example_selectors`.

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
6 months ago
Qihui Xie c5b01ac621
community[patch]: support LIKE comparator (full text match) in Qdrant (#12769)
**Description:** 
Support [Qdrant full text match
filtering](https://qdrant.tech/documentation/concepts/filtering/#full-text-match)
by adding Comparator.LIKE to QdrantTranslator.
6 months ago
Christophe Bornet 9d458d089a
community: Factorize AstraDB components constructors (#16779)
* Adds `AstraDBEnvironment` class and use it in `AstraDBLoader`,
`AstraDBCache`, `AstraDBSemanticCache`, `AstraDBBaseStore` and
`AstraDBChatMessageHistory`
* Create an `AsyncAstraDB` if we only have an `AstraDB` and vice-versa
so:
  * we always have an instance of `AstraDB`
* we always have an instance of `AsyncAstraDB` for recent versions of
astrapy
* Create collection if not exists in `AstraDBBaseStore`
* Some typing improvements

Note: the `AstraDB` `VectorStore` is not using `AstraDBEnvironment` at the
moment. This will be done after the `langchain-astradb` package is out.
6 months ago
Christophe Bornet 78a1af4848
langchain[patch]: Add async methods to MultiVectorRetriever (#16878)
Adds async support to multi vector retriever
6 months ago
Bagatur 7d03d8f586
docs: fix docstring examples (#16889) 6 months ago
Bagatur c2d09fb151
infra: bump exp min test reqs (#16884) 6 months ago
Bagatur 65ba5c220b
experimental[patch]: Release 0.0.50 (#16883) 6 months ago
Bagatur 9e7d9f9390
infra: bump langchain min test reqs (#16882) 6 months ago
Bagatur db442c635b
langchain[patch]: Release 0.1.5 (#16881) 6 months ago
Bagatur 2b4abed25c
commmunity[patch]: Release 0.0.17 (#16871) 6 months ago
Bagatur bb73251146
core[patch]: Release 0.1.18 (#16870) 6 months ago
Christophe Bornet a0ec045495
Add async methods to BaseStore (#16669)
- **Description:**

The BaseStore methods are currently blocking. Some implementations
(AstraDBStore, RedisStore) would benefit from having async methods.
Also once we have async methods for BaseStore, we can implement the
async `aembed_documents` in CacheBackedEmbeddings to cache the
embeddings asynchronously.

* adds async methods amget, amset, amdelete and ayield_keys to
BaseStore (see the sketch after this list)
  * implements the async methods for InMemoryStore
  * adds tests for InMemoryStore async methods
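
A hedged sketch of the new methods using `InMemoryStore` (import path
assumed; the store is generic over its value type):

```python
import asyncio

from langchain.storage import InMemoryStore


async def main() -> None:
    store = InMemoryStore()
    await store.amset([("k1", 1), ("k2", 2)])          # bulk async set
    print(await store.amget(["k1", "k2"]))              # [1, 2]
    print([key async for key in store.ayield_keys()])   # ['k1', 'k2']
    await store.amdelete(["k1"])                         # async delete


asyncio.run(main())
```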

- **Twitter handle:** cbornet_
6 months ago
Erick Friis 17e886388b
nomic: init pkg (#16853)
Co-authored-by: Lance Martin <lance@langchain.dev>
6 months ago
Eugene Yurtsev 2e5949b6f8
core(minor): Add bulk add messages to BaseChatMessageHistory interface (#15709)
* Add a bulk add_messages method to the interface (see the sketch after this list).
* Update documentation for add_ai_message and add_human_message to
denote them as being marked for deprecation. We should stop using them,
as they encourage more incorrect (inefficient) ways of doing things.
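
A hedged sketch of the bulk method on the in-memory implementation
(import paths assumed):

```python
from langchain.memory import ChatMessageHistory
from langchain_core.messages import AIMessage, HumanMessage

history = ChatMessageHistory()
# One call instead of repeated add_user_message / add_ai_message calls.
history.add_messages([HumanMessage(content="hi"), AIMessage(content="hello!")])
print(history.messages)
```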
6 months ago
Christophe Bornet af8c5c185b
langchain[minor],community[minor]: Add async methods in BaseLoader (#16634)
Adds:
* methods `aload()` and `alazy_load()` to interface `BaseLoader`
* implementation for class `MergedDataLoader `
* support for class `BaseLoader` in async function `aindex()` with unit
tests

Note: this is compatible with existing `aload()` methods that some
loaders already had.
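
A hedged sketch of the new interface on a simple loader (the file path is
just an illustration):

```python
import asyncio

from langchain_community.document_loaders import TextLoader


async def main() -> None:
    loader = TextLoader("README.md")
    docs = await loader.aload()            # async counterpart of load()
    async for doc in loader.alazy_load():  # lazily yield documents
        print(doc.metadata)


asyncio.run(main())
```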

**Twitter handle:** @cbornet_

---------

Co-authored-by: Eugene Yurtsev <eugene@langchain.dev>
6 months ago
Erick Friis c37ca45825
nvidia-trt: remove tritonclient all extra dep (#16749) 6 months ago
Erick Friis bb3b6bde33
openai[minor]: change to secretstr (#16803) 6 months ago
Raphael bf9068516e
community[minor]: add the ability to load existing transcripts from AssemblyAI by their id. (#16051)
- **Description:** the existing AssemblyAI API allows passing a path or
a URL to transcribe an audio file and turn it into LangChain Documents;
this PR makes it possible to fetch existing transcripts by their transcript
ID and turn them into Documents.
  - **Issue:** not related to an existing issue
  - **Dependencies:** requests

---------

Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
6 months ago
Bagatur daf820c77b
community[patch]: undo create_sql_agent breaking (#16797) 6 months ago
Eugene Yurtsev ef2bd745cb
docs: Update doc-string in base callback managers (#15885)
Update doc-strings with a comment about on_llm_start vs.
on_chat_model_start.
6 months ago
William FH 881dc28d2c
Fix Dep Recommendation (#16793)
Tools are different from functions
6 months ago
Bagatur b0347f3e2b
docs: add csv use case (#16756) 6 months ago
Alexander Conway 4acd2654a3
Report which file was errored on in DirectoryLoader (#16790)
The current implementation leaves it up to the particular file loader
implementation to report the file on which an error was encountered - in
my case pdfminer was simply saying it could not parse a file as a PDF,
but I didn't know which of my hundreds of files it was failing on.

No reason not to log the particular item on which an error was
encountered, and it should be an immense help when debugging.

6 months ago
Erick Friis a372b23675
robocorp: release 0.0.3 (#16789) 6 months ago
Rihards Gravis 442fa52b30
[partners]: langchain-robocorp ease dependency version (#16765) 6 months ago
Bob Lin 546b757303
community: Add ChatGLM3 (#15265)
Add [ChatGLM3](https://github.com/THUDM/ChatGLM3) and updated
[chatglm.ipynb](https://python.langchain.com/docs/integrations/llms/chatglm)

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
6 months ago
Marina Pliusnina a1ce7ab672
adding parameter for changing the language in SpacyEmbeddings (#15743)
Description: Added a parameter that makes it possible to change the language
model in SpacyEmbeddings. The default value is still the same,
"en_core_web_sm", so code that previously did not specify this parameter is
unaffected, but the model is no longer hard-coded and is easy to change in
case you want to use other languages or models. A hedged sketch follows this
description.
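
A hedged sketch (the parameter name `model_name` is an assumption, not taken
from this PR; the spaCy model must be downloaded first, e.g.
`python -m spacy download ca_core_news_sm`):

```python
from langchain_community.embeddings import SpacyEmbeddings

# Use a Catalan spaCy pipeline instead of the default "en_core_web_sm".
embeddings = SpacyEmbeddings(model_name="ca_core_news_sm")
vectors = embeddings.embed_documents(["Bon dia", "Com estàs?"])
```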

Issue: At the Barcelona Supercomputing Center, in the Aina project
(https://github.com/projecte-aina), a project for Catalan Language
Models and Resources, we would like to use LangChain for one of our
current projects, and we would like to note that LangChain, while
being a very powerful and useful open-source tool, is pretty much
focused on the English language. We would like to contribute to making it a
bit more adaptable for use with other languages.

Dependencies: This change requires the Spacy library and a language
model, specified in the model parameter.

Tag maintainer: @dev2049

Twitter handle: @projecte_aina

---------

Co-authored-by: Marina Pliusnina <marina.pliusnina@bsc.es>
Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
6 months ago
Christophe Bornet 744070ee85
Add async methods for the AstraDB VectorStore (#16391)
- **Description**: fully async versions are available for astrapy 0.7+.
For older astrapy versions or if the user provides a sync client without
an async one, the async methods will call the sync ones wrapped in
`run_in_executor`
  - **Twitter handle:** cbornet_
6 months ago
baichuan-assistant f8f2649f12
community: Add Baichuan LLM to community (#16724)
- **Description:** Add Baichuan LLM to integration/llm, also updated
related docs.

Co-authored-by: BaiChuanHelper <wintergyc@WinterGYCs-MacBook-Pro.local>
6 months ago
thiswillbeyourgithub 1d082359ee
community: add support for callable filters in FAISS (#16190)
- **Description:**
Filtering in a FAISS vectorstore is very inflexible and doesn't allow
many use cases. Supporting callables like this enables a lot:
regular expressions, conditions on multiple keys, etc. (see the sketch after
this list). **Note:** I had to manually alter a test; I don't understand
whether it was faulty to begin with or whether something funky is going on.
- **Issue:** None
- **Dependencies:** None
- **Twitter handle:** None
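
A hedged sketch of a callable filter (assumes `faiss-cpu` is installed and
that the callable receives each document's metadata dict):

```python
from langchain_community.embeddings.fake import FakeEmbeddings
from langchain_community.vectorstores import FAISS

db = FAISS.from_texts(
    ["alpha", "beta", "gamma"],
    FakeEmbeddings(size=8),
    metadatas=[{"source": "a.md"}, {"source": "b.txt"}, {"source": "c.md"}],
)
# Keep only documents whose source ends in ".md".
docs = db.similarity_search(
    "alpha",
    filter=lambda metadata: metadata["source"].endswith(".md"),
)
```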

Signed-off-by: thiswillbeyourgithub <26625900+thiswillbeyourgithub@users.noreply.github.com>
6 months ago
Yudhajit Sinha 1703fe2361
core[patch]: preserve inspect.iscoroutinefunction with @beta decorator (#16440)
Adjusted deprecate decorator to make sure decorated async functions are
still recognized as "coroutinefunction" by inspect

Addresses #16402

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
6 months ago
Killinsun - Ryota Takeuchi 52f4ad8216
community: Add new fields in metadata for qdrant vector store (#16608)
## Description

The PR returns the ID and collection name from the qdrant client in the
metadata field of the `Document` class.

## Issue

The motivation is almost the same as
[11592](https://github.com/langchain-ai/langchain/issues/11592).

Returning the ID is useful for updating existing records in a vector store,
but we cannot know the IDs when using some retrievers.

In order to avoid conflicts and breaking changes, the new fields in
metadata have a `_` prefix.

## Dependencies

N/A

## Twitter handle

@kill_in_sun

6 months ago
hulitaitai 32cad38ec6
<langchain_community\llms\chatglm.py>: <Correcting "history"> (#16729)
Use the real "history" provided by the original program instead of
putting "None" in the history.

- **Description:** I changed one line in the code to make it return the
"history" of the chat model.
- **Issue:** At the moment it returns only the answers of the chat
model. However, the chat model itself provides a more complete history
that includes the user's questions.
  - **Dependencies:** no dependencies required for this change
6 months ago
Bassem Yacoube 85e93e05ed
community[minor]: Update OctoAI LLM, Embedding and documentation (#16710)
This PR includes updates for OctoAI integrations:
- The LLM class was updated to fix a bug that occurs with multiple
sequential calls
- The Embedding class was updated to support the new GTE-Large endpoint
released on OctoAI lately
- The documentation Jupyter notebook was updated to reflect using the
new LLM SDK
Thank you!
6 months ago
Shay Ben Elazar 84ebfb5b9d
openai[patch]: Added annotations support to azure openai (#13704)
- **Description:** Added Azure OpenAI Annotations (content filtering
results) to ChatResult

  - **Issue:** 13090

  - **Twitter handle:** ElazarShay

Co-authored-by: Bagatur <baskaryan@gmail.com>
6 months ago
Volodymyr Machula 32c5be8b73
community[minor]: Connery Tool and Toolkit (#14506)
## Summary

This PR implements the "Connery Action Tool" and "Connery Toolkit".
Using them, you can integrate Connery actions into your LangChain agents
and chains.

Connery is an open-source plugin infrastructure for AI.

With Connery, you can easily create a custom plugin with a set of
actions and seamlessly integrate them into your LangChain agents and
chains. Connery will handle the rest: runtime, authorization, secret
management, access management, audit logs, and other vital features.
Additionally, Connery and our community offer a wide range of
ready-to-use open-source plugins for your convenience.

Learn more about Connery:

- GitHub: https://github.com/connery-io/connery-platform
- Documentation: https://docs.connery.io
- Twitter: https://twitter.com/connery_io

## TODOs

- [x] API wrapper
   - [x] Integration tests
- [x] Connery Action Tool
   - [x] Docs
   - [x] Example
   - [x] Integration tests
- [x] Connery Toolkit
  - [x] Docs
  - [x] Example
- [x] Formatting (`make format`)
- [x] Linting (`make lint`)
- [x] Testing (`make test`)
6 months ago
Harrison Chase 8457c31c04
community[patch]: activeloop ai tql deprecation (#14634)
Co-authored-by: AdkSarsen <adilkhan@activeloop.ai>
6 months ago
Neli Hateva c95facc293
langchain[minor], community[minor]: Implement Ontotext GraphDB QA Chain (#16019)
- **Description:** Implement Ontotext GraphDB QA Chain
  - **Issue:** N/A
  - **Dependencies:** N/A
  - **Twitter handle:** @OntotextGraphDB
6 months ago
chyroc a08f9a7ff9
langchain[patch]: support OpenAIAssistantRunnable async (#15302)
fix https://github.com/langchain-ai/langchain/issues/15299

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
6 months ago
Elliot 39eb00d304
community[patch]: Adapt more parameters related to MemorySearchPayload for the search method of ZepChatMessageHistory (#15441)
- **Description:** To adapt more parameters related to
MemorySearchPayload for the search method of ZepChatMessageHistory,
  - **Issue:** None,
  - **Dependencies:** None,
  - **Twitter handle:** None
6 months ago
Jael Gu a1aa3a657c
community[patch]: Milvus supports add & delete texts by ids (#16256)
# Description

To support [langchain
indexing](https://python.langchain.com/docs/modules/data_connection/indexing)
as requested by users, vectorstore Milvus needs to support:
- document addition by id (`add_documents` method with `ids` argument)
- delete by id (`delete` method with `ids` argument)

Example usage:

```python
from langchain.indexes import SQLRecordManager, index
from langchain.schema import Document
from langchain_community.vectorstores import Milvus
from langchain_openai import OpenAIEmbeddings

collection_name = "test_index"
embedding = OpenAIEmbeddings()
vectorstore = Milvus(embedding_function=embedding, collection_name=collection_name)

namespace = f"milvus/{collection_name}"
record_manager = SQLRecordManager(
    namespace, db_url="sqlite:///record_manager_cache.sql"
)
record_manager.create_schema()

doc1 = Document(page_content="kitty", metadata={"source": "kitty.txt"})
doc2 = Document(page_content="doggy", metadata={"source": "doggy.txt"})

index(
    [doc1, doc1, doc2],
    record_manager,
    vectorstore,
    cleanup="incremental",  # None, "incremental", or "full"
    source_id_key="source",
)
```

# Fix issues

Fix https://github.com/milvus-io/milvus/issues/30112

---------

Signed-off-by: Jael Gu <mengjia.gu@zilliz.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
6 months ago
Michard Hugo e9d3527b79
community[patch]: Add missing async similarity_distance_threshold handling in RedisVectorStoreRetriever (#16359)
Add missing async similarity_distance_threshold handling in
RedisVectorStoreRetriever

- **Description:** added method `_aget_relevant_documents` to
`RedisVectorStoreRetriever` that overrides parent method to add support
of `similarity_distance_threshold` in async mode (as for sync mode)
  - **Issue:** #16099
  - **Dependencies:** N/A
  - **Twitter handle:** N/A
6 months ago
Bagatur 7237dc67d4
core[patch]: Release 0.1.17 (#16737) 6 months ago
Anthony Bernabeu 2db79ab111
community[patch]: Implement TTL for DynamoDBChatMessageHistory (#15478)
- **Description:** Implement TTL for DynamoDBChatMessageHistory, 
  - **Issue:** see #15477,
  - **Dependencies:** N/A,

---------

Co-authored-by: Piyush Jain <piyushjain@duck.com>
6 months ago
Massimiliano Pronesti 1bc8d9a943
experimental[patch]: missing resolution strategy in anonymization (#16653)
- **Description:** Presidio-based anonymizers are not working because
`_remove_conflicts_and_get_text_manipulation_data` was being called
without a conflict resolution strategy. This PR fixes this issue. In
addition, it removes some mutable default arguments (antipattern).
 
To reproduce the issue, just run the very first cell of this
[notebook](https://python.langchain.com/docs/guides/privacy/2/) from
langchain's documentation.

6 months ago
taimo d3d9244fee
langchain-community: fix unicode escaping issue with SlackToolkit (#16616)
- **Description:** fix unicode escaping issue with SlackToolkit
  - **Issue:**  #16610
6 months ago
Benito Geordie f3fdc5c5da
community: Added integrations for ThirdAI's NeuralDB with Retriever and VectorStore frameworks (#15280)
**Description:** Adds ThirdAI NeuralDB retriever and vectorstore
integration. NeuralDB is a CPU-friendly and fine-tunable text retrieval
engine.
6 months ago
Pashva Mehta 22d90800c8
community: Fixed schema discrepancy in from_texts function for weaviate vectorstore (#16693)
* Description: Fixed schema discrepancy in **from_texts** function for
weaviate vectorstore which created a redundant property "key" inside a
class.
* Issue: Fixed: https://github.com/langchain-ai/langchain/issues/16692
* Twitter handle: @pashvamehta1
6 months ago
ccurme ec0ae23645
core: expand docstring for RunnableGenerator (#16672)
- **Description:** expand docstring for RunnableGenerator
  - **Issue:** https://github.com/langchain-ai/langchain/issues/16631
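
For context, a small hedged sketch of what a `RunnableGenerator` does: wrap a
generator function so it streams transformed chunks inside an LCEL chain.

```python
from typing import Iterator

from langchain_core.runnables import RunnableGenerator


def shout(chunks: Iterator[str]) -> Iterator[str]:
    # Transform each incoming chunk as it arrives.
    for chunk in chunks:
        yield chunk.upper()


runnable = RunnableGenerator(shout)
print(list(runnable.stream("hello")))  # ['HELLO']
```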
6 months ago
Daniel Erenrich 0600998f38
community: Wikidata tool support (#16691)
- **Description:** Adds Wikidata support to langchain. Can read out
documents from Wikidata.
  - **Issue:** N/A
- **Dependencies:** Adds implicit dependencies for
`wikibase-rest-api-client` (for turning items into docs) and
`mediawikiapi` (for hitting the search endpoint)
  - **Twitter handle:** @derenrich

You can see an example of this tool used in a chain
[here](https://nbviewer.org/urls/d.erenrich.net/upload/Wikidata_Langchain.ipynb)
or
[here](https://nbviewer.org/urls/d.erenrich.net/upload/Wikidata_Lars_Kai_Hansen.ipynb)

6 months ago
Tze Min 6ef718c5f4
Core: fix Anthropic json issue in streaming (#16670)
**Description:** fix ChatAnthropic json issue in streaming 
**Issue:** https://github.com/langchain-ai/langchain/issues/16423
**Dependencies:** n/a

---------

Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
6 months ago
Christophe Bornet 2e3af04080
Use Postponed Evaluation of Annotations in Astra and Cassandra doc loaders (#16694)
Minor/cosmetic change
6 months ago
Erick Friis 88e3129587
robocorp: release 0.0.2 (#16706) 6 months ago
Christophe Bornet 36e432672a
community[minor]: Add async methods to AstraDBLoader (#16652) 6 months ago
William FH 38425c99d2
core[minor]: Image prompt template (#14263)
Builds on Bagatur's (#13227). See unit test for example usage (below)

```python
def test_chat_tmpl_from_messages_multipart_image() -> None:
    base64_image = "abcd123"
    other_base64_image = "abcd123"
    template = ChatPromptTemplate.from_messages(
        [
            ("system", "You are an AI assistant named {name}."),
            (
                "human",
                [
                    {"type": "text", "text": "What's in this image?"},
                    # OAI supports all these structures today
                    {
                        "type": "image_url",
                        "image_url": "data:image/jpeg;base64,{my_image}",
                    },
                    {
                        "type": "image_url",
                        "image_url": {"url": "data:image/jpeg;base64,{my_image}"},
                    },
                    {"type": "image_url", "image_url": "{my_other_image}"},
                    {
                        "type": "image_url",
                        "image_url": {"url": "{my_other_image}", "detail": "medium"},
                    },
                    {
                        "type": "image_url",
                        "image_url": {"url": "https://www.langchain.com/image.png"},
                    },
                    {
                        "type": "image_url",
                        "image_url": {"url": "data:image/jpeg;base64,foobar"},
                    },
                ],
            ),
        ]
    )
    messages = template.format_messages(
        name="R2D2", my_image=base64_image, my_other_image=other_base64_image
    )
    expected = [
        SystemMessage(content="You are an AI assistant named R2D2."),
        HumanMessage(
            content=[
                {"type": "text", "text": "What's in this image?"},
                {
                    "type": "image_url",
                    "image_url": {"url": f"data:image/jpeg;base64,{base64_image}"},
                },
                {
                    "type": "image_url",
                    "image_url": {
                        "url": f"data:image/jpeg;base64,{other_base64_image}"
                    },
                },
                {
                    "type": "image_url",
                    "image_url": {"url": f"{other_base64_image}"},
                },
                {
                    "type": "image_url",
                    "image_url": {
                        "url": f"{other_base64_image}",
                        "detail": "medium",
                    },
                },
                {
                    "type": "image_url",
                    "image_url": {"url": "https://www.langchain.com/image.png"},
                },
                {
                    "type": "image_url",
                    "image_url": {"url": "data:image/jpeg;base64,foobar"},
                },
            ]
        ),
    ]
    assert messages == expected
```

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
Co-authored-by: Brace Sproul <braceasproul@gmail.com>
6 months ago
Rashedul Hasan Rijul 481493dbce
community[patch]: apply embedding functions during query if defined (#16646)
**Description:** This update ensures that the user-defined embedding
function specified during vector store creation is applied during
queries. Previously, even if a custom embedding function was defined at
the time of store creation, Bagel DB would default to using the standard
embedding function during query execution. This pull request addresses
this issue by consistently using the user-defined embedding function for
queries if one has been specified earlier.
6 months ago
Serena Ruan f01fb47597
community[patch]: MLflowCallbackHandler -- Move textstat and spacy as optional dependency (#16657)
Signed-off-by: Serena Ruan <serena.rxy@gmail.com>
6 months ago
Zhuoyun(John) Xu 508bde7f40
community[patch]: Ollama - Pass headers to post request in async method (#16660)
# Description
A previous PR (https://github.com/langchain-ai/langchain/pull/15881)
added an option to pass headers to the Ollama endpoint, but the headers
were not passed in the async method.
6 months ago
João Carlos Ferra de Almeida 3e87b67a3c
community[patch]: Add Cookie Support to Fetch Method (#16673)
- **Description:** This change allows the `_fetch` method in the
`WebBaseLoader` class to utilize cookies from an existing
`requests.Session`. It ensures that when the `fetch` method is used, any
cookies in the provided session are included in the request. This
enhancement maintains compatibility with existing functionality while
extending the utility of the `fetch` method for scenarios where cookie
persistence is necessary.
- **Issue:** Not applicable (new feature),
- **Dependencies:** Requires `aiohttp` and `requests` libraries (no new
dependencies introduced),
- **Twitter handle:** N/A

Co-authored-by: Joao Almeida <joao.almeida@mercedes-benz.io>
6 months ago
Harrison Chase 27665e3546
[community] fix anthropic streaming (#16682) 6 months ago
Christophe Bornet 4915c3cd86
[Fix] Fix Cassandra Document loader default page content mapper (#16273)
We can't use `json.dumps` by default as many types returned by the
cassandra driver are not serializable. It's safer to use `str` and let
users define their own custom `page_content_mapper` if needed.
6 months ago
Nuno Campos e86fd946c8
In stream_event and stream_log handle closed streams (#16661)
If, e.g., the stream iterator is interrupted, then adding more events to the
send_stream will raise an exception that we should catch (and handle
where appropriate).

6 months ago
Nuno Campos 52ccae3fb1
Accept message-like things in Chat models, LLMs and MessagesPlaceholder (#16418) 6 months ago
Pasha 4e189cd89a
community[patch]: youtube loader transcript format (#16625)
- **Description**: YoutubeLoader right now returns one document that
contains the entire transcript. I think it would be useful to add an
option to return multiple documents, where each document would contain
one line of transcript with the start time and duration in the metadata.
For example,
[AssemblyAIAudioTranscriptLoader](https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/document_loaders/assemblyai.py)
is implemented in a similar way; it allows you to choose the
format the document loader uses.
6 months ago
yin1991 a936472512
docs: Update documentation to use 'model_id' rather than 'model_name' to match actual API (#16615)
- **Description:** Replace 'model_name' with 'model_id' for accuracy 
- **Issue:**
[link-to-issue](https://github.com/langchain-ai/langchain/issues/16577)
  - **Dependencies:** 
  - **Twitter handle:**
6 months ago
Micah Parker 6543e585a5
community[patch]: Added support for Ollama's num_predict option in ChatOllama (#16633)
Just a simple addition to the default options payload for an Ollama
generate call to support a max-new-tokens (`num_predict`) parameter.

Should fix issue: https://github.com/langchain-ai/langchain/issues/14715
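For illustration, a small sketch of capping generation length with this option (assuming `ChatOllama` forwards `num_predict` in its options payload, as described):

```python
from langchain_community.chat_models import ChatOllama

chat = ChatOllama(
    model="llama2",
    num_predict=128,  # roughly the equivalent of a max_new_tokens limit
)
print(chat.invoke("Summarize LangChain in one sentence.").content)
```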
6 months ago
baichuan-assistant 70ff54eace
community[minor]: Add Baichuan Text Embedding Model and Baichuan Inc introduction (#16568)
- **Description:** Adding Baichuan Text Embedding Model and Baichuan Inc
introduction.

Baichuan Text Embedding ranks #1 on the C-MTEB leaderboard:
https://huggingface.co/spaces/mteb/leaderboard
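A hedged usage sketch, assuming the new class is exposed as `BaichuanTextEmbeddings` and accepts the API key as a keyword argument (or via an environment variable):

```python
from langchain_community.embeddings import BaichuanTextEmbeddings

embeddings = BaichuanTextEmbeddings(baichuan_api_key="<api-key>")

vector = embeddings.embed_query("你好，世界")
print(len(vector))
```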

Co-authored-by: BaiChuanHelper <wintergyc@WinterGYCs-MacBook-Pro.local>
6 months ago
Bagatur 5b5115c408
google-vertexai[patch]: streaming bug (#16603)
Fixes errors seen here
https://github.com/langchain-ai/langchain/actions/runs/7661680517/job/20881556592#step:9:229
6 months ago
ccurme a989f82027
core: expand docstring for RunnableParallel (#16600)
- **Description:** expand docstring for RunnableParallel
  - **Issue:** https://github.com/langchain-ai/langchain/issues/16462

Feel free to modify this or let me know how it can be improved!
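For context, a small example of the class whose docstring is expanded here: `RunnableParallel` runs several runnables on the same input and returns a dict of their outputs.

```python
from langchain_core.runnables import RunnableLambda, RunnableParallel

parallel = RunnableParallel(
    doubled=RunnableLambda(lambda x: x * 2),
    squared=RunnableLambda(lambda x: x**2),
)

print(parallel.invoke(3))  # {'doubled': 6, 'squared': 9}
```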
6 months ago
Ghani e30c6662df
Langchain-community : EdenAI chat integration. (#16377)
- **Description:** This PR adds [EdenAI](https://edenai.co/) for the
chat model (already available in LLM & Embeddings). It supports all
[ChatModel] functionality: generate, async generate, stream, astream and
batch. A detailed notebook was added.

  - **Dependencies**: No dependencies are added as we call a rest API.

---------

Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
6 months ago
Antonio Lanza 08d3fd7f2e
langchain[patch]: inconsistent results with `RecursiveCharacterTextSplitter`'s `add_start_index=True` (#16583)
This PR fixes issue #16579
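For reference, the option whose offsets this fix makes consistent: with `add_start_index=True`, each chunk's metadata records where the chunk starts in the original text.

```python
from langchain.text_splitter import RecursiveCharacterTextSplitter

splitter = RecursiveCharacterTextSplitter(
    chunk_size=20,
    chunk_overlap=0,
    add_start_index=True,  # start offsets must line up with the source text
)
docs = splitter.create_documents(["The quick brown fox jumps over the lazy dog."])
for doc in docs:
    print(doc.metadata["start_index"], repr(doc.page_content))
```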
6 months ago
Eugene Yurtsev 42db96477f
docs: Update in code documentation for runnable with message history (#16585)
Update the in code documentation for Runnable With Message History
6 months ago
Jatin Chawda a79345f199
community[patch]: Fixed tool names snake_case (#16397)
#16396
Fixed
1. golden_query
2. google_lens
3. memorize
4. merriam_webster
5. open_weather_map
6. pub_med
7. stack_exchange
8. generate_image
9. wikipedia
6 months ago
Bagatur bcc71d1a57
openai[patch]: Release 0.0.5 (#16598) 6 months ago
Bagatur 68f7468754
google-vertexai[patch]: Release 0.0.3 (#16597) 6 months ago
Bagatur 61e876aad8
openai[patch]: Explicitly support embedding dimensions (#16596) 6 months ago
Bagatur 5df8ab574e
infra: move indexing documentation test (#16595) 6 months ago
Bagatur f3d61a6e47
langchain[patch]: Release 0.1.4 (#16592) 6 months ago
Bagatur 61b200947f
community[patch]: Release 0.0.16 (#16591) 6 months ago
Bagatur 75ad0bba2d
openai[patch]: Release 0.0.4 (#16590) 6 months ago
Bagatur 1e3ce338ca
core[patch]: Release 0.1.16 (#16589) 6 months ago
Bagatur 6c89507988
docs: add rag citations page (#16549) 6 months ago
Bagatur 31790d15ec
openai[patch]: accept function_call dict in bind_functions (#16483)
Confusing that you can't pass in a dict
6 months ago
Bagatur ef42d9d559
core[patch], community[patch], openai[patch]: consolidate openai tool… (#16485)
… converters

One way to convert anything to an OAI function:
convert_to_openai_function
One way to convert anything to an OAI tool: convert_to_openai_tool
Corresponding bind functions on OAI models: bind_functions, bind_tools
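A short sketch of the consolidated helpers (a plain Python function is one of the accepted inputs; Pydantic models and LangChain tools work the same way):

```python
from langchain_core.utils.function_calling import (
    convert_to_openai_function,
    convert_to_openai_tool,
)

def get_weather(city: str) -> str:
    """Return the current weather for a city."""
    return f"Sunny in {city}"

print(convert_to_openai_function(get_weather))  # {"name": "get_weather", ...}
print(convert_to_openai_tool(get_weather))      # {"type": "function", "function": {...}}
```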
6 months ago
Brian Burgin 148347e858
community[minor]: Add LiteLLM Router Integration (#15588)
community:

  - **Description:**
- Add new ChatLiteLLMRouter class that allows a client to use a LiteLLM
Router as a LangChain chat model.
- Note: The existing ChatLiteLLM integration did not cover the LiteLLM
Router class.
    - Add tests and Jupyter notebook.
  - **Issue:** None
  - **Dependencies:** Relies on existing ChatLiteLLM integration
  - **Twitter handle:** @bburgin_0

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
6 months ago
JongRok BAEK 3b8eba32f9
anthropic[patch]: Fix message type lookup in Anthropic Partners (#16563)
- **Description:** 

The role mapping for user and assistant messages in Anthropic should be
'ai -> assistant', but it was reversed to 'assistant -> ai'.
Below is the resulting error:
```python
anthropic.BadRequestError: Error code: 400 - {'type': 'error', 'error': {'type': 'invalid_request_error', 'message': 'messages: Unexpected role "ai". Allowed roles are "user" or "assistant"'}}
```

[anthropic](7177f3a71f/src/anthropic/types/beta/message_param.py (L13))

  - **Issue:** : #16561
  -  **Dependencies:** : None
   - **Twitter handle:** : None
6 months ago
Dmitry Tyumentsev e86e66bad7
community[patch]: YandexGPT models - add sleep_interval (#16566)
Added sleep between requests to prevent errors associated with
simultaneous requests.
6 months ago
Bagatur e510cfaa23
core[patch]: passthrough BaseRetriever.invoke(**kwargs) (#16551)
Fix for #16547
6 months ago
Anders Åhsman 355ef2a4a6
langchain[patch]: Fix doc-string grammar (#16543)
- **Description:** Small grammar fix in docstring for class
`BaseCombineDocumentsChain`.
6 months ago
Aditya 9dd7cbb447
google-genai: added logic for method get_num_tokens() (#16205)
- **Description:** added logic for the `get_num_tokens()` method for
ChatGoogleGenerativeAI and GoogleGenerativeAI (see the sketch below),
  - **Issue:** : https://github.com/langchain-ai/langchain/issues/16204,
  - **Dependencies:** : None,
  - **Twitter handle:** @Aditya_Rane
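A hedged sketch of the new method in use (model name and key handling are illustrative):

```python
from langchain_google_genai import ChatGoogleGenerativeAI

llm = ChatGoogleGenerativeAI(model="gemini-pro", google_api_key="<api-key>")
print(llm.get_num_tokens("How many tokens is this sentence?"))
```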

---------

Co-authored-by: adityarane@google.com <adityarane@google.com>
Co-authored-by: Leonid Kuligin <lkuligin@yandex.ru>
6 months ago
James Braza 0785432e7b
langchain-google-vertexai: preserving grounding metadata (#16309)
Revival of https://github.com/langchain-ai/langchain/pull/14549 that
closes https://github.com/langchain-ai/langchain/issues/14548.
6 months ago
Erick Friis adc008407e
exa: init pkg (#16553) 6 months ago
Rave Harpaz c4e9c9ca29
community[minor]: Add OCI Generative AI integration (#16548)
- **Description:** Adding Oracle Cloud Infrastructure Generative AI
integration. Oracle Cloud Infrastructure (OCI) Generative AI is a fully
managed service that provides a set of state-of-the-art, customizable
large language models (LLMs) that cover a wide range of use cases, and
which is available through a single API. Using the OCI Generative AI
service you can access ready-to-use pretrained models, or create and
host your own fine-tuned custom models based on your own data on
dedicated AI clusters.
https://docs.oracle.com/en-us/iaas/Content/generative-ai/home.htm
  - **Issue:** None,
  - **Dependencies:** OCI Python SDK,

Linting and tests pass locally (`make format`, `make lint` and `make test`).

We provide unit tests. However, we cannot provide integration tests, due to
Oracle policies that prohibit public sharing of API keys.

---------

Co-authored-by: Arthur Cheng <arthur.cheng@oracle.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
6 months ago
Bagatur c173a69908
langchain[patch]: oai tools output parser nit (#16540)
allow positional init args
6 months ago
arnob-sengupta f9976b9630
core[patch]: consolidate conditional in BaseTool (#16530)
- **Description:** Refactor contradictory conditional to single line
  - **Issue:** #16528
6 months ago
Bagatur 5c2538b9f7
anthropic[patch]: allow pop by field name (#16544)
allow `ChatAnthropicMessages(model=...)`
6 months ago
Harel Gal a91181fe6d
community[minor]: add support for Guardrails for Amazon Bedrock (#15099)
Added support for optionally supplying 'Guardrails for Amazon Bedrock'
on both types of model invocations (batch/regular and streaming) and for
all models supported by the Amazon Bedrock service.

@baskaryan  @hwchase17

```python 
from typing import Any

import boto3

from langchain_community.llms import Bedrock
from langchain_core.callbacks import AsyncCallbackHandler

bedrock = boto3.client("bedrock-runtime")


class BedrockAsyncCallbackHandler(AsyncCallbackHandler):
    """Async callback handler that can be used to handle callbacks from langchain."""

    async def on_llm_error(
        self,
        error: BaseException,
        **kwargs: Any,
    ) -> Any:
        reason = kwargs.get("reason")
        if reason == "GUARDRAIL_INTERVENED":
            # kwargs contains additional trace information sent by the
            # 'Guardrails for Bedrock' service.
            print(f"""Guardrails: {kwargs}""")


llm = Bedrock(model_id="<model_id>", client=bedrock,
              model_kwargs={},
              guardrails={"id": "<guardrail_id>",
                          "version": "<guardrail_version>",
                          "trace": True},
              callbacks=[BedrockAsyncCallbackHandler()])


# streaming
llm = Bedrock(model_id="<model_id>", client=bedrock,
              model_kwargs={},
              streaming=True,
              guardrails={"id": "<guardrail_id>",
                          "version": "<guardrail_version>"})
```

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
6 months ago
Martin Kolb 04651f0248
community[minor]: VectorStore integration for SAP HANA Cloud Vector Engine (#16514)
- **Description:**
This PR adds a VectorStore integration for SAP HANA Cloud Vector Engine,
which is an upcoming feature in the SAP HANA Cloud database
(https://blogs.sap.com/2023/11/02/sap-hana-clouds-vector-engine-announcement/).

  - **Issue:** N/A
- **Dependencies:** [SAP HANA Python
Client](https://pypi.org/project/hdbcli/)
  - **Twitter handle:** @sapopensource

Implementation of the integration:
`libs/community/langchain_community/vectorstores/hanavector.py`

Unit tests:
`libs/community/tests/unit_tests/vectorstores/test_hanavector.py`

Integration tests:
`libs/community/tests/integration_tests/vectorstores/test_hanavector.py`

Example notebook:
`docs/docs/integrations/vectorstores/hanavector.ipynb`

Access credentials for execution of the integration tests can be
provided to the maintainers.

---------

Co-authored-by: sascha <sascha.stoll@sap.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
6 months ago
Leonid Kuligin 1113700b09
google-genai[patch]: better error message when location is not supported (#16535)
- **Description:** a better error message when location is not supported
6 months ago
Unai Garay Maestre fdbfa6b2c8
Adds progress bar to VertexAIEmbeddings (#14542)
- **Description:** Adds progress bar to VertexAIEmbeddings 
- **Issue:** related issue
https://github.com/langchain-ai/langchain/issues/13637

Signed-off-by: ugm2 <unaigaraymaestre@gmail.com>

---------

Signed-off-by: ugm2 <unaigaraymaestre@gmail.com>
6 months ago
James Braza 643fb3ab50
langchain-google-vertexai[patch]: more verbose mypy config (#16307)
Fleshing out the `mypy` config in `langchain-google-vertexai` to show
error codes and other warnings.

This PR also bumps `mypy` to above version 1's stable release
6 months ago
Jeremi Joslin 9e95699277
community[patch]: Fix error message when litellm is not installed (#16316)
The error message was mentioning the wrong package. I updated it to the
correct one.
6 months ago
bachr b3ed98dec0
community[patch]: avoid KeyError when language not in LANGUAGE_SEGMENTERS (#15212)
**Description:**

Handle unsupported languages in the same way as when none is provided.
 
**Issue:**

The following line will throw a KeyError if the language is not
supported.
```python
self.Segmenter = LANGUAGE_SEGMENTERS[language]
```
E.g. when using `Language.CPP` we would get `KeyError: <Language.CPP:
'cpp'>`

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
6 months ago
Nuno Campos 3f38e1a457
Remove double line (#16426)
6 months ago
chyroc 61da2ff24c
community[patch]: use SecretStr for yandex model secrets (#15463) 6 months ago
Alessio Serra d628a80a5d
community[patch]: added 'conversational' as a valid task for HuggingFace endpoint models (#15761)
- **Description:** added the conversational task to the HuggingFace endpoint
so that models designed for chatbot programming can be used (see the sketch below).
  - **Dependencies:** None
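A hedged sketch of the new task in use, assuming the endpoint class accepts a `task` argument as the other tasks do (endpoint URL and token are placeholders):

```python
from langchain_community.llms import HuggingFaceEndpoint

llm = HuggingFaceEndpoint(
    endpoint_url="https://<your-inference-endpoint>",
    task="conversational",  # the task this change marks as valid
    huggingfacehub_api_token="<hf-token>",
)
print(llm.invoke("Hello there!"))
```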

---------

Co-authored-by: Alessio Serra (ext.) <alessio.serra@partner.bmw.de>
Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
6 months ago
Karim Lalani 4c7755778d
community[patch]: SurrealDB fix for asyncio (#16092)
Code fix for asyncio
6 months ago
Raunak 476bf8b763
community[patch]: Load list of files using UnstructuredFileLoader (#16216)
- **Description:** Updated the `_get_elements()` function of the
`UnstructuredFileLoader` class to check whether the argument `self.file_path`
is a single file or a list of files. If it is a list, it iterates over the
file paths, calls the partition function for each one, and appends the results
to the elements list; otherwise it calls the partition function as before
(see the sketch below).
  
  - **Issue:** Fixed #15607,
  - **Dependencies:** NA
  - **Twitter handle:** NA
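A hedged sketch of loading several files at once after this change (file names are placeholders; the `unstructured` package must be installed):

```python
from langchain_community.document_loaders import UnstructuredFileLoader

# After this change, a list of paths is accepted in addition to a single path.
loader = UnstructuredFileLoader(["report_q1.pdf", "report_q2.pdf"], mode="elements")
docs = loader.load()
print(len(docs))
```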

Co-authored-by: H161961 <Raunak.Raunak@Honeywell.com>
6 months ago
Xudong Sun 019b6ebe8d
community[minor]: Add iFlyTek Spark LLM chat model support (#13389)
- **Description:** This PR enables LangChain to access iFlyTek's Spark LLM
via the chat_models wrapper.
  - **Dependencies:** websocket-client ^1.6.1
  - **Tag maintainer:** @baskaryan 

### SparkLLM chat model usage

Get SparkLLM's app_id, api_key and api_secret from [iFlyTek SparkLLM API
Console](https://console.xfyun.cn/services/bm3) (for more info, see
[iFlyTek SparkLLM Intro](https://xinghuo.xfyun.cn/sparkapi) ), then set
environment variables `IFLYTEK_SPARK_APP_ID`, `IFLYTEK_SPARK_API_KEY`
and `IFLYTEK_SPARK_API_SECRET` or pass parameters when using it like the
demo below:

```python3
from langchain.chat_models.sparkllm import ChatSparkLLM

client = ChatSparkLLM(
    spark_app_id="<app_id>",
    spark_api_key="<api_key>",
    spark_api_secret="<api_secret>"
)
```
6 months ago
Ali Zendegani 80fcc50c65
langchain[patch]: Minor Fix: Enable Passing custom_headers for Authentication in GraphQL Agent/Tool (#16413)
- **Description:** 

This PR aims to enhance the `langchain` library by enabling the support
for passing `custom_headers` in the `GraphQLAPIWrapper` usage within
`langchain/agents/load_tools.py`.

While the `GraphQLAPIWrapper` from the `langchain_community` module is
inherently capable of handling `custom_headers`, its current invocation
in `load_tools.py` does not facilitate this functionality.
This limitation restricts the use of the `graphql` tool with databases
or APIs that require token-based authentication.

The absence of support for `custom_headers` in this context also leads
to a lack of error messages when attempting to interact with secured
GraphQL endpoints, making debugging and troubleshooting more
challenging.

This update modifies the `load_tools` function to correctly handle
`custom_headers`, thereby allowing secure and authenticated access to
GraphQL services requiring tokens.

Example usage after the proposed change:
```python
tools = load_tools(
    ["graphql"],
    graphql_endpoint="https://your-graphql-endpoint.com/graphql",
    custom_headers={"Authorization": f"Token {api_token}"},
)
```
  - **Issue:** None,
  - **Dependencies:** None,
  - **Twitter handle:** None
6 months ago
Serena Ruan 5c6e123757
community[patch]: Fix MlflowCallback with none artifacts_dir (#16487) 6 months ago
Krista Pratico 0e2e7d8b83
langchain[patch]: allow passing client with OpenAIAssistantRunnable (#16486)
- **Description:** This addresses the issue tagged below where if you
try to pass your own client when creating an OpenAI assistant, a
pydantic error is raised:

Example code:

```python
import openai
from langchain.agents.openai_assistant import OpenAIAssistantRunnable

client = openai.OpenAI()
interpreter_assistant = OpenAIAssistantRunnable.create_assistant(
    name="langchain assistant",
    instructions="You are a personal math tutor. Write and run code to answer math questions.",
    tools=[{"type": "code_interpreter"}],
    model="gpt-4-1106-preview",
    client=client
)

```

Error:
`pydantic.v1.errors.ConfigError: field "client" not yet prepared, so the
type is still a ForwardRef. You might need to call
OpenAIAssistantRunnable.update_forward_refs()`

It additionally updates type hints and docstrings to indicate that an
AzureOpenAI client is permissible as well.

  - **Issue:** https://github.com/langchain-ai/langchain/issues/15948
  - **Dependencies:** N/A
6 months ago
bu2kx ff3163297b
community[minor]: Add KDBAI vector store (#12797)
Addition of KDBAI vector store (https://kdb.ai).

Dependencies: `kdbai_client` v0.1.2 Python package.

Sample notebook: `docs/docs/integrations/vectorstores/kdbai.ipynb`

Tag maintainer: @bu2kx
Twitter handle: @kxsystems
6 months ago
Shivani Modi 4e160540ff
community[minor]: Adding Konko Completion endpoint (#15570)
This PR introduces update to Konko Integration with LangChain.

1. **New Endpoint Addition**: Integration of a new endpoint to utilize
completion models hosted on Konko.

2. **Chat Model Updates for Backward Compatibility**: We have updated
the chat models to ensure backward compatibility with previous OpenAI
versions.

3. **Updated Documentation**: Comprehensive documentation has been
updated to reflect these new changes, providing clear guidance on
utilizing the new features and ensuring seamless integration.

Thank you to the LangChain team for their exceptional work and for
considering this PR. Please let me know if any additional information is
needed.

---------

Co-authored-by: Shivani Modi <shivanimodi@Shivanis-MacBook-Pro.local>
Co-authored-by: Shivani Modi <shivanimodi@Shivanis-MBP.lan>
6 months ago
Gianfranco Demarco c69f599594
langchain[patch]: Extract _aperform_agent_action from _aiter_next_step from AgentExecutor (#15707)
- **Description:** extract `_aperform_agent_action` from `_aiter_next_step` in
the AgentExecutor class to allow for easier overriding. Also extracted logic
from `_iter_next_step` into a new method `_perform_agent_action` for
consistency and easier overriding.
- **Issue:** #15706

Closes #15706
6 months ago
i-w-a 95ee69a301
langchain[patch]: In HTMLHeaderTextSplitter set default encoding to utf-8 (#16372)
- **Description:** The HTMLHeaderTextSplitter Class now explicitly
specifies utf-8 encoding in the part of the split_text_from_file method
that calls the HTMLParser.
- **Issue:** Prevents garbled characters caused by encoding differences in
HTML files (I noticed the problem particularly with non-English text such as
Japanese).
  - **Dependencies:** No dependencies,
  - **Twitter handle:**  @i_w__a
6 months ago
Noah Stapp e135e5257c
community[patch]: Include scores in MongoDB Atlas QA chain results (#14666)
Adds the ability to return similarity scores when using
`RetrievalQA.from_chain_type` with `MongoDBAtlasVectorSearch`. Requires
that `return_source_documents=True` is set.

Example use:

```
vector_search = MongoDBAtlasVectorSearch.from_documents(...)

qa = RetrievalQA.from_chain_type(
	llm=OpenAI(), 
	chain_type="stuff", 
	retriever=vector_search.as_retriever(search_kwargs={"additional": ["similarity_score"]}),
	return_source_documents=True
)

...

docs = qa({"query": "..."})

docs["source_documents"][0].metadata["score"] # score will be here
```

I've tested this feature locally, using a MongoDB Atlas Cluster with a
vector search index.
6 months ago
Serena Ruan 90f5a1c40e
community[minor]: Improve mlflow callback (#15691)
- **Description:** Allow passing run_id to MLflowCallbackHandler to
resume a run instead of creating a new run. Support recording retriever
relevant metrics. Refactor the code to fix some bugs.
---------

Signed-off-by: Serena Ruan <serena.rxy@gmail.com>
6 months ago
Facundo Santiago 92e6a641fd
feat: adding paygo api support for Azure ML / Azure AI Studio (#14560)
- **Description:** Introducing support for LLMs and Chat models running
in Azure AI studio and Azure ML using the new deployment mode
pay-as-you-go (model as a service).
- **Issue:** NA
- **Dependencies:** None.
- **Tag maintainer:** @prakharg-msft @gdyre 
- **Twitter handle:** @santiagofacundo

Examples added:
*
[docs/docs/integrations/llms/azure_ml.ipynb](https://github.com/santiagxf/langchain/blob/santiagxf/azureml-endpoints-paygo-community/docs/docs/integrations/chat/azureml_endpoint.ipynb)
*
[docs/docs/integrations/chat/azureml_chat_endpoint.ipynb](https://github.com/santiagxf/langchain/blob/santiagxf/azureml-endpoints-paygo-community/docs/docs/integrations/chat/azureml_chat_endpoint.ipynb)

---------

Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
6 months ago
Davide Menini 9ce177580a
community: normalize bedrock embeddings (#15103)
In this PR I added a post-processing function to normalize the
embeddings. This happens only if the new `normalize` flag is `True`.
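A hedged usage sketch, assuming the flag lands on `BedrockEmbeddings` as `normalize` (credentials and region come from the usual boto3 configuration):

```python
from langchain_community.embeddings import BedrockEmbeddings

embeddings = BedrockEmbeddings(
    model_id="amazon.titan-embed-text-v1",
    normalize=True,  # post-process each vector to unit length
)
vector = embeddings.embed_query("normalize me")
```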

---------

Co-authored-by: taamedag <Davide.Menini@swisscom.com>
6 months ago
baichuan-assistant 20fcd49348
community: Fix Baichuan Chat. (#15207)
- **Description:** Baichuan Chat (with both Baichuan-Turbo and
Baichuan-Turbo-192K models) has updated its APIs. There are breaking
changes. For example, BAICHUAN_SECRET_KEY is removed in the latest API
but is still required in LangChain. Baichuan's LangChain integration
needs to be updated to the latest version.
  - **Issue:** #15206
  - **Dependencies:** None,
  - **Twitter handle:** None

@hwchase17.

Co-authored-by: BaiChuanHelper <wintergyc@WinterGYCs-MacBook-Pro.local>
6 months ago
gcheron cfc225ecb3
community: SQLStrStore/SQLDocStore provide an easy SQL alternative to `InMemoryStore` to persist data remotely in a SQL storage (#15909)
**Description:**

- Implement `SQLStrStore` and `SQLDocStore` classes that inherit from
`BaseStore` to allow persisting data remotely on a SQL server.
- SQL is widely used, and sometimes we do not want to install a caching
solution like Redis.
- Multiple issues/comments complain that there is no easy remote and
persistent solution that is not in memory (users want to replace
InMemoryStore), e.g.,
https://github.com/langchain-ai/langchain/issues/14267,
https://github.com/langchain-ai/langchain/issues/15633,
https://github.com/langchain-ai/langchain/issues/14643,
https://stackoverflow.com/questions/77385587/persist-parentdocumentretriever-of-langchain
- This is particularly painful when wanting to use
`ParentDocumentRetriever`
- This implementation is particularly useful when:
     * it's expensive to construct an InMemoryDocstore/dict
     * you want to retrieve documents from remote sources
     * you just want to reuse existing objects
- This implementation integrates well with PGVector, indeed, when using
PGVector, you already have a SQL instance running. `SQLDocStore` is a
convenient way of using this instance to store documents associated to
vectors. An integration example with ParentDocumentRetriever and
PGVector is provided in docs/docs/integrations/stores/sql.ipynb or
[here](https://github.com/gcheron/langchain/blob/sql-store/docs/docs/integrations/stores/sql.ipynb).
- It persists `str` and `Document` objects but can be easily extended.

 **Issue:**

Provide an easy SQL alternative to `InMemoryStore`.

---------

Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
6 months ago
Massimiliano Pronesti e529939c54
feat(llms): support more tasks in HuggingFaceHub LLM and remove deprecated dep (#14406)
- **Description:** this PR upgrades the `HuggingFaceHub` LLM:
   * support more tasks (`translation` and `conversational`)
   * replaced the deprecated `InferenceApi` with `InferenceClient`
* adjusted the overall logic to use the "recommended" model for each
task when no model is provided, and vice-versa.
- **Tag maintainer(s)**: @baskaryan @hwchase17
6 months ago
Erick Friis afb25eeec4
cli[patch]: add integration tests to default makefile (#16479) 6 months ago
Bagatur ba326b98d0
langchain[patch]: Release 0.1.3 (#16475) 6 months ago
Bagatur 54149292f8
community[patch]: Release 0.0.15 (#16474) 6 months ago
Bagatur ef6a335570
core[patch]: Release 0.1.15 (#16473) 6 months ago
Erick Friis 1f4ac62dee
cli[patch], google-vertexai[patch]: readme template (#16470) 6 months ago
Tomaz Bratanic d0a8082188
Fix neo4j sanitize (#16439)
Fix the sanitization bug and add an integration test
6 months ago
William FH 5de59f9236
Core[Patch] Parse tool input after on_start (#16430)
For tracing, if a validation error occurs, currently it is attributed to
the previous step of the chain. It would be nice to have the on_start
and on_error callbacks called for tools when there is a validation error
that occurs to more easily attribute the root-cause
6 months ago
Nuno Campos 226fe645f1
core[patch] Do not try to access attribute of None (#16321) 6 months ago
Florian MOREL 4b7969efc5
community[minor]: New documents loader for visio files (with extension .vsdx) (#16171)
**Description**: New document loader for Visio files (with extension
.vsdx)

A [visio file](https://fr.wikipedia.org/wiki/Microsoft_Visio) (with
extension .vsdx) is associated with Microsoft Visio, a diagram creation
software. It stores information about the structure, layout, and
graphical elements of a diagram. This format facilitates the creation
and sharing of visualizations in areas such as business, engineering,
and computer science.

A Visio file can contain multiple pages. Some of them may serve as the
background for others, and this can occur across multiple layers. This
loader extracts the textual content from each page and its associated
pages, enabling the extraction of all visible text from each page,
similar to what an OCR algorithm would do.

**Dependencies** : xmltodict package
6 months ago
Boris Feld 404abf139a
community: Add CometLLM tracing context var (#15765)
I also added LANGCHAIN_COMET_TRACING to enable the CometLLM tracing
integration similar to other tracing integrations. This is easier for
end-users to enable it rather than importing the callback and pass it
manually.

(This is the same content as
https://github.com/langchain-ai/langchain/pull/14650 but rebased and
squashed as something seems to confuse Github Action).
6 months ago
Nicolò Boschi a500527030
infra: google-vertexai relax types-requests deps range (#16264)
- **Description:** At the moment it's not possible to include
langchain-google-vertexai and boto3 in the same project (e.g. to use Bedrock
and Vertex in the same application) because of a dependency resolution
conflict: boto3 still uses urllib3 1.x, while langchain-google-vertexai ->
types-requests depends on urllib3 2.x. [The last version of types-requests
that allows urllib3 1.x is
2.31.0.6](https://pypi.org/project/types-requests/#description).
In this PR I allow the vertexai package to accept that version as well.
  
- **Twitter handle:** nicoloboschi
6 months ago
DL b9e7f6f38a
community[minor]: Bedrock async methods (#12477)
Description: Added support for asynchronous streaming in the Bedrock
class and corresponding tests.

Primarily:
  - `async def aprepare_output_stream`
  - `async def _aprepare_input_and_invoke_stream`
  - `async def _astream`
  - `async def _acall`

I've ensured that the code adheres to the project's linting and
formatting standards by running make format, make lint, and make test.

Issue: #12054, #11589

Dependencies: None

Tag maintainer: @baskaryan 

Twitter handle: @dominic_lovric

---------

Co-authored-by: Piyush Jain <piyushjain@duck.com>
6 months ago
Frank995 5694728816
community[patch]: Implement vector length definition at init time in PGVector for indexing (#16133)
- **Description:** allow the user to define the vector length in PGVector when
creating the embedding store; this allows for later indexing (see the sketch
below)
  - **Issue:** #16132
  - **Dependencies:** None
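A hedged sketch; the keyword name `embedding_length` is an assumption based on the PR description (declaring a fixed length is what lets Postgres build an index on the vector column):

```python
from langchain_community.embeddings import FakeEmbeddings
from langchain_community.vectorstores.pgvector import PGVector

store = PGVector(
    connection_string="postgresql+psycopg2://user:pass@localhost:5432/db",
    embedding_function=FakeEmbeddings(size=1536),
    collection_name="docs",
    embedding_length=1536,  # assumed kwarg: declare the vector length up front
)
```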
6 months ago
Chase VanSteenburg 1011b681dc
core[patch]: Fix f-string formatting in error message for configurable_fields (#16411)
- **Description:** Simple fix to f-string formatting. Allows more
informative ValueError output.
  - **Issue:** None needed.
  - **Dependencies:** None.
  - **Twitter handle:** @FlightP1an
6 months ago
parkererickson-tg b26a22f307
community[minor]: add TigerGraph support (#16280)
**Description:** Add support for querying TigerGraph databases through
the InquiryAI service.
**Issue**: N/A
**Dependencies:** N/A
**Twitter handle:** @TigerGraphDB
6 months ago
Alireza Kashani d1b4ead87c
community[patch]: Update grobid.py (#16298)
There is a case where "coords" does not exist in the "sentence"; therefore,
the `split(";")` will lead to an error.

We can fix that by adding `if sentence.get("coords") is not None:`.

The resulting empty "sbboxes" from this scenario will raise an error at
`sbboxes[0]["page"]` because sbboxes is empty.

the PDF from https://pubmed.ncbi.nlm.nih.gov/23970373/ can replicate
those errors.
6 months ago
s-g-1 fbe592a5ce
community[patch]: fix typo in pgvecto_rs debug msg (#16318)
Fixes a typo in the pip install message for the pgvecto_rs community vector
store.
No issues found mentioning this; no dependents changed.
6 months ago
James Braza d511366dd3
infra: absolute `EXAMPLE_DIR` path in core unit tests (#16325)
If you invoked testing from places besides `core/`, this `EXAMPLE_DIR`
path won't work. This PR makes `EXAMPLE_DIR` robust against the invocation
location.
6 months ago
Ian b9f5104e6c
communty[minor]: Store Message History to TiDB Database (#16304)
This pull request integrates the TiDB database into LangChain for
storing message history, marking one of several steps towards a
comprehensive integration of TiDB with LangChain.


A simple usage
```python
from datetime import datetime
from langchain_community.chat_message_histories import TiDBChatMessageHistory

history = TiDBChatMessageHistory(
    connection_string="mysql+pymysql://<host>:<PASSWORD>@<host>:4000/<db>?ssl_ca=/etc/ssl/cert.pem&ssl_verify_cert=true&ssl_verify_identity=true",
    session_id="code_gen",
    earliest_time=datetime.utcnow(),  # Optional to set earliest_time to load messages after this time point.
)

history.add_user_message("hi! How's feature going?")
history.add_ai_message("It's almost done")
```
6 months ago
Erick Friis 35ec0bbd3b
cli[patch]: pypi fields (#16410) 6 months ago
Erick Friis 2ac3a82d85
cli[patch]: new fields in integration template, release 0.0.21 (#16398) 6 months ago
Erick Friis cfe95ab085
multiple: update langsmith dep (#16407) 6 months ago
Eli Lucherini 6b2a57161a
community[patch]: allow additional kwargs in MlflowEmbeddings for compatibility with Cohere API (#15242)
- **Description:** add support for kwargs in `MlflowEmbeddings`
`embed_documents()` and `embed_query()` so that all the arguments
required by the Cohere API (and others?) can be passed down to the server.
  - **Issue:** #15234 
- **Dependencies:** MLflow with MLflow Deployments (`pip install
mlflow[genai]`)

**Tests**
Now this code [adapted from the
docs](https://python.langchain.com/docs/integrations/providers/mlflow#embeddings-example)
for the Cohere API works locally.

```python
"""
Setup
-----
export COHERE_API_KEY=...
mlflow deployments start-server --config-path examples/deployments/cohere/config.yaml

Run
---
python /path/to/this/file.py
"""
embeddings = MlflowCohereEmbeddings(target_uri="http://127.0.0.1:5000", endpoint="embeddings")
print(embeddings.embed_query("hello")[:3])
print(embeddings.embed_documents(["hello", "world"])[0][:3])
```

Output
```
[0.060455322, 0.028793335, -0.025848389]
[0.031707764, 0.021057129, -0.009361267]
```
6 months ago
Guillem Orellana Trullols aad2aa7188
community[patch]: BedrockChat -> Support Titan express as chat model (#15408)
Titan Express model was not supported as a chat model because LangChain
messages were not "translated" to a text prompt.

Co-authored-by: Guillem Orellana Trullols <guillem.orellana_trullols@siemens.com>
6 months ago
Piotr Mardziel 1b9001db47
core[patch]: preserve inspect.iscoroutinefunction with @deprecated decorator (#16295)
Adjusted `deprecate` decorator to make sure decorated async functions
are still recognized as "coroutinefunction" by `inspect`.

Before this change, functions such as `LLMChain.acall` that are decorated as
deprecated were not recognized as coroutine functions. After the change,
they are recognized:

```python
import inspect
from langchain import LLMChain

# Is false before change but true after.
inspect.iscoroutinefunction(LLMChain.acall)
```
6 months ago
Katarina Supe 01c2f27ffa
community[patch]: Update Memgraph support (#16360)
- **Description:** I removed two queries to the database and left just
one, whose results are afterwards formatted into the other type of schema
(avoiding two calls to the DB)
  - **Issue:** /
  - **Dependencies:** /
  - **Twitter handle:** @supe_katarina
6 months ago
Max Jakob 8569b8f680
community[patch]: ElasticsearchStore enable max inner product (#16393)
Enable max inner product for the approximate retrieval strategy. For the
exact strategy, the Painless scripting language lacks the necessary
`maxInnerProduct` function, which is why we do not add it there.

Similarity docs:
https://www.elastic.co/guide/en/elasticsearch/reference/current/dense-vector.html#dense-vector-params

---------

Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
Co-authored-by: Joe McElroy <joseph.mcelroy@elastic.co>
6 months ago
Iskren Ivov Chernev fc196cab12
community[minor]: DeepInfra support for chat models (#16380)
Add deepinfra chat models support.

This is https://github.com/langchain-ai/langchain/pull/14234 re-opened
from my branch (so maintainers can edit).
6 months ago
Bagatur 85e8423312
community[patch]: Update bing results tool name (#16395)
Make BingSearchResults tool name OpenAI functions compatible (can't have
spaces).

Fixes #16368
6 months ago
Max Jakob de209af533
community[patch]: ElasticsearchStore: add relevance function selector (#16378)
Implement similarity function selector for ElasticsearchStore. The
scores coming back from Elasticsearch are already similarities (not
distances) and they are already normalized (see
[docs](https://www.elastic.co/guide/en/elasticsearch/reference/current/dense-vector.html#dense-vector-params)).
Hence we leave the scores untouched and just forward them.

This fixes #11539.

However, in hybrid mode (when keyword search and vector search are
involved) Elasticsearch currently returns no scores. This PR adds an
error message around this fact. We need to think a bit more to come up
with a solution for this case.

This PR also corrects a small error in the Elasticsearch integration
test.

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
6 months ago
y2noda 54f90fc6bc
langchain_google_vertexai:Enable the use of langchain's built-in tools in Gemini's function calling (#16341)
- **Issue:** This is a PR about #16340 

Co-authored-by: yuhei.tsunoda <yuhei.tsunoda@brainpad.co.jp>
6 months ago
Tom Jorquera 1445ac95e8
community[patch]: Enable streaming for GPT4all (#16392)
`streaming` param was never passed to model
6 months ago
Bagatur af9f1738ca
langchain[patch]: Release 0.1.2 (#16388) 6 months ago
Bagatur 8779013847
community[patch]: Release 0.0.14 (#16384) 6 months ago
Bagatur 9cf0f5eb78
core[patch]: Release 0.1.14 (#16382) 6 months ago
Bagatur 1dc6c1ce06
core[patch], community[patch], langchain[patch], docs: Update SQL chains/agents/docs (#16168)
Revamp SQL use cases docs. In the process update SQL chains and agents.
6 months ago
Bob Lin acc14802d1
Fix `conn` field definition in SQLiteEntityStore (#15440) 6 months ago
James Braza e1c59779ad
core[patch]: Remove `print` statement on missing `grandalf` dependency in favor of more explicit ImportError (#16326)
After this PR an ImportError will be raised without a print if grandalf
is missing when using grandalf related code for printing runnable
graphs.
6 months ago
Nuno Campos 971a68d04f
Docs: Update README.md in core (#16329)
Docs: Update README.md in core
6 months ago
Eugene Yurtsev 89372fca22
core[patch]: Update sys info information (#16297)
Update information collected in sys info.

python -m langchain_core.sys_info     

System Information
------------------
> OS:  Linux
> OS Version: #14~22.04.1-Ubuntu SMP PREEMPT_DYNAMIC Mon Nov 20 18:15:30
UTC 2
> Python Version:  3.11.4 (main, Sep 25 2023, 10:06:23) [GCC 11.4.0]

Package Information
-------------------
> langchain_core: 0.1.10
> langchain: 0.1.0
> langchain_community: 0.0.11
> langchain_cli: 0.0.20
> langchain_experimental: 0.0.36
> langchain_openai: 0.0.2
> langchainhub: 0.1.14
> langserve: 0.0.19

Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:

> langgraph
6 months ago
Luke 5396604ef4
community: Handling missing key in Google Trends API response. (#15864)
- **Description:** Handling responses where _interest_over_time_ is
missing.
  - **Issue:** #15859
  - **Dependencies:** None
6 months ago
Virat Singh c2a614eddc
community: Add PolygonLastQuote Tool and Toolkit (#15990)
**Description:** 
In this PR, I am adding a `PolygonLastQuote` Tool, which can be used to
get the latest price quote for a given ticker / stock.

Additionally, I've added a Polygon Toolkit, which we can use to
encapsulate future tools that we build for Polygon.

**Twitter handle:** [@virattt](https://twitter.com/virattt)

---------

Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
6 months ago
Nuno Campos ef75bb63ce
core[patch] Fix tracer output of streamed runs with non-addable output (#16324)
- Used to be None, now is just the last chunk

6 months ago
Ryan French 3d23a5eb36
langchain[patch]: Allow OpenSearch Query Translator to correctly work with Date types (#16022)
**Description:**

Fixes an issue where the Date type in an OpenSearch Self Querying
Retriever would fail to generate a valid query

**Issue:**
https://github.com/langchain-ai/langchain/issues/14225
6 months ago
Ofer Mendelevitch ffae98d371
template: Update Vectara templates (#15363)
fixed multi-query template for Vectara
added self-query template for Vectara

Also added prompt_name parameter to summarization

CC @efriis 
 **Twitter handle:** @ofermend
6 months ago
Bagatur 1e29b676d5
core[patch]: simple fallback streaming (#16055) 6 months ago
Eugene Yurtsev 4ef0ed4ddc
astream_events: Add version parameter while method is in beta (#16290)
Add a version parameter while the method is in beta phase.

The idea is to make it possible to minimize making breaking changes for users while we're iterating on schema.

Once the API is stable we can assign a default version requirement.
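A small example of the beta API with its now-required `version` argument:

```python
import asyncio

from langchain_core.runnables import RunnableLambda

runnable = RunnableLambda(lambda x: x + 1)

async def main() -> None:
    async for event in runnable.astream_events(1, version="v1"):
        print(event["event"], event.get("name"))

asyncio.run(main())
```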
6 months ago
Bagatur 91230ef5d1
openai[patch]: Release 0.0.3 (#16289) 6 months ago
Hamza Kyamanywa 39b3c6d94c
langchain[patch]: Add konlpy based text splitting for Korean (#16003)
- **Description:** Adds a text splitter based on
[Konlpy](https://konlpy.org/en/latest/#start) which is a Python package
for natural language processing (NLP) of the Korean language. (It is
like Spacy or NLTK for Korean)
- **Dependencies:** Konlpy would have to be installed before this
splitter is used,
  - **Twitter handle:** @untilhamza
6 months ago
Bagatur e3828bee43
core[patch]: Release 0.1.13 (#16287) 6 months ago
Bagatur 2454fefc53
docs: agent prompt docs (#16105) 6 months ago
Bagatur 84bf5787a7
core[patch], openai[patch]: Chat openai stream logprobs (#16218) 6 months ago
Carey 021b0484a8
community[patch]: add skipped test for inner product normalization (#14989)
---------

Co-authored-by: Erick Friis <erick@langchain.dev>
6 months ago
Christophe Bornet 3ccbe11363
community[minor]: Add Cassandra document loader (#16215)
- **Description:** document loader for Apache Cassandra
  - **Twitter handle:** cbornet_
6 months ago
mikeFore4 9d32af72ce
community[patch]: huggingface hub character removal bug fix (#16233)
- **Description:** Some text-generation models on Hugging Face repeat the
prompt in their generated response, but not all do! The tests use "gpt2",
which DOES repeat the prompt, and as such the HuggingFaceHub class is
hardcoded to remove the first few characters of the response (to match
`len(prompt)`). However, if you are using a model (such as the very
popular "meta-llama/Llama-2-7b-chat-hf") that DOES NOT repeat the prompt
in its generated text, then the beginning of the generated text will be
cut off. This code change fixes that bug by first checking whether the
prompt is repeated in the generated response and removing it
conditionally.
  - **Issue:** #16232 
  - **Dependencies:** N/A
  - **Twitter handle:** N/A
6 months ago
Andreas Motl 3613d8a2ad
community[patch]: Use SQLAlchemy's `bulk_save_objects` method to improve insert performance (#16244)
- **Description:** Improve [pgvector vector store
adapter](https://github.com/langchain-ai/langchain/blob/v0.1.1/libs/community/langchain_community/vectorstores/pgvector.py)
to save embeddings in batches, to improve its performance.
  - **Issue:** NA
  - **Dependencies:** NA
  - **References:** https://github.com/crate-workbench/langchain/pull/1


Hi again from the CrateDB team,

following up on GH-16243, this is another minor patch to the pgvector
vector store adapter. Inserting embeddings in batches, using
[SQLAlchemy's
`bulk_save_objects`](https://docs.sqlalchemy.org/en/20/orm/session_api.html#sqlalchemy.orm.Session.bulk_save_objects)
method, can deliver substantial performance gains.

With kind regards,
Andreas.

NB: As I am seeing just now, this method is a legacy feature of SA 2.0, so it
will need to be reworked in a future iteration. However, it is not deprecated
yet, and I haven't been able to come up with a different implementation yet.
6 months ago
Eugene Yurtsev 177af65dc4
core[minor]: RFC Add astream_events to Runnables (#16172)
This PR adds `astream_events` method to Runnables to make it easier to
stream data from arbitrary chains.

* Streaming only works properly in async right now
* One should use `astream()` if mixing in imperative code, as might
be done with tool implementations
* Astream_log has been modified with minimal additive changes, so no
breaking changes are expected
* Underlying callback code / tracing code should be refactored at some
point to handle things more consistently (OK for now)

- ~~[ ] verify event for on_retry~~ does not work until we implement
streaming for retry
- ~~[ ] Any renaming? Should we rename "event" to "hook"?~~
- [ ] Any other feedback from community?
- [x] throw NotImplementedError for `RunnableEach` for now

## Example

See this [Example
Notebook](dbbc7fa0d6/docs/docs/modules/agents/how_to/streaming_events.ipynb)
for an example with streaming in the context of an Agent

## Event Hooks Reference

Here is a reference table that shows some events that might be emitted
by the various Runnable objects.
Definitions for some of the Runnable are included after the table.


| event | name | chunk | input | output |
|----------------------|------------------|---------------------------------|------------------------------------------------|--------------------------------------------------|
| on_chat_model_start | [model name] | | {"messages": [[SystemMessage, HumanMessage]]} | |
| on_chat_model_stream | [model name] | AIMessageChunk(content="hello") | | |
| on_chat_model_end | [model name] | | {"messages": [[SystemMessage, HumanMessage]]} | {"generations": [...], "llm_output": None, ...} |
| on_llm_start | [model name] | | {'input': 'hello'} | |
| on_llm_stream | [model name] | 'Hello' | | |
| on_llm_end | [model name] | | 'Hello human!' | |
| on_chain_start | format_docs | | | |
| on_chain_stream | format_docs | "hello world!, goodbye world!" | | |
| on_chain_end | format_docs | | [Document(...)] | "hello world!, goodbye world!" |
| on_tool_start | some_tool | | {"x": 1, "y": "2"} | |
| on_tool_stream | some_tool | {"x": 1, "y": "2"} | | |
| on_tool_end | some_tool | | | {"x": 1, "y": "2"} |
| on_retriever_start | [retriever name] | | {"query": "hello"} | |
| on_retriever_chunk | [retriever name] | {documents: [...]} | | |
| on_retriever_end | [retriever name] | | {"query": "hello"} | {documents: [...]} |
| on_prompt_start | [template_name] | | {"question": "hello"} | |
| on_prompt_end | [template_name] | | {"question": "hello"} | ChatPromptValue(messages: [SystemMessage, ...]) |


Here are declarations associated with the events shown above:

`format_docs`:

```python
def format_docs(docs: List[Document]) -> str:
    '''Format the docs.'''
    return ", ".join([doc.page_content for doc in docs])

format_docs = RunnableLambda(format_docs)
```

`some_tool`:

```python
@tool
def some_tool(x: int, y: str) -> dict:
    '''Some_tool.'''
    return {"x": x, "y": y}
```

`prompt`:

```python
template = ChatPromptTemplate.from_messages(
    [("system", "You are Cat Agent 007"), ("human", "{question}")]
).with_config({"run_name": "my_template", "tags": ["my_template"]})
```
6 months ago
SN f175bf7d7b
Use env for revision id if not passed in as param; use `git describe` as backup (#16227)
Co-authored-by: William Fu-Hinthorn <13333726+hinthornw@users.noreply.github.com>
6 months ago
Erick Friis b9495da92d
langchain[patch]: fix stuff documents chain api docs render (#16159) 6 months ago
Erick Friis 0e76d84137
google-vertexai[patch]: more integration test fixes (#16234) 6 months ago
Erick Friis aa35b43bcd
docs, google-vertex[patch]: function docs (#16231) 6 months ago
Harrison Chase f60f59d69f
google-vertexai[patch]: Harrison/vertex function calling (#16223)
Co-authored-by: Erick Friis <erick@langchain.dev>
6 months ago
Rajesh Thallam 6bc6d64a12
langchain_google_vertexai[patch]: Add support for SystemMessage for Gemini chat model (#15933)
- **Description:** In Google Vertex AI, Gemini chat models currently don't
support SystemMessage. This PR adds support for it, but only if the user
provides an additional convert_system_message_to_human flag during model
initialization (in this case, the SystemMessage is prepended to the first
HumanMessage); see the sketch below. **NOTE:** The implementation is
similar to #14824


- **Twitter handle:** rajesh_thallam
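A hedged sketch of opting in to the new flag (the model name is illustrative):

```python
from langchain_core.messages import HumanMessage, SystemMessage
from langchain_google_vertexai import ChatVertexAI

chat = ChatVertexAI(model_name="gemini-pro", convert_system_message_to_human=True)

response = chat.invoke(
    [
        SystemMessage(content="You answer in exactly five words."),
        HumanMessage(content="What is LangChain?"),
    ]
)
print(response.content)
```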

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
6 months ago
Erick Friis 65b231d40b
mistralai[patch]: async integration tests (#16214) 6 months ago
Eugene Zapolsky 6b9e3ed9e9
google-vertexai[minor]: added safety_settings property to gemini wrapper (#15344)
**Description:** The Gemini model has quite annoying default safety_settings.
In addition, the current VertexAI class doesn't provide a property to
override such settings. So, this PR aims to:
- add a safety_settings property to VertexAI
- fix an issue with incorrect LLM output parsing when the LLM responds with
an appropriate 'blocked' response
- fix an issue with incorrectly parsing LLM output when the Gemini API blocks
the prompt itself as inappropriate
- add safety_settings-related tests

I'm not familiar enough with the langchain code base and guidelines, so any
comments and/or suggestions are very welcome.
 
**Issue:** it will likely fix #14841

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
6 months ago
Eugene Yurtsev ecd4f0a7ec
core[patch]: testing add chat model for unit-tests (#16209)
This PR adds a fake chat model for testing purposes.

Used in this PR: https://github.com/langchain-ai/langchain/pull/16172
6 months ago
SN 7d444724d7
Add revision identifier to run_on_dataset (#16167)
Allow specifying revision identifier for better project versioning
6 months ago
Eugene Yurtsev 5d8c147332
docs: Document and test PydanticOutputFunctionsParser (#15759)
This PR adds documentation and testing to
`PydanticOutputFunctionsParser(OutputFunctionsParser)`.
6 months ago
Christophe Bornet 3502a407d9
infra: Use dotenv in langchain-community's integration tests (#16137)
* Removed some env vars not used in langchain package IT
* Added Astra DB env vars in langchain package, used for cache tests
* Added conftest.py to load env vars in langchain_community IT
* Added .env.example in  langchain_community IT
6 months ago
Nuno Campos ca014d5b04
Update readme (#16160)
6 months ago
Tomaz Bratanic 1e80113ac9
community[patch]: Add neo4j timeout and value sanitization option (#16138)
The timeout option comes in handy when you want to kill long-running
queries.
The value sanitization removes all lists that are larger than 128
elements; the idea here is to remove embedding properties from results.
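A hedged sketch, assuming both options are exposed as constructor arguments on `Neo4jGraph` (names follow the PR title; connection details are placeholders):

```python
from langchain_community.graphs import Neo4jGraph

graph = Neo4jGraph(
    url="bolt://localhost:7687",
    username="neo4j",
    password="<password>",
    timeout=10,     # assumed: abort queries running longer than 10 seconds
    sanitize=True,  # assumed: drop list values longer than 128 elements (e.g. embeddings)
)
print(graph.query("MATCH (n) RETURN count(n) AS nodes"))
```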
6 months ago
Krishna Shedbalkar f238217cea
community[patch]: Basic Logging and Human input to ShellTool (#15932)
- **Description:** As the Shell tool is very versatile, when integrating it
into applications as OpenAI functions, developers have no clue about
what command is being executed using the ShellTool. All one can see is:

![image](https://github.com/langchain-ai/langchain/assets/60742358/540e274a-debc-4564-9027-046b91424df3)

Summarising my feature request:
1. There's no visibility into what command was executed.
2. There's no mechanism to prevent a command from being executed using
ShellTool, like a y/n human input which can be requested from the user
before proceeding with the command.
  - **Issue:** #15931
  - **Dependencies:** There isn't any dependency,
  - **Twitter handle:** @krishnashed
6 months ago
Bagatur 679a3ae933
openai[patch]: clarify azure error (#16157) 6 months ago
Bagatur 7ad9eba8f4
core[patch]: Release 0.1.12 (#16161) 6 months ago
Leonid Kuligin 58f0ba306b
changed default params for gemini (#16044)
- **Description:** changed default values for Vertex LLMs (to be handled
on the SDK's side)
6 months ago
Bagatur 5c73fd5bba
core[patch]: support old core namespaces (#16155) 6 months ago
Christophe Bornet fb940d11df
community[patch]: Use newer MetadataVectorCassandraTable in Cassandra vector store (#15987)
as VectorTable is deprecated

Tested manually with `test_cassandra.py` vector store integration test.
6 months ago
Mohammad Mohtashim 1fa056c324
community[patch]: Don't set search path for unknown SQL dialects (#16047)
- **Description:** Made a small fix for the `SQLDatabase`, highlighted in
an issue. The issue pertains to switching schemas for different SQL
engines.
  - **Issue:** #16023
@baskaryan
6 months ago
Erick Friis 11327e6b64
google-vertexai[patch]: typing, release 0.0.2 (#16153) 6 months ago
Leonid Ganeline 2709d3e5f2
langchain[patch]: updated imports for `langchain.callbacks` (#16060)
Updated imports from `langchain` to `core` where it is possible

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
6 months ago
Leonid Ganeline c5f6b828ad
langchain[patch], community[minor]: move `output_parsers.ernie_functions` (#16057)
`output_parsers.ernie_functions` moved into `community`
6 months ago
Leonid Ganeline 49aff3ea5b
langchain[patch]: updated `agents` imports (#16061)
Updated imports into `langchain` to `core` where it is possible

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
6 months ago
Leonid Ganeline 60b1bd02d7
langchain[patch]: updated imports for `output_parsers` (#16059)
Updated imports from `langchain` to `core` where it is possible
6 months ago
Leonid Ganeline 9e9ad9b0e9
langchain[patch]: updated `retrievers` imports (#16062)
Updated imports in `langchain` to `core` where possible

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
6 months ago
Leonid Ganeline d350be959d
langchain[patch]: updated `chains` imports (#16064)
Updated imports in `langchain` to `core` where possible

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
6 months ago
Fei Wang d0e101e4e0
community[patch]: fix ollama astream (#16070)
Update ollama.py
6 months ago
ChengZi 8597484195
langchain[patch]: support more comparators in Milvus self-querying retriever (#16076)
- **Description:** Support IN and LIKE comparators in Milvus
self-querying retriever, based on [Boolean Expression
Rules](https://milvus.io/docs/boolean.md)
  - **Issue:** No
  - **Dependencies:** No
  - **Twitter handle:** No

Signed-off-by: ChengZi <chen.zhang@zilliz.com>
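A rough sketch of what the new comparators translate to, using the self-query
IR and the Milvus translator (import paths and exact output format may vary by
version):

```python
from langchain.chains.query_constructor.ir import Comparator, Comparison
from langchain.retrievers.self_query.milvus import MilvusTranslator

translator = MilvusTranslator()

# IN: match any of several metadata values
in_filter = Comparison(
    comparator=Comparator.IN, attribute="genre", value=["sci-fi", "comedy"]
)
# LIKE: substring/prefix matching per the Milvus boolean expression rules
like_filter = Comparison(comparator=Comparator.LIKE, attribute="title", value="Inter")

print(translator.visit_comparison(in_filter))
print(translator.visit_comparison(like_filter))
```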
6 months ago
Kapil Sachdeva f406dc3872
docs: in RunnableRetry, correct the example snippet that uses with_retry method on Runnable (#16108)
The example code snippet for `with_retry` was using incorrect argument
names. This PR fixes that.
6 months ago
BeatrixCohere b0c3e3db2b
community[patch]: Handle when documents are not provided in the Cohere response (#16144)
- **Description:** This handles the cohere response when documents
aren't included in the response
  - **Issue:** N/A
  - **Dependencies:** N/A
  - **Twitter handle:** N/A
6 months ago
Felix Krones d91126fc64
community[patch]: missing unpack operator for or_clause in pgvector document filter (#16148)
- Fix for #16146
- Adds the missing unpack operator to the "or" and "and" filters for the
pgvector retriever.
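An illustration of the fix with plain SQLAlchemy (not the actual patch):
`or_`/`and_` expect individual clauses as separate arguments, so a list of
clauses has to be unpacked:

```python
from sqlalchemy import Column, Integer, String, or_
from sqlalchemy.orm import declarative_base

Base = declarative_base()

class Doc(Base):
    __tablename__ = "docs"
    id = Column(Integer, primary_key=True)
    topic = Column(String)

clauses = [Doc.topic == "a", Doc.topic == "b"]

# Buggy form: or_(clauses) passes a single list instead of separate conditions.
# Fixed form: unpack the list so each clause is its own argument.
filter_expr = or_(*clauses)
print(filter_expr)  # roughly: docs.topic = :topic_1 OR docs.topic = :topic_2
```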
6 months ago
Erick Friis 06fe2f4fb0
partners: add license field (#16117)
- bumps package post versions for packages without current unreleased
updates
- will bump package version in release prs associated with packages that
do have changes (mistral, vertex)
6 months ago
Erick Friis ce10fe0c2f
mistralai[patch]: release 0.0.3 (#16116)
embeddings
6 months ago
William FH e5cf1e2414
community[patch]: use SecretStr in Tavily and HuggingFaceInferenceEmbeddings (#16109)
So the API keys don't show up in reprs.

Tests still need to be added.
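A generic illustration of why `SecretStr` helps (plain Pydantic, not the
Tavily class itself): the raw key no longer leaks through `repr()`:

```python
from pydantic import BaseModel, SecretStr

class FakeWrapper(BaseModel):
    api_key: SecretStr

wrapper = FakeWrapper(api_key="super-secret-token")
print(repr(wrapper))                       # api_key=SecretStr('**********')
print(wrapper.api_key.get_secret_value())  # only explicit access reveals the key
```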
6 months ago
William FH f3601b0aaf
community[patch]: remove docs from bm25 repr (#16110)
Resolves: https://github.com/langchain-ai/langsmith-sdk/issues/356
6 months ago
David c323742f4f
mistralai[minor]: Add embeddings (#15282)
- **Description:** Adds MistralAIEmbeddings class for embeddings, using
the new official API.
- **Dependencies:** mistralai
- **Tag maintainer**: @efriis, @hwchase17
- **Twitter handle:** @LMS_David_RS

Create `integrations/text_embedding/mistralai.ipynb`: an example
notebook for MistralAIEmbeddings class
Modify `embeddings/__init__.py`: Import the class
Create `embeddings/mistralai.py`: The embedding class
Create `integration_tests/embeddings/test_mistralai.py`: The test file.

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
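A short usage sketch matching the notebook described above; it assumes the
standard Embeddings interface (`embed_query` / `embed_documents`) and an API
key available in the environment (the import path and env var name are
assumptions):

```python
from langchain_mistralai import MistralAIEmbeddings

embeddings = MistralAIEmbeddings()  # assumed: picks up MISTRAL_API_KEY from the environment

query_vector = embeddings.embed_query("What is LangChain?")
doc_vectors = embeddings.embed_documents(["Document one", "Document two"])
print(len(query_vector), len(doc_vectors))
```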
6 months ago
Leonid Kuligin 4df14a61fc
google-vertexai[minor]: add function calling on VertexAI (#15822)
  - **Description:** added support for tools on VertexAI
  - **Issue:** #15073 
  - **Twitter handle:**  lkuligin

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
6 months ago
Bagatur 8840a8cc95
docs: tool-use use case (#15783)
Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
6 months ago
Bagatur 3d34347a85
langchain[patch]: bump core dep to 0.1.9 (#16104) 6 months ago
Bagatur 62a2e9ee19
langchain[patch]: Release 0.1.1 (#16103) 6 months ago
Bagatur 076593382a
core[patch]: Release 0.1.11 (#16100) 6 months ago
Bagatur c5656a4905
core[patch]: pass exceptions to fallbacks (#16048) 6 months ago
Nuno Campos 770f57196e
Add unit test for overridden lc_namespace (#16093) 6 months ago
Erick Friis 52114bdfac
community[patch]: release 0.0.13 (#16087) 6 months ago
James Briggs ca288d8f2c
community[patch]: add vector param to index query for pinecone vec store (#16054) 6 months ago
Antonio Morales 476fb328ee
community[patch]: implement adelete from VectorStore in Qdrant (#16005)
**Description:**
Implement `adelete` function from `VectorStore` in `Qdrant` to support
other asynchronous flows such as async indexing (`aindex`) which
requires `adelete` to be implemented. Since `Qdrant` can be passed an
async qdrant client, this can be supported easily.

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
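A minimal async sketch, assuming a `Qdrant` vector store constructed with an
async client; store setup and document IDs are illustrative only:

```python
import asyncio

async def remove_documents(qdrant_store, ids):
    # adelete mirrors the synchronous delete(); awaiting it lets async flows
    # such as aindex() clean up stale documents.
    return await qdrant_store.adelete(ids=ids)

# asyncio.run(remove_documents(qdrant_store, ["doc-id-1", "doc-id-2"]))
```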
6 months ago
Bagatur 697a6f2c80
langchain[patch]: fix requests lint (#16049) 6 months ago
高远 061e63eef2
community[minor]: add vikingdb vecstore (#15155)
---------

Co-authored-by: gaoyuan <gaoyuan.20001218@bytedance.com>
6 months ago
andrijdavid d196646811
community[patch]: Refactor OpenAIWhisperParserLocal (#15150)
This PR addresses an issue in OpenAIWhisperParserLocal where requesting
CUDA without availability leads to an AttributeError #15143

Changes:

- Refactored Logic for CUDA Availability: The initialization now
includes a check for CUDA availability. If CUDA is not available, the
code falls back to using the CPU. This ensures seamless operation
without manual intervention.
- Parameterizing Batch Size and Chunk Size: The batch_size and
chunk_size are now configurable parameters, offering greater flexibility
and optimization options based on the specific requirements of the use
case.

---------

Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
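A generic illustration of the fallback logic (not the exact patch): pick CUDA
when it is actually available, otherwise fall back to the CPU, with batch and
chunk sizes exposed as parameters:

```python
import torch

device = "cuda:0" if torch.cuda.is_available() else "cpu"
batch_size = 8   # now configurable instead of hard-coded (illustrative value)
chunk_size = 30  # now configurable instead of hard-coded (illustrative value)

print(f"Running local Whisper on {device} with batch_size={batch_size}, chunk_size={chunk_size}")
```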
6 months ago
Zhichao HAN 5cf06db3b3
community[minor]: add JsonRequestsWrapper tool (#15374)
**Description:** This new feature enhances the flexibility of pipeline
integration, particularly when working with RESTful APIs.
`JsonRequestsWrapper` decodes responses as JSON, instead of the previous
text-only output.

---------

Co-authored-by: Zhichao HAN <hanzhichao2000@hotmail.com>
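A small usage sketch; it assumes the wrapper lives alongside
`TextRequestsWrapper` in `langchain_community.utilities.requests` (the import
path is an assumption):

```python
from langchain_community.utilities.requests import JsonRequestsWrapper

requests_wrapper = JsonRequestsWrapper()
# get() returns the decoded JSON (dict/list) rather than the raw text body.
data = requests_wrapper.get("https://api.github.com/repos/langchain-ai/langchain")
print(data["full_name"])
```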
6 months ago
chyroc d334efc848
community[patch]: fix top_p type hint (#15452)
fix: https://github.com/langchain-ai/langchain/issues/15341

@efriis
6 months ago
Mateusz Szewczyk 251afda549
community[patch]: fix stop (stop_sequences) param on WatsonxLLM (#15541)
- **Description:** Fix to the IBM
[watsonx.ai](https://www.ibm.com/products/watsonx-ai) LLM provider (the
stop (`stop_sequences`) param on `WatsonxLLM`)
- **Dependencies:**
[ibm-watsonx-ai](https://pypi.org/project/ibm-watsonx-ai/)
6 months ago
Funkeke 7220124368
community[patch]: fix tongyi completion and params error (#15544)
Fix the Tongyi completion JSON parse error and the prompt's params error.

---------

Co-authored-by: fangkeke <3339698829@qq.com>
Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
6 months ago
盐粒 Yanli ddf4e7c633
community[minor]: Update pgvecto_rs to use its high level sdk (#15574)
- **Description:** Update pgvecto_rs to use its high-level SDK
  - **Issue:** fixes #15173
6 months ago
YHW ce21392a21
community: add a flag that determines whether to load the milvus collection (#15693)
fix https://github.com/langchain-ai/langchain/issues/15694

---------

Co-authored-by: hyungwookyang <hyungwookyang@worksmobile.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
6 months ago
Mohammad Mohtashim 9e779ca846
community[patch]: Fixing the SlackGetChannel Tool Input Error (#15725)
Fixed the issue mentioned in #15698 for SlackGetChannel Tool.

@baskaryan.

---------

Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
6 months ago
axiangcoding daa9ccae52
community[patch]: deprecate ErnieBotChat and ErnieEmbeddings classes (#15862)
- **Description:** add deprecation warnings for ErnieBotChat and
ErnieEmbeddings.
- These two classes **lack maintenance** and do not use the SDK provided
by Qianfan, which makes it hard to implement key features like
streaming.
- The alternative `langchain_community.chat_models.QianfanChatEndpoint`
and `langchain_community.embeddings.QianfanEmbeddingsEndpoint` can
completely replace these two classes, only need to change configuration
items.
  - **Issue:** None,
  - **Dependencies:** None,
  - **Twitter handle:** None

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
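A migration sketch based on the note above: swap the deprecated classes for
their Qianfan equivalents (the environment variable names for credentials are
an assumption):

```python
from langchain_community.chat_models import QianfanChatEndpoint
from langchain_community.embeddings import QianfanEmbeddingsEndpoint

# Before (deprecated): ErnieBotChat and ErnieEmbeddings.
chat = QianfanChatEndpoint()             # assumed: reads QIANFAN_AK / QIANFAN_SK from env
embeddings = QianfanEmbeddingsEndpoint()

print(chat.invoke("Hello").content)
print(len(embeddings.embed_query("Hello")))
```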
6 months ago
JaguarDB b11fd3bedc
community[patch]: jaguar vector store fix integer-element error when joining metadata values (#15939)
- **Description:** some document loaders add integer-type metadata
values, which causes an error when joining metadata values
  - **Issue:** #15937
  - **Dependencies:** none

---------

Co-authored-by: JY <jyjy@jaguardb>
6 months ago
Neo Zhao 21e0df937f
community[patch]: fix a bug that mistakenly handle zip iterator in FAISS.from_embeddings (#16020)
**Description**: `zip` returns an iterator that can only be consumed
once, so the previous code caused `embeddings` to end up as an empty list.

**Issue**: I could not find a related issue.

**Dependencies**: this PR does not introduce or affect dependencies.

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
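A standalone illustration of the bug, with no FAISS involved: a `zip` object
is a one-shot iterator, so consuming it once leaves nothing for a second pass:

```python
texts = ["a", "b"]
vectors = [[0.1], [0.2]]

pairs = zip(texts, vectors)
first_pass = list(pairs)   # consumes the iterator
second_pass = list(pairs)  # already exhausted

print(first_pass)   # [('a', [0.1]), ('b', [0.2])]
print(second_pass)  # []

# Fix: materialize once, e.g. pairs = list(zip(texts, vectors)), and reuse that list.
```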
6 months ago
Christophe Bornet 15c2b4a47e
community[minor]: Add AstraDB self query retriever (#15738)
- **Description:** this change adds a self-query retriever for AstraDB
  - **Twitter handle:** cbornet_
6 months ago
Leonid Ganeline fb676d8a9b
community[minor], langchain[minor]: refactor `output_parsers` Rail (#15852)
Moved Rail parser to `community` package.
6 months ago
Massimiliano Pronesti e80aab2275
docs(community): update Amadeus toolkit to langchain v0.1 (#15976)
- **Description:** docs update following the changes introduced in
#15879

6 months ago
Ashley Xu ce7723c1e5
community[minor]: add additional support for `BigQueryVectorSearch` (#15904)
BigQuery vector search lets you use GoogleSQL to do semantic search,
using vector indexes for fast but approximate results, or using brute
force for exact results.

This PR:
1. Adds `metadata[_job_ib]` to the Documents returned by any similarity search
2. Adds `explore_job_stats` to let users explore job statistics and improve
debuggability
3. Sets the minimum row limit for creating a vector index.
6 months ago
Mohammed Naqi 8799b028a6
community[minor]: Adding asynchronous function implementation for Doctran (#15941)
## Description 
In this update, I addressed the missing implementation of
`atransform_documents`, the asynchronous counterpart of
`transform_documents` in Doctran.

### Usage Example:
```py
# Instantiate DoctranPropertyExtractor with specified properties
property_extractor = DoctranPropertyExtractor(properties=properties)

# Asynchronously extract properties from a list of documents
extracted_document = await property_extractor.atransform_documents(
    documents, properties=properties
)

# Display metadata of the first extracted document
print(json.dumps(extracted_document[0].metadata, indent=2))

```

## Issue
- Pull request #14525 has caused a break in the aforementioned code.
Instead of removing an asynchronous implementation of a function,
consider implementing a synchronous version alongside it.
6 months ago
Raunak c0773ab329
community[patch]: Fixed 'coroutine' object is not subscriptable error (#15986)
- **Description:** Added parentheses to the return statement of the
aembed_query() function to fix the 'coroutine' object is not
subscriptable error.
  - **Dependencies:** NA

Co-authored-by: H161961 <Raunak.Raunak@Honeywell.com>
6 months ago
Karim Lalani 14244bd7e5
community[minor]: Added document loader for SurrealDB (#15995)
Added a simple document loader to work with SurrealDB.
6 months ago
Karim Lalani 768e5e33bc
community[minor]: Fix to match SurrealDB 0.3.2 SDK (#15996)
The new version of the SurrealDB Python SDK was causing the integration
to break.
This fix addresses that change.
6 months ago
shahrin014 86321a949f
community: Ollama - Parameter structure to follow official documentation (#16035)
## Feature
- Follow the parameter structure as per the official documentation
  - top-level parameters (e.g. model, system, template) will be passed as
top-level parameters
  - other parameters will be sent in `options` unless `options` is provided

![image](https://github.com/langchain-ai/langchain/assets/17451563/d14715d9-9701-4ee3-b44b-89fffea62389)

## Tests
- Test that top-level parameters are handled properly
- Test that parameters that are not top-level parameters are handled as
options
- Test that if `options` is provided, it is passed as is
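A sketch of the parameter split described above (not the library's actual
code; the set of top-level keys is an assumption): documented top-level keys
stay at the top level, everything else is nested under `options`, and a
caller-provided `options` dict is passed through as is:

```python
TOP_LEVEL_KEYS = {"model", "system", "template", "format", "keep_alive"}  # assumed set

def build_payload(**params):
    if "options" in params:
        return params  # caller-provided options are passed through untouched
    payload = {k: v for k, v in params.items() if k in TOP_LEVEL_KEYS}
    payload["options"] = {k: v for k, v in params.items() if k not in TOP_LEVEL_KEYS}
    return payload

print(build_payload(model="llama2", system="You are terse.", temperature=0.1, num_ctx=4096))
# {'model': 'llama2', 'system': 'You are terse.', 'options': {'temperature': 0.1, 'num_ctx': 4096}}
```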
6 months ago
Nir Kopler 0fa06732b7
community: add new gpt-3.5-turbo-1106 finetuned for cost calculation (#16039)
**Description:** Added the new gpt-3.5-turbo-1106 model for **fine-tuned**
cost calculation.
**Issue:** no open issue found

According to OpenAI's published information, the pricing is the same as for
the older model (0613).
6 months ago
Bagatur bccb07f93e
core[patch]: simple prompt pretty printing (#15968) 6 months ago
Virat Singh eb6e385dc5
community: Add PolygonAPIWrapper and get_last_quote endpoint (#15971)
- **Description:** Added a `PolygonAPIWrapper` and an initial
`get_last_quote` endpoint, which allows us to get the last price quote
for a given `ticker`. Once merged, I can add a Polygon tool in `tools/`
for agents to use.
- **Twitter handle:** [@virattt](https://twitter.com/virattt)

The Polygon.io Stocks API provides REST endpoints that let you query the
latest market data from all US stock exchanges.
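A hypothetical usage sketch; the import path and method signature are
assumptions based on the description above, and a `POLYGON_API_KEY` is
assumed to be set in the environment:

```python
from langchain_community.utilities.polygon import PolygonAPIWrapper  # assumed path

polygon = PolygonAPIWrapper()  # assumed: reads POLYGON_API_KEY from the environment
last_quote = polygon.get_last_quote("AAPL")  # method name taken from the description
print(last_quote)
```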
6 months ago