Commit Graph

9964 Commits

Author SHA1 Message Date
Tomaz Bratanic
76a193decc
community[patch]: Add function response to graph cypher qa chain (#22690)
LLMs struggle with Graph RAG because, unlike vector RAG, you don't
provide the whole context, only the answer, and the LLM has to take it
on faith. That often doesn't work well. However, if you wrap the context
as a function response, the accuracy is much better.

btw... `union[LLMChain, Runnable]` is linting fun, that's why so many
ignores
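
A minimal sketch of the idea (not the chain's internals): present the graph query results as the response to a tool/function call rather than as plain prompt context. The question, tool name, and results below are made-up placeholders.

```python
from langchain_core.messages import AIMessage, HumanMessage, ToolMessage

question = "Which movies did Tom Hanks act in?"
graph_context = "[{'movie': 'Forrest Gump'}, {'movie': 'Cast Away'}]"  # pretend Cypher output

# Framing the context as a function/tool response makes the model treat it as
# ground truth instead of optional background text.
messages = [
    HumanMessage(content=question),
    AIMessage(
        content="",
        tool_calls=[{"name": "graph_query", "args": {"question": question}, "id": "call_1"}],
    ),
    ToolMessage(content=graph_context, tool_call_id="call_1"),
]
# `messages` can then be sent to a tool-calling chat model to produce the final answer.
```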
2024-06-10 13:52:17 -07:00
X-HAN
34edfe4a16
community[minor]: add Volcengine Rerank (#22700)
**Description:** this PR adds Volcengine Rerank capability to LangChain;
the Volcengine Rerank API is documented
[here](https://www.volcengine.com/docs/84313/1254474) and
[here](https://www.volcengine.com/docs/84313/1254605).
[Volcengine](https://www.volcengine.com/) is a cloud service platform
developed by ByteDance, the parent company of TikTok. You can obtain
Volcengine API AK/SK from
[here](https://www.volcengine.com/docs/84313/1254553).

**Dependencies:** VolcengineRerank depends on `volcengine` python
package.

**Twitter handle:** my twitter/x account is https://x.com/LastMonopoly
and I'd like a mention, thank you!


**Tests and docs**
  1. integration test: `test_volcengine_rerank.py`
  2. example notebook: `volcengine_rerank.ipynb`

**Lint and test**: I have run `make format`, `make lint` and `make test`
from the root of the package I've modified.
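
A hedged usage sketch; the import path below and the assumption that AK/SK are picked up from configuration are mine, so check the example notebook for the actual wiring.

```python
from langchain_core.documents import Document
from langchain_community.document_compressors import VolcengineRerank

reranker = VolcengineRerank()  # assumes AK/SK come from env/config; see the notebook

docs = [
    Document(page_content="Volcengine is a cloud platform developed by ByteDance."),
    Document(page_content="LangChain supports many reranking providers."),
]
reranked = reranker.compress_documents(docs, query="What is Volcengine?")
for doc in reranked:
    # rerankers typically attach a relevance score to each document's metadata
    print(doc.metadata.get("relevance_score"), doc.page_content)
```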
2024-06-10 13:41:05 -07:00
Prakul
9eacce9356
docs:Update reference to langchain-mongodb (#22705)
**Description**: Update reference to langchain-mongodb
2024-06-10 13:35:21 -07:00
Ikko Eltociear Ashimine
4197c9c85f
docs: update azure_container_apps_dynamic_sessions_data_analyst.ipynb (#22718)
colum -> column
2024-06-10 13:33:40 -07:00
Jacob Lee
e4183cbc4e
docs[patch]: Add caution on OpenAI LLMs integration page (#22754)
@baskaryan do we like?

<img width="1040" alt="Screenshot 2024-06-10 at 12 16 45 PM"
src="https://github.com/langchain-ai/langchain/assets/6952323/8893063f-1acf-4a56-9ee5-a8a2b1560277">
2024-06-10 13:27:22 -07:00
Mohammad Mohtashim
c3cce98d86
community[patch]: Small Fix in OutlookMessageLoader (Close the Message once Open) (#22744)
- **Description:** A very small fix where we close the message after it
is opened
- **Issue:** #22729
2024-06-10 13:08:39 -07:00
Bagatur
86a3f6edf1
docs: standardize ChatVertexAI (#22686)
Part of #22296. Part two of
https://github.com/langchain-ai/langchain-google/pull/287
2024-06-10 12:50:50 -07:00
ccurme
f9fdca6cc2
openai: add parallel_tool_calls to api ref (#22746)
![Screenshot 2024-06-10 at 1 41 24
PM](https://github.com/langchain-ai/langchain/assets/26529506/2626bf9c-41c6-4431-b2e1-f59de1e4e468)
2024-06-10 17:44:43 +00:00
Max Mulatz
058a64c563
Community[minor]: Add language parser for Elixir (#22742)
Hi 👋 

First off, thanks a ton for your work on this 💚 Really appreciate what
you're providing here for the community.

## Description

This PR adds a basic language parser for the
[Elixir](https://elixir-lang.org/) programming language. The parser code
is based on the approach outlined in
https://github.com/langchain-ai/langchain/pull/13318: it uses
`tree-sitter` under the hood and aligns with all the other `tree-sitter`
based parsers added in that PR.

The `CHUNK_QUERY` I'm using here is probably not the most sophisticated
one, but it worked for my application. It's a starting point for providing
"core" parsing support for Elixir in LangChain. It enables people to use
the language parser in real-world applications, which may then lead
to further tweaking of the queries. I consider this PR just the
groundwork.

- **Dependencies:** requires `tree-sitter` and `tree-sitter-languages`
from the extended dependencies
- **Twitter handle:** `@bitcrowd`
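
A usage sketch, assuming `"elixir"` is now accepted by `LanguageParser` (the project path and glob are placeholders):

```python
from langchain_community.document_loaders.generic import GenericLoader
from langchain_community.document_loaders.parsers import LanguageParser

# Load an Elixir codebase and split it with the new tree-sitter based parser.
loader = GenericLoader.from_filesystem(
    "./my_elixir_project",          # placeholder path
    glob="**/*",
    suffixes=[".ex", ".exs"],
    parser=LanguageParser(language="elixir"),
)
docs = loader.load()
for doc in docs:
    print(doc.metadata.get("content_type"), len(doc.page_content))
```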

## Checklist

- [x] **PR title**: "package: description"
- [x] **Add tests and docs**
- [x] **Lint and test**: Run `make format`, `make lint` and `make test`
from the root of the package(s) you've modified.

2024-06-10 15:56:57 +00:00
wangda
28e956735c
docs:Correcting spelling mistakes in readme (#22664)
Signed-off-by: zhangwangda <zhangwangda94@163.com>
2024-06-10 15:33:41 +00:00
Gin
6f54abc252
docs: Add a missing dot in concepts.mdx (#22677) 2024-06-10 15:30:56 +00:00
Philippe PRADOS
2d4689d721
langchain[minor]: Add pgvector to list of supported vectorstores in self query retriever (#22678)
The fact that we outsourced pgvector to another project has an
unintended effect. The mapping dictionary found by
`_get_builtin_translator()` cannot recognize the new version of pgvector
because it comes from another package.
`SelfQueryRetriever` no longer knows `PGVector`.

I propose to fix this by creating a global dictionary that various
database implementations can populate. Importing `langchain_postgres`
would then register the `PGVector` mapping.

For the moment, though, I'm just adding a lazy import.

Furthermore, the implementation of `_get_builtin_translator()`
reconstructs the `BUILTIN_TRANSLATORS` variable on each invocation,
which is not very efficient. A global map would be an optimization.
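
A sketch of the dictionary-based registration being proposed (illustrative only; this PR itself only adds the lazy import):

```python
from typing import Callable, Dict

# Integration packages (e.g. langchain_postgres) would call register_translator()
# at import time, so _get_builtin_translator() no longer needs to know about them.
TRANSLATOR_REGISTRY: Dict[type, Callable[[], object]] = {}

def register_translator(vectorstore_cls: type, translator_factory: Callable[[], object]) -> None:
    TRANSLATOR_REGISTRY[vectorstore_cls] = translator_factory

def get_translator(vectorstore: object) -> object:
    for cls, factory in TRANSLATOR_REGISTRY.items():
        if isinstance(vectorstore, cls):
            return factory()
    raise ValueError(f"No self-query translator registered for {type(vectorstore)}")
```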

- **Twitter handle:** pprados

@eyurtsev, can you review this PR? And unlock the PR [Add async mode for
pgvector](https://github.com/langchain-ai/langchain-postgres/pull/32)
and PR [community[minor]: Add SQL storage
implementation](https://github.com/langchain-ai/langchain/pull/22207)?

Are you in favour of a global dictionary-based implementation of
Translator?
2024-06-10 11:27:47 -04:00
Lei Zhang
5ba1899cd7
infra: Scheduled GitHub Actions to run only on the upstream repository (#22707)
**Description:** Scheduled GitHub Actions to run only on the upstream
repository

**Issue:** Fixes #22706 

**Twitter handle:** @coolbeevip
2024-06-10 11:07:42 -04:00
Prakul
3f76c9e908
docs: Update MongoDB information in llm_caching (#22708)
**Description:** Update MongoDB information in llm_caching
2024-06-10 11:05:55 -04:00
fzowl
c1fced9269
docs: VoyageAI new embedding and reranking models (#22719) 2024-06-09 09:12:43 -07:00
Enzo Poggio
8f019e91d7
community[patch]: Use Custom Logger Instead of Root Logger in get_user_agent Function (#22691)
## Description
This PR addresses a logging inconsistency in the `get_user_agent`
function. Previously, the function was using the root logger to log a
warning message when the "USER_AGENT" environment variable was not set.
This bypassed the custom logger `log` that was created at the start of
the module, leading to potential inconsistencies in logging behavior.

Changes:
- Replaced `logging.warning` with `log.warning` in the `get_user_agent`
function to ensure that the custom logger is used.

This change ensures that all logging in the `get_user_agent` function
respects the configurations of the custom logger, leading to more
consistent and predictable logging behavior.
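
A minimal sketch of the pattern (not the exact source of `get_user_agent`): the module-level logger created at import time is the one that emits the warning.

```python
import logging
import os

log = logging.getLogger(__name__)  # custom module logger, configured by the application

def get_user_agent() -> str:
    env_user_agent = os.environ.get("USER_AGENT")
    if not env_user_agent:
        # log.warning respects handlers and levels set on this module's logger;
        # logging.warning would route through the root logger instead.
        log.warning("USER_AGENT environment variable not set, consider setting it.")
        return "DefaultLangchainUserAgent"
    return env_user_agent
```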

## Dependencies

None

## Issue 

None

## Tests and docs

☝🏻 see description


## `make format`, `make lint` & `cd libs/community; make test`

```shell
> make format 
poetry run ruff format docs templates cookbook
1417 files left unchanged
poetry run ruff check --select I --fix docs templates cookbook
All checks passed!
```

```shell
> make lint
poetry run ruff check docs templates cookbook
All checks passed!
poetry run ruff format docs templates cookbook --diff
1417 files already formatted
poetry run ruff check --select I docs templates cookbook
All checks passed!
git grep 'from langchain import' docs/docs templates cookbook | grep -vE 'from langchain import (hub)' && exit 1 || exit 0
```

~cd libs/community; make test~ too many dependencies for integration tests ...

```shell
>  poetry run pytest tests/unit_tests   
....
==== 884 passed, 466 skipped, 4447 warnings in 15.93s ====
```

I choose you randomly : @ccurme
2024-06-08 02:33:07 +00:00
Philippe PRADOS
9aabb446c5
community[minor]: Add SQL storage implementation (#22207)
Hello @eyurtsev

- package: langchain-community
- **Description**: Add a SQL implementation for the docstore. A new
implementation, in line with my other PRs ([async
PGVector](https://github.com/langchain-ai/langchain-postgres/pull/32),
[SQLChatMessageMemory](https://github.com/langchain-ai/langchain/pull/22065))
- Twitter handle: pprados
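
A hedged usage sketch: the class name `SQLStore` and its constructor arguments (`namespace`, `db_url`) are assumptions here; the point is that it exposes the standard key-value store interface (`mset` / `mget` / `mdelete` / `yield_keys`).

```python
from langchain_community.storage import SQLStore  # assumed import path

# Assumed constructor: a namespace plus any SQLAlchemy-compatible database URL.
# Depending on the implementation, a schema-creation step may be required first.
store = SQLStore(namespace="my_docs", db_url="sqlite:///docstore.db")

store.mset([("doc-1", b"first document"), ("doc-2", b"second document")])
print(store.mget(["doc-1", "doc-2"]))
print(list(store.yield_keys()))
store.mdelete(["doc-1"])
```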

---------

Signed-off-by: ChengZi <chen.zhang@zilliz.com>
Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
Co-authored-by: Piotr Mardziel <piotrm@gmail.com>
Co-authored-by: ChengZi <chen.zhang@zilliz.com>
Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2024-06-07 21:17:02 +00:00
Nithish Raghunandanan
f2f0e0e13d
couchbase: Add the initial version of Couchbase partner package (#22087)
Co-authored-by: Nithish Raghunandanan <nithishr@users.noreply.github.com>
Co-authored-by: Erick Friis <erick@langchain.dev>
2024-06-07 14:04:08 -07:00
Cahid Arda Öz
6c07eb0c12
community[minor]: Add UpstashRatelimitHandler (#21885)
Adding an `UpstashRatelimitHandler` callback for rate limiting based on
the number of chain invocations or LLM token usage.

For more details, see [upstash/ratelimit-py
repository](https://github.com/upstash/ratelimit-py) or the notebook
guide included in this PR.
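
A hedged sketch based on the description above; the import paths and constructor parameters shown (`identifier`, `request_ratelimit`) are assumptions, so refer to the notebook guide for the actual API.

```python
from upstash_ratelimit import FixedWindow, Ratelimit
from upstash_redis import Redis

from langchain_community.callbacks import UpstashRatelimitHandler
from langchain_core.runnables import RunnableLambda

# Limit each identifier (e.g. a user id) to 10 chain invocations per minute.
ratelimit = Ratelimit(redis=Redis.from_env(), limiter=FixedWindow(max_requests=10, window=60))
handler = UpstashRatelimitHandler(identifier="user-123", request_ratelimit=ratelimit)

chain = RunnableLambda(lambda x: x.upper())
# Pass the handler per invocation so each call is counted against the limit.
print(chain.invoke("hello", config={"callbacks": [handler]}))
```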

Twitter handle: @cahidarda

---------

Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2024-06-07 21:02:06 +00:00
Erick Friis
9b3ce16982
docs: remove nonexistent headings (#22685) 2024-06-07 20:02:06 +00:00
Erick Friis
9e03864d64
core: add error message for non-structured llm to StructuredPrompt (#22684)
Previously this was the blank `NotImplementedError` from
`BaseLanguageModel.with_structured_output`.
2024-06-07 19:42:09 +00:00
Jacob Lee
02ff78deb8
docs[patch]: Adds LangGraph and LangSmith links, adds more crosslinks between pages (#22656)
@baskaryan @hwchase17
2024-06-07 10:22:29 -07:00
Mateusz Szewczyk
c3a8716589
docs: Updated product version in Embeddings notebook (#22062) 2024-06-07 08:11:03 -07:00
ccurme
f32d57f6f0
anthropic: refactor streaming to use events api; add streaming usage metadata (#22628)
- Refactor streaming to use raw events;
- Add `stream_usage` class attribute and kwarg to stream methods that,
if True, will include separate chunks in the stream containing usage
metadata.

There are two ways to implement streaming with anthropic's python sdk.
They have slight differences in how they surface usage metadata.
1. [Use helper
functions](https://github.com/anthropics/anthropic-sdk-python?tab=readme-ov-file#streaming-helpers).
This is what we are doing now.
```python
count = 1
with client.messages.stream(**params) as stream:
    for text in stream.text_stream:
        snapshot = stream.current_message_snapshot
        print(f"{count}: {snapshot.usage} -- {text}")
        count = count + 1

final_snapshot = stream.get_final_message()
print(f"{count}: {final_snapshot.usage}")
```
```
1: Usage(input_tokens=8, output_tokens=1) -- Hello
2: Usage(input_tokens=8, output_tokens=1) -- !
3: Usage(input_tokens=8, output_tokens=1) --  How
4: Usage(input_tokens=8, output_tokens=1) --  can
5: Usage(input_tokens=8, output_tokens=1) --  I
6: Usage(input_tokens=8, output_tokens=1) --  assist
7: Usage(input_tokens=8, output_tokens=1) --  you
8: Usage(input_tokens=8, output_tokens=1) --  today
9: Usage(input_tokens=8, output_tokens=1) -- ?
10: Usage(input_tokens=8, output_tokens=12)
```
To do this correctly, we need to emit a new chunk at the end of the
stream containing the usage metadata.

2. [Handle raw
events](https://github.com/anthropics/anthropic-sdk-python?tab=readme-ov-file#streaming-responses)
```python
stream = client.messages.create(**params, stream=True)
count = 1
for event in stream:
    print(f"{count}: {event}")
    count = count + 1
```
```
1: RawMessageStartEvent(message=Message(id='msg_01Vdyov2kADZTXqSKkfNJXcS', content=[], model='claude-3-haiku-20240307', role='assistant', stop_reason=None, stop_sequence=None, type='message', usage=Usage(input_tokens=8, output_tokens=1)), type='message_start')
2: RawContentBlockStartEvent(content_block=TextBlock(text='', type='text'), index=0, type='content_block_start')
3: RawContentBlockDeltaEvent(delta=TextDelta(text='Hello', type='text_delta'), index=0, type='content_block_delta')
4: RawContentBlockDeltaEvent(delta=TextDelta(text='!', type='text_delta'), index=0, type='content_block_delta')
5: RawContentBlockDeltaEvent(delta=TextDelta(text=' How', type='text_delta'), index=0, type='content_block_delta')
6: RawContentBlockDeltaEvent(delta=TextDelta(text=' can', type='text_delta'), index=0, type='content_block_delta')
7: RawContentBlockDeltaEvent(delta=TextDelta(text=' I', type='text_delta'), index=0, type='content_block_delta')
8: RawContentBlockDeltaEvent(delta=TextDelta(text=' assist', type='text_delta'), index=0, type='content_block_delta')
9: RawContentBlockDeltaEvent(delta=TextDelta(text=' you', type='text_delta'), index=0, type='content_block_delta')
10: RawContentBlockDeltaEvent(delta=TextDelta(text=' today', type='text_delta'), index=0, type='content_block_delta')
11: RawContentBlockDeltaEvent(delta=TextDelta(text='?', type='text_delta'), index=0, type='content_block_delta')
12: RawContentBlockStopEvent(index=0, type='content_block_stop')
13: RawMessageDeltaEvent(delta=Delta(stop_reason='end_turn', stop_sequence=None), type='message_delta', usage=MessageDeltaUsage(output_tokens=12))
14: RawMessageStopEvent(type='message_stop')
```

Here we implement the second option, in part because it should make
things easier when implementing streaming tool calls in the near future.

This would add two new chunks to the stream (one at the beginning and
one at the end) with blank content and containing usage metadata. We
add kwargs to the stream methods and a class attribute allowing for this
behavior to be toggled. I enabled it by default. If we merge this we can
add the same kwargs / attribute to OpenAI.

Usage:
```python
from langchain_anthropic import ChatAnthropic

model = ChatAnthropic(
    model="claude-3-haiku-20240307",
    temperature=0
)

full = None
for chunk in model.stream("hi"):
    full = chunk if full is None else full + chunk
    print(chunk)

print(f"\nFull: {full}")
```
```
content='' id='run-8a20843f-25c7-4025-ad72-9add395899e3' usage_metadata={'input_tokens': 8, 'output_tokens': 0, 'total_tokens': 8}
content='Hello' id='run-8a20843f-25c7-4025-ad72-9add395899e3'
content='!' id='run-8a20843f-25c7-4025-ad72-9add395899e3'
content=' How' id='run-8a20843f-25c7-4025-ad72-9add395899e3'
content=' can' id='run-8a20843f-25c7-4025-ad72-9add395899e3'
content=' I' id='run-8a20843f-25c7-4025-ad72-9add395899e3'
content=' assist' id='run-8a20843f-25c7-4025-ad72-9add395899e3'
content=' you' id='run-8a20843f-25c7-4025-ad72-9add395899e3'
content=' today' id='run-8a20843f-25c7-4025-ad72-9add395899e3'
content='?' id='run-8a20843f-25c7-4025-ad72-9add395899e3'
content='' id='run-8a20843f-25c7-4025-ad72-9add395899e3' usage_metadata={'input_tokens': 0, 'output_tokens': 12, 'total_tokens': 12}

Full: content='Hello! How can I assist you today?' id='run-8a20843f-25c7-4025-ad72-9add395899e3' usage_metadata={'input_tokens': 8, 'output_tokens': 12, 'total_tokens': 20}
```
2024-06-07 13:21:46 +00:00
Bagatur
235d91940d
community[patch]: Release 0.2.4 (#22643) 2024-06-06 17:47:44 -07:00
Francesco Kruk
344adad056
docs: Update jina embedding notebook to include multimodal capability (#22594)
After merging the [PR #22416 to include Jina AI multimodal
capabilities](https://github.com/langchain-ai/langchain/pull/22416), we
updated the Jina AI embedding notebook accordingly.
2024-06-07 00:02:20 +00:00
William FH
be79ce9336
[Core] Unified Enable/Disable Tracing (#22576) 2024-06-06 16:54:35 -07:00
Leonid Ganeline
57c1239643
docs: arxiv page update (#22574)
Added a link to search arXiv papers that reference LangChain.
Updated the table with a better format (no more horizontal scrolling).
2024-06-06 16:51:02 -07:00
Bagatur
fe2e5a3b74
langchain[patch]: Release 0.2.3 (#22644) 2024-06-06 16:29:18 -07:00
Erick Friis
a24a9c6427
multiple: get rid of pyproject extras (#22581)
They cause `poetry lock` to take a ton of time, and `uv pip install` can
resolve the constraints from these toml files in trivial time
(addressing the problem from #19153)

This allows us to properly upgrade lockfile dependencies moving forward,
which revealed some issues that were either fixed or type-ignored (see
file comments)
2024-06-06 15:45:22 -07:00
Bagatur
4367e89c9a
core[patch]: Release 0.2.5 (#22642) 2024-06-06 15:44:26 -07:00
Eugene Yurtsev
28f744c1f5
core[patch]: Correctly order parent ids in astream events (from root to immediate parent), add defensive check for cycles (#22637)
This PR makes two changes:

1. Fixes the order of parent IDs to be from root to immediate parent
2. Adds a simple defensive check for cycles
2024-06-06 20:37:52 +00:00
Satyam Kumar
835926153b
updated oracleai_demo.ipynb (#22635)
The outer try/except block handles connection errors, and the inner
try/except block handles SQL execution errors, providing detailed error
messages for both.
import oracledb  # python-oracledb driver

try:
    conn = oracledb.connect(user=username, password=password, dsn=dsn)
    print("Connection successful!")

    cursor = conn.cursor()
    try:
        cursor.execute(
            """
            begin
                -- Drop user
                begin
                    execute immediate 'drop user testuser cascade';
                exception
                    when others then
                        dbms_output.put_line('Error dropping user: ' || SQLERRM);
                end;
            end;
            """
        )
    except oracledb.DatabaseError as e:
        # inner handler: SQL execution errors
        print("SQL execution error:", e)
except oracledb.DatabaseError as e:
    # outer handler: connection errors
    print("Connection error:", e)

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2024-06-06 20:29:24 +00:00
Eugene Yurtsev
035a9c9609
core[minor]: Add parent_ids to astream_events API (#22563)
Include a list of parent ids for each event in astream events.
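
A small example of what consumers see (assuming the v2 events API): each event carries a `parent_ids` list, ordered from the root run to the immediate parent (per the follow-up fix in #22637 above).

```python
import asyncio

from langchain_core.runnables import RunnableLambda

chain = RunnableLambda(lambda x: x + 1) | RunnableLambda(lambda x: x * 2)

async def main() -> None:
    async for event in chain.astream_events(1, version="v2"):
        # Nested runs (the individual lambdas) list the enclosing sequence's run id here.
        print(event["event"], event["name"], event.get("parent_ids"))

asyncio.run(main())
```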
2024-06-06 16:14:28 -04:00
Tomaz Bratanic
67e58fdc2e
docs[patch]: Fix diffbot docs (#22584) 2024-06-06 16:08:59 -04:00
Eugene Yurtsev
6b8963ad92
docs: Add information about run time binding values to tools (#22623)
Add a how-to guide that shows a design pattern for creating tools at run time.
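
A sketch of the kind of pattern the guide covers (the function and values here are hypothetical): build the tool inside a factory so per-request values are bound at creation time rather than supplied by the model.

```python
from langchain_core.tools import tool

def make_user_tool(user_id: str):
    """Create a tool at run time that closes over a per-request user id."""

    @tool
    def list_files() -> str:
        """List the files owned by the current user."""
        # user_id is fixed when the tool is created, so the model cannot override it.
        return f"files for {user_id}"

    return list_files

user_tool = make_user_tool("user-123")
print(user_tool.invoke({}))
```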
2024-06-06 16:05:34 -04:00
CharlesCNorton
aa49163bdf
docs[patch]: typo in AutoGPT example notebook (#22631)
Corrected a typo in the AutoGPT example notebook. Changed "Needed synce
jupyter runs an async eventloop" to "Needed since Jupyter runs an async
event loop".

2024-06-06 16:05:11 -04:00
CharlesCNorton
ffe75d1e46
docs: typo in dev container documentation (#22630)
Removed an extra space before the period in the "Click **Create
codespace on master**." line.

2024-06-06 16:04:48 -04:00
Nicolas Nkiere
51005e2776
core[minor]: Add an async root listener and with_alisteners method (#22151)
- [x] **Adding AsyncRootListener**: "langchain_core: Adding
AsyncRootListener"

- **Description:** Adding an `AsyncBaseTracer`, an `AsyncRootListener`, and a
`with_alisteners` method. This enables binding async root listeners to
runnables, which is currently only supported for sync listeners (see the
sketch below).
- **Issue:** None
- **Dependencies:** None

- [x] **Add tests and docs**: Added unit tests and an example snippet
within the function description of `with_alisteners`


- [x] **Lint and test**: Run make format_diff, make lint_diff and make
test
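
A hedged usage sketch, assuming `with_alisteners` mirrors the existing `with_listeners` API but takes async callables that receive the `Run` object:

```python
import asyncio

from langchain_core.runnables import RunnableLambda

async def log_start(run) -> None:
    print("started:", run.name)

async def log_end(run) -> None:
    print("finished:", run.name)

chain = RunnableLambda(lambda x: x * 2).with_alisteners(on_start=log_start, on_end=log_end)

async def main() -> None:
    print(await chain.ainvoke(21))

asyncio.run(main())
```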
2024-06-06 16:03:44 -04:00
seyf97
2904c50cd5
openai[patch]: correct grammar in exception message in embeddings/base.py (#22629)
Correct the grammar error in the ValueError raised for the missing transformers package.
2024-06-06 18:55:04 +00:00
Anush
80560419b0
qdrant[patch]: Make path optional in from_existing_collection() (#21875)
## Description

The `path` param is used to specify the local persistence directory,
which isn't required if using Qdrant server.

This is a breaking but necessary change.
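
A hedged sketch of what this enables (the embedding class and keyword names are assumptions for illustration): connect to a running Qdrant server without supplying the local-persistence `path`.

```python
from langchain_community.embeddings import FakeEmbeddings
from langchain_qdrant import Qdrant

store = Qdrant.from_existing_collection(
    embedding=FakeEmbeddings(size=128),   # stand-in embedding model
    collection_name="my_collection",
    url="http://localhost:6333",          # remote server, so no local `path` needed
)
print(store.similarity_search("hello", k=1))
```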
2024-06-06 10:37:08 -07:00
ccurme
b57aa89f34
multiple: implement ls_params (#22621)
implement ls_params for ai21, fireworks, groq.
2024-06-06 16:51:37 +00:00
Xiangrui Meng
f26ab93df8
community: support Databricks Unity Catalog functions as LangChain tools (#22555)
This PR adds support for using Databricks Unity Catalog functions as
LangChain tools, which run inside a Databricks SQL warehouse.

* An example notebook is provided.
2024-06-06 09:38:50 -07:00
ccurme
c1ef731503
anthropic: update attribute name and alias (#22625)
update name to `stop_sequences` and alias to `stop` (instead of the
other way around), since `stop_sequences` is the name used by anthropic.
2024-06-06 12:29:10 -04:00
lucasiscovici
05bf98b2f9
community[patch]: pgvector replace nin_ by not_in (#22619)
- [ ] **community**: "pgvector: replace `nin_` with `not_in`"

- [ ] **PR message**: `nin_` does not exist in the SQLAlchemy ORM; the
correct operator is `not_in` (see the sketch below).
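
For reference, a minimal SQLAlchemy example of the operator in question (the table here is illustrative, not the actual pgvector model):

```python
from sqlalchemy import Column, Integer, String, select
from sqlalchemy.orm import declarative_base

Base = declarative_base()

class Embedding(Base):
    __tablename__ = "embeddings"
    id = Column(Integer, primary_key=True)
    collection = Column(String)

# The ORM column operator is `not_in` (older spelling: `notin_`); `nin_` does not exist,
# which is what caused the failure this PR fixes.
stmt = select(Embedding).where(Embedding.collection.not_in(["a", "b"]))
print(stmt)
```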
2024-06-06 12:17:22 -04:00
ccurme
3999761201
multiple: add stop attribute (#22573) 2024-06-06 12:11:52 -04:00
ccurme
e08879147b
Revert "anthropic: stream token usage" (#22624)
Reverts langchain-ai/langchain#20180
2024-06-06 12:05:08 -04:00
Bagatur
0d495f3f63
anthropic: stream token usage (#20180)
open to other ideas
<img width="1181" alt="Screenshot 2024-04-08 at 5 34 08 PM"
src="https://github.com/langchain-ai/langchain/assets/22008038/03eb11c4-5eb5-43e3-9109-a13f76098fa4">

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2024-06-06 11:51:34 -04:00
liuzc9
e0e40f3f63
docs: Fix typo in llmonitor.md (#22590) 2024-06-06 15:26:51 +00:00
Bagatur
feb73d4281
docs: Add ChatGoogleGenerativeAI to model feat table (#22617) 2024-06-06 08:07:13 -07:00