Commit Graph

799 Commits

Author SHA1 Message Date
Bagatur
8e4dbae428
Add fireworks chat model (#11117) 2023-09-27 08:22:12 -07:00
Harrison Chase
6b4928ad96
fix-lcel-notebooks (#11111)
fix some missing imports/naming
2023-09-27 06:36:11 -07:00
Cynthia Yang
6dd44ff1c0
Refactor Fireworks and add ChatFireworks (#3) (#10597)
Description 
* Refactor Fireworks within LangChain LLMs.
* Remove FireworksChat within LangChain LLMs.
* Add ChatFireworks (which uses the chat completion API) to LangChain chat
models.
* Users have to install `fireworks-ai` and register an API key to use
the API.

Issue - Not applicable
Dependencies - None
Tag maintainer - @rlancemartin @baskaryan
2023-09-26 20:11:55 -07:00
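A minimal sketch of using the new `ChatFireworks` model described in the commit above; the model id and the `FIREWORKS_API_KEY` environment variable name are assumptions based on Fireworks' conventions:

```
import os
from langchain.chat_models import ChatFireworks
from langchain.schema import HumanMessage, SystemMessage

os.environ["FIREWORKS_API_KEY"] = "<your-fireworks-api-key>"  # assumed env var name

# Model id follows Fireworks' "accounts/.../models/..." naming; adjust to one you can access.
chat = ChatFireworks(model="accounts/fireworks/models/llama-v2-13b-chat")
reply = chat([
    SystemMessage(content="You are a concise assistant."),
    HumanMessage(content="Name one use case for a chat model."),
])
print(reply.content)
```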
Joseph McElroy
175ef0a55d
[ElasticsearchStore] Enable custom Bulk Args (#11065)
This enables bulk args like `chunk_size` to be passed down from the
ingest methods (`from_texts`, `from_documents`) to the bulk API.

This helps alleviate issues where bulk importing a large number of
documents into Elasticsearch was resulting in a timeout.

Contribution Shoutout
- @elastic

- [x] Updated Integration tests

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
2023-09-26 12:53:50 -07:00
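A sketch of how the pass-through might be used, assuming the new parameter is exposed as a `bulk_kwargs` dict on the ingest methods:

```
from langchain.docstore.document import Document
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import ElasticsearchStore

docs = [Document(page_content=f"document {i}") for i in range(10_000)]

# bulk_kwargs (assumed parameter name) is forwarded to the Elasticsearch bulk
# helper, so large ingests can use smaller chunks instead of timing out.
store = ElasticsearchStore.from_documents(
    docs,
    OpenAIEmbeddings(),
    es_url="http://localhost:9200",
    index_name="big-ingest",
    bulk_kwargs={"chunk_size": 200},
)
```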
Leonid Ganeline
21199cc7b4
📖 docs: fixed integrations/document loaders toc (#9281)
Fixed navbar:
- renamed several files, so ToC is sorted correctly
- made ToC items consistent: formatted several titles
- added several links
- reformatted several docs to a consistent format
- renamed several files (removed `_example` suffix)
- added renamed files to the `docs/docs_skeleton/vercel.json`
2023-09-26 09:47:37 -07:00
Bagatur
0ea384d575
fix multiple chains lcel how to (#11074) 2023-09-26 08:39:02 -07:00
William FH
9c5eca92e4
Update notebook deps (#11053) 2023-09-25 22:41:29 -07:00
William FH
448426a6ac
Add collab link (#11052) 2023-09-25 22:35:25 -07:00
William FH
4aec587979
Update LangSmith Walkthrough (#11043) 2023-09-25 22:32:56 -07:00
Tomaz Bratanic
0625ab7a9e
Filtering graph schema for Cypher generation (#10577)
Sometimes you don't want the LLM to be aware of the whole graph schema,
and want it to ignore parts of the graph when it is constructing Cypher
statements.
2023-09-25 14:14:15 -07:00
Palau
89ef440c14
Kay retriever (#10657)
- **Description**: Adding retrievers for [kay.ai](https://kay.ai) and
SEC filings powered by Kay and Cybersyn. Kay provides context as a
service: it's an API built for RAG.
- **Issue**: N/A
- **Dependencies**: Just added a dep to the
[kay](https://pypi.org/project/kay/) package
- **Tag maintainer**: @baskaryan @hwchase17 Discussed in slack
- **Twitter handle:** [@vishalrohra_](https://twitter.com/vishalrohra_)

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
2023-09-25 13:10:13 -07:00
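A sketch of the retriever described above, assuming it is exposed as `KayAiRetriever` with a `create` factory and reads `KAY_API_KEY` from the environment; the dataset and data-type identifiers are illustrative:

```
import os
from langchain.retrievers import KayAiRetriever

os.environ["KAY_API_KEY"] = "<your-kay-api-key>"  # assumed env var name

retriever = KayAiRetriever.create(
    dataset_id="company",          # illustrative dataset
    data_types=["10-K", "10-Q"],   # SEC filings powered by Kay and Cybersyn
    num_contexts=3,
)
docs = retriever.get_relevant_documents("What risk factors did Nvidia report?")
for d in docs:
    print(d.page_content[:80])
```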
Harrison Chase
5f13668fa0
Harrison/move vectorstore base (#11030) 2023-09-25 12:44:23 -07:00
Taqi Jaffri
b7290f01d8
Batching for hf_pipeline (#10795)
The Hugging Face pipeline in LangChain (used for locally hosted models)
does not support batching. If you send in a batch of prompts, it just
processes them serially using the base implementation of _generate:
https://github.com/docugami/langchain/blob/master/libs/langchain/langchain/llms/base.py#L1004C2-L1004C29

This PR adds support for batching in this pipeline, so that GPUs can be
fully saturated. I updated the accompanying notebook to show GPU batch
inference.

---------

Co-authored-by: Taqi Jaffri <tjaffri@docugami.com>
2023-09-25 18:23:11 +01:00
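A sketch of batched GPU inference with this change, assuming the new batch size is exposed as a `batch_size` argument on `HuggingFacePipeline`:

```
from langchain.llms import HuggingFacePipeline

# batch_size (assumed parameter name added by this PR) lets the underlying
# transformers pipeline process prompts in GPU batches instead of serially.
llm = HuggingFacePipeline.from_model_id(
    model_id="gpt2",
    task="text-generation",
    device=0,      # first GPU; use -1 for CPU
    batch_size=4,
    pipeline_kwargs={"max_new_tokens": 64},
)

prompts = [f"Question {i}: what is {i} squared?" for i in range(8)]
result = llm.generate(prompts)
print([gen[0].text for gen in result.generations])
```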
Massimiliano Pronesti
4322b246aa
docs: add vLLM chat notebook (#10993)
This PR aims at showcasing how to use vLLM's OpenAI-compatible chat API.

### Context
LangChain already supports vLLM and its OpenAI-compatible `Completion`
API. However, the `ChatCompletion` API was not aligned with OpenAI's, and
for this reason I waited for this
[PR](https://github.com/vllm-project/vllm/pull/852) to be merged before
adding this notebook to langchain.
2023-09-24 18:23:19 -07:00
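The pattern the notebook showcases is to point the standard `ChatOpenAI` client at vLLM's OpenAI-compatible server; the URL and model name below are placeholders:

```
from langchain.chat_models import ChatOpenAI
from langchain.schema import HumanMessage

# Assumes a vLLM server started with:
#   python -m vllm.entrypoints.openai.api_server --model meta-llama/Llama-2-7b-chat-hf
chat = ChatOpenAI(
    openai_api_base="http://localhost:8000/v1",  # vLLM's OpenAI-compatible endpoint
    openai_api_key="EMPTY",                      # vLLM does not check the key by default
    model="meta-llama/Llama-2-7b-chat-hf",
    temperature=0,
)
print(chat([HumanMessage(content="Say hello in one sentence.")]).content)
```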
Anar
ff732e10f8
LLMRails Embedding (#10959)
LLMRails Embedding Integration
This PR provides integration with LLMRails. Implemented here are:

langchain/embeddings/llm_rails.py
docs/extras/integrations/text_embedding/llm_rails.ipynb


Hi @hwchase17, after adding our vector store integration to LangChain with
confirmation from you and @baskaryan, we now want to add our embedding
integration.

---------

Co-authored-by: Anar Aliyev <aaliyev@mgmt.cloudnet.services>
Co-authored-by: Bagatur <baskaryan@gmail.com>
2023-09-23 16:11:02 -07:00
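A sketch of the new embedding integration, under the assumption that the module exports an `LLMRailsEmbeddings` class that reads its key from an environment variable; the class name, env var, and model id are assumptions:

```
import os
from langchain.embeddings import LLMRailsEmbeddings  # assumed export name

os.environ["LLM_RAILS_API_KEY"] = "<your-llmrails-key>"  # assumed env var name

emb = LLMRailsEmbeddings(model="embedding-english-v1")   # assumed model id
vectors = emb.embed_documents(["LangChain supports many embedding providers."])
query_vec = emb.embed_query("Which providers are supported?")
print(len(vectors[0]), len(query_vec))
```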
Michael Feil
94e31647bd
Support for Gradient.ai embedding (#10968)
Adds support for gradient.ai's embedding model.

This will remain a draft, as the code will likely be refactored with the
`pip install gradientai` Python SDK.
2023-09-23 16:10:23 -07:00
Bagatur
f0408c347f
llm feat table revision (#10947) 2023-09-22 10:29:12 -07:00
Harrison Chase
9062e36722
Harrison/agents structured (#10911) 2023-09-22 10:21:23 -07:00
Bagatur
281a332784
table fix (#10944) 2023-09-22 09:37:03 -07:00
Bagatur
5336d87c15
update feat table (#10939) 2023-09-22 09:16:40 -07:00
Greg Richardson
4eee789dd3
Docs: Using SupabaseVectorStore with existing documents (#10907)
## Description
Adds additional docs on how to use `SupabaseVectorStore` with existing
data in your DB (vs inserting new documents each time).
2023-09-22 08:18:56 -07:00
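A sketch of the pattern the new docs describe: instantiate `SupabaseVectorStore` directly against an existing table instead of re-inserting documents with `from_documents`; the table and query names follow the standard Supabase setup and are assumptions here:

```
import os
from supabase.client import create_client
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import SupabaseVectorStore

supabase = create_client(os.environ["SUPABASE_URL"], os.environ["SUPABASE_SERVICE_KEY"])

# Attach to rows already stored in the `documents` table rather than inserting new ones.
store = SupabaseVectorStore(
    client=supabase,
    embedding=OpenAIEmbeddings(),
    table_name="documents",
    query_name="match_documents",
)
print(store.similarity_search("How do I connect to existing data?", k=2))
```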
Bagatur
cab55e9bc1
add vertex prod features (#10910)
- chat vertex async
- vertex stream
- vertex full generation info
- vertex use server-side stopping
- model garden async
- update docs for all the above

In a follow-up will add:
- [ ] chat vertex full generation info
- [ ] chat vertex retries
- [ ] scheduled tests
2023-09-22 01:44:09 -07:00
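A sketch of the new streaming support, assuming the chat model exposes it through the standard Runnable `.stream()` interface and that Google Cloud credentials for Vertex AI are already configured:

```
from langchain.chat_models import ChatVertexAI
from langchain.schema import HumanMessage

chat = ChatVertexAI(model_name="chat-bison")

# Stream tokens as they arrive instead of waiting for the full response.
for chunk in chat.stream([HumanMessage(content="Write a haiku about the ocean.")]):
    print(chunk.content, end="", flush=True)
```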
Bagatur
dccc20b402
add model feat table (#10921) 2023-09-22 01:10:27 -07:00
Harrison Chase
a1ade48e8f
update agent docs (#10894) 2023-09-21 09:09:33 -07:00
Stefano Lottini
40e836c67e
added Cassandra caches to the llm_caching notebook doc (#10889)
This adds a section on usage of `CassandraCache` and
`CassandraSemanticCache` to the doc notebook about caching LLMs, as
suggested in [this
comment](https://github.com/langchain-ai/langchain/pull/9772/#issuecomment-1710544100)
on a previous merged PR.

I also spotted what looks like a mismatch between different executions
and propose a fix (line 98).

Being the result of several runs, the cell execution numbers are
somewhat scrambled, so I volunteer to refine this PR by (manually)
re-numbering the cells to restore the appearance of a single, smooth
run (for the sake of orderly execution :)
2023-09-21 08:52:52 -07:00
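A sketch of the caching pattern the new notebook section covers, assuming the documented `CassandraCache(session=..., keyspace=...)` constructor; the connection details and keyspace are placeholders:

```
import langchain
from cassandra.cluster import Cluster
from langchain.cache import CassandraCache
from langchain.llms import OpenAI

session = Cluster(["127.0.0.1"]).connect()          # placeholder Cassandra node
langchain.llm_cache = CassandraCache(session=session, keyspace="demo_keyspace")

llm = OpenAI()
llm("Tell me a one-line joke")   # first call hits the LLM
llm("Tell me a one-line joke")   # second call is served from the Cassandra cache
```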
Matvey Arye
6e02c45ca4
Add integration for Timescale Vector(Postgres) (#10650)
**Description:**
This commit adds a vector store for the Postgres-based vector database
(`TimescaleVector`).

Timescale Vector (https://www.timescale.com/ai) is PostgreSQL++ for AI
applications. It enables you to efficiently store and query billions of
vector embeddings in `PostgreSQL`:
- Enhances `pgvector` with faster and more accurate similarity search on
1B+ vectors via a DiskANN-inspired indexing algorithm.
- Enables fast time-based vector search via automatic time-based
partitioning and indexing.
- Provides a familiar SQL interface for querying vector embeddings and
relational data.

Timescale Vector scales with you from POC to production:
- Simplifies operations by enabling you to store relational metadata,
vector embeddings, and time-series data in a single database.
- Benefits from a rock-solid PostgreSQL foundation with enterprise-grade
features like streaming backups and replication, high availability and
row-level security.
- Enables a worry-free experience with enterprise-grade security and
compliance.

Timescale Vector is available on Timescale, the cloud PostgreSQL
platform. (There is no self-hosted version at this time.) LangChain
users get a 90-day free trial for Timescale Vector.

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
Co-authored-by: Avthar Sewrathan <avthar@timescale.com>
2023-09-21 07:33:37 -07:00
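A sketch of the new vector store through the standard interface; the `service_url` comes from a Timescale cloud instance and the collection name is arbitrary:

```
import os
from langchain.docstore.document import Document
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores.timescalevector import TimescaleVector

docs = [Document(page_content="Timescale Vector stores embeddings in PostgreSQL.")]

store = TimescaleVector.from_documents(
    documents=docs,
    embedding=OpenAIEmbeddings(),
    collection_name="langchain_demo",
    service_url=os.environ["TIMESCALE_SERVICE_URL"],  # from the Timescale console
)
print(store.similarity_search("Where are embeddings stored?", k=1))
```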
Michael Feil
55570e54e1
gradient.ai LLM integration (#10800)
- **Description:** This PR implements a new LLM API to
https://gradient.ai
- **Issue:** Feature request for LLM #10745 
- **Dependencies**: No additional dependencies are introduced. 
- **Tag maintainer:** I am opening this PR for visibility; once ready
for review I'll tag.

- `make format && make lint && make test` is running.
- Added an `integration` and a `mock unit` test.


Co-authored-by: michaelfeil <me@michaelfeil.eu>
Co-authored-by: Bagatur <baskaryan@gmail.com>
2023-09-21 07:29:16 -07:00
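A heavily hedged sketch of the new LLM; the export name, credential environment variables, model id, and kwarg names below are all assumptions and should be checked against the merged integration:

```
import os
from langchain.llms import GradientLLM  # assumed export name

os.environ["GRADIENT_ACCESS_TOKEN"] = "<token>"          # assumed env var name
os.environ["GRADIENT_WORKSPACE_ID"] = "<workspace-id>"   # assumed env var name

llm = GradientLLM(
    model="<base-or-fine-tuned-model-id>",               # placeholder model id
    model_kwargs={"max_generated_token_count": 128},     # assumed kwarg name
)
print(llm("What does Gradient.ai offer?"))
```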
Harrison Chase
808caca607
beef up agent docs (#10866) 2023-09-20 23:09:58 -07:00
Sharath Rajasekar
96023f94d9
Add Javelin integration (#10275)
We are introducing the Python integration for Javelin AI Gateway
(www.getjavelin.io). Javelin is an enterprise-scale, fast LLM router &
gateway. Could you please review and let us know if there is anything
missing.

Javelin AI Gateway wraps Embedding, Chat and Completion LLMs. It uses
`javelin_sdk` under the covers (`pip install javelin_sdk`).

Author: Sharath Rajasekar, Twitter: @sharathr, @javelinai

Thanks!!
2023-09-20 16:36:39 -07:00
Harrison Chase
4074ea4c41
fix databricks docs (#10858) 2023-09-20 14:36:54 -07:00
Mukit Momin
67c5950df3
Amazon Bedrock Support Streaming (#10393)
### Description

- Add support for streaming with `Bedrock` LLM and `BedrockChat` Chat
Model.
- Bedrock as of now supports streaming for the `anthropic.claude-*` and
`amazon.titan-*` models only, hence support for those has been built.
- Also increased the default `max_tokens_to_sample` for the Bedrock
`anthropic` model provider to `256` from `50` to keep in line with the
`Anthropic` defaults.
- Added examples for streaming responses to the bedrock example
notebooks.

**_NOTE:_** This PR fixes the issues mentioned in #9897 and makes that
PR redundant.
2023-09-20 11:55:38 -07:00
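A minimal sketch of the new streaming path for the `Bedrock` LLM; it assumes AWS credentials with Bedrock access are configured and uses the stdout callback to surface tokens as they arrive:

```
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler
from langchain.llms import Bedrock

llm = Bedrock(
    model_id="anthropic.claude-v2",                  # one of the supported streaming models
    streaming=True,
    callbacks=[StreamingStdOutCallbackHandler()],    # print tokens as they arrive
    model_kwargs={"max_tokens_to_sample": 256},
)
llm("Explain what response streaming is in two sentences.")
```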
Bagatur
095f300bf6
add lcel how to index (#10850) 2023-09-20 10:19:43 -07:00
DanielZzz
ebe08412ad
fix: chat_models Qianfan not compatible with SystemMessage (#10642)
- **Description:** QianfanEndpoint has a bug with SystemMessages: when a
`SystemMessage` is included in the messages passed to
`chat_models.QianfanEndpoint`, a `TypeError` is raised.
  - **Issue:** #10643
  - **Dependencies:** 
  - **Tag maintainer:** @baskaryan
  - **Twitter handle:** no
2023-09-19 22:35:51 -07:00
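A sketch that exercises the fixed path, assuming the chat model is exported as `QianfanChatEndpoint` and that Qianfan credentials are provided via the `QIANFAN_AK`/`QIANFAN_SK` environment variables:

```
import os
from langchain.chat_models import QianfanChatEndpoint
from langchain.schema import HumanMessage, SystemMessage

os.environ["QIANFAN_AK"] = "<access-key>"   # assumed env var names
os.environ["QIANFAN_SK"] = "<secret-key>"

chat = QianfanChatEndpoint(model="ERNIE-Bot")
reply = chat([
    SystemMessage(content="You are a helpful assistant that answers briefly."),
    HumanMessage(content="Hello!"),   # previously raised TypeError with a SystemMessage present
])
print(reply.content)
```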
Aashish Saini
7395c28455
corrected spelling (#62) (#10816) 2023-09-19 21:41:49 -07:00
zhanghexian
0abe996409
add clustered vearch in langchain (#10771)
---------

Co-authored-by: zhanghexian1 <zhanghexian1@jd.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
2023-09-19 21:22:23 -07:00
HeTaoPKU
f505320a73
Add Minimax chat model (#10776)
resolve the merging issues for
https://github.com/langchain-ai/langchain/pull/6757

---------

Co-authored-by: 何涛 <taohe@bytedance.com>
2023-09-19 20:43:49 -07:00
Anar
c656a6b966
LLMRails (#10796)
### LLMRails Integration
This PR provides integration with LLMRails. Implemented here are:

langchain/vectorstores/llm_rails.py
tests/integration_tests/vectorstores/test_llm_rails.py
docs/extras/integrations/vectorstores/llm-rails.ipynb

---------

Co-authored-by: Anar Aliyev <aaliyev@mgmt.cloudnet.services>
Co-authored-by: Bagatur <baskaryan@gmail.com>
2023-09-19 20:33:33 -07:00
Harrison Chase
5d0493f652
improve notebook (#10804) 2023-09-19 16:51:39 -07:00
Harrison Chase
d2bee34d4c
Harrison/add vald (#10807)
Co-authored-by: datelier <57349093+datelier@users.noreply.github.com>
2023-09-19 16:42:52 -07:00
Mateusz Wosinski
a29cd89923
Synthetic data generation (#9759)
### Description

Implements synthetic data generation with the fields and preferences
given by the user. Adds a showcase notebook.
A corresponding prompt was proposed for langchain-hub.

### Example

```
output = chain({"fields": {"colors": ["blue", "yellow"]}, "preferences": {"style": "Make it in a style of a weather forecast."}})
print(output)

# {'fields': {'colors': ['blue', 'yellow']},
 'preferences': {'style': 'Make it in a style of a weather forecast.'},
 'text': "Good morning! Today's weather forecast brings a beautiful combination of colors to the sky, with hues of blue and yellow gently blending together like a mesmerizing painting."}
```

### Twitter handle 

@deepsense_ai @matt_wosinski

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
2023-09-19 16:29:50 -07:00
Mateusz Wosinski
720f6dbaac
Add XMLOutputParser (#10051)
**Description**
Adds a new output parser, this time enabling the LLM output to be in
XML format. Seems to be particularly useful together with Claude models.
Addresses [issue
9820](https://github.com/langchain-ai/langchain/issues/9820).

**Twitter handle**
@deepsense_ai @matt_wosinski
2023-09-19 16:17:33 -07:00
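A sketch of the parser on its own, without an LLM in the loop; the nested dict/list shape of the parsed output is indicative rather than exact:

```
from langchain.output_parsers import XMLOutputParser

parser = XMLOutputParser(tags=["movies", "actor", "name", "film", "genre"])

# Format instructions can be placed in a prompt so the model (e.g. Claude) emits XML.
print(parser.get_format_instructions())

xml_reply = """<movies>
  <actor>
    <name>Tom Hanks</name>
    <film>
      <name>Forrest Gump</name>
      <genre>Drama</genre>
    </film>
  </actor>
</movies>"""
print(parser.parse(xml_reply))  # nested dict/list structure keyed by tag names
```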
Bagatur
73afd72e1d
fix qa structured link (#10799)
redirect not working for some reason
2023-09-19 13:40:48 -07:00
Aashish Saini
1b050b98f5
Corrected some spelling mistakes and grammatical errors (#10791)
CC: @baskaryan, @eyurtsev, @hwchase17.

---------

Co-authored-by: Ishita Chauhan <136303787+IshitaChauhanShortHillsAI@users.noreply.github.com>
Co-authored-by: Aashish Saini <141953346+AashishSainiShorthillsAI@users.noreply.github.com>
Co-authored-by: ManpreetShorthillsAI <142380984+ManpreetShorthillsAI@users.noreply.github.com>
Co-authored-by: AryamanJaiswalShorthillsAI <142397527+AryamanJaiswalShorthillsAI@users.noreply.github.com>
Co-authored-by: Adarsh Shrivastav <142413097+AdarshKumarShorthillsAI@users.noreply.github.com>
Co-authored-by: Vishal <141389263+VishalYadavShorthillsAI@users.noreply.github.com>
Co-authored-by: ChetnaGuptaShorthillsAI <142381084+ChetnaGuptaShorthillsAI@users.noreply.github.com>
Co-authored-by: PankajKumarShorthillsAI <142473460+PankajKumarShorthillsAI@users.noreply.github.com>
Co-authored-by: AbhishekYadavShorthillsAI <142393903+AbhishekYadavShorthillsAI@users.noreply.github.com>
Co-authored-by: AmitSinghShorthillsAI <142410046+AmitSinghShorthillsAI@users.noreply.github.com>
Co-authored-by: Md Nazish Arman <142379599+MdNazishArmanShorthillsAI@users.noreply.github.com>
Co-authored-by: KamalSharmaShorthillsAI <142474019+KamalSharmaShorthillsAI@users.noreply.github.com>
Co-authored-by: Lakshya <lakshyagupta87@yahoo.com>
Co-authored-by: Aayush <142384656+AayushShorthillsAI@users.noreply.github.com>
Co-authored-by: AnujMauryaShorthillsAI <142393269+AnujMauryaShorthillsAI@users.noreply.github.com>
Co-authored-by: ishita <chauhanishita5356@gmail.com>
2023-09-19 10:08:59 -07:00
Raunak Chowdhuri
b338e492fc
Remembrall Integration (#10767)
- **Description:** Added integration instructions for Remembrall. 
  - **Tag maintainer:** @hwchase17 
  - **Twitter handle:** @raunakdoesdev

Fun fact, this project originated at the Modal Hackathon in NYC where it
won the Best LLM App prize sponsored by LangChain. Thanks for your
support 🦜
2023-09-19 08:36:32 -07:00
Aashish Saini
6a98974bd0
Update argilla.ipynb with spelling fix (#10611)
Fixed spelling of **responses** and removed extra "the"
2023-09-19 08:06:28 -07:00
Jacob Lee
71025013f8
Update routing cookbook to include a RunnableBranch example (#10754)
~~Because we can't pass extra parameters into a prompt, we have to
prepend a function before the runnable calls in the branch and it's a
bit less elegant than I'd like.~~

All good now that #10765 has landed!

@eyurtsev @hwchase17

---------

Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
2023-09-19 07:59:54 -07:00
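A minimal, self-contained sketch of the `RunnableBranch` pattern the cookbook now demonstrates; the stand-in lambdas replace the prompt/model chains used in the actual notebook:

```
from langchain.schema.runnable import RunnableBranch, RunnableLambda

# Stand-ins for the prompt | model chains used in the cookbook.
langchain_chain = RunnableLambda(lambda x: f"LangChain expert: {x['question']}")
anthropic_chain = RunnableLambda(lambda x: f"Anthropic expert: {x['question']}")
general_chain = RunnableLambda(lambda x: f"General answer: {x['question']}")

branch = RunnableBranch(
    (lambda x: "anthropic" in x["topic"].lower(), anthropic_chain),
    (lambda x: "langchain" in x["topic"].lower(), langchain_chain),
    general_chain,  # default branch when no condition matches
)

print(branch.invoke({"topic": "LangChain", "question": "How do I use LCEL?"}))
```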
Taqi Jaffri
54763a61f8
fix broken link in docugami loader docs (#10753)
Just fixing the link to the self-query retriever in the Docugami loader docs.

Co-authored-by: Taqi Jaffri <tjaffri@docugami.com>
2023-09-18 21:56:33 -07:00
Bagatur
4c80978ec6
mv databricks sql page (#10748) 2023-09-18 14:54:41 -07:00
Harrison Chase
e404fd39dd
add anthropic page (#10666) 2023-09-18 11:10:44 -07:00
Jiayi Ni
ce61840e3b
ENH: Add llm_kwargs for Xinference LLMs (#10354)
- This PR adds `llm_kwargs` to the initialization of Xinference LLMs
(integrated in #8171).
- With this enhancement, users can provide `generate_configs` not only
when calling the LLMs for generation but also during the initialization
process. This allows users to include custom configurations when
utilizing LangChain features like LLMChain.
- It also fixes some formatting issues in the docstrings.
2023-09-18 11:36:29 -04:00
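A sketch of the enhancement, assuming generate-config options such as `temperature` can now be supplied as keyword arguments at construction (collected into the new `llm_kwargs`) and that a Xinference server with a launched model is available:

```
from langchain.chains import LLMChain
from langchain.llms import Xinference
from langchain.prompts import PromptTemplate

llm = Xinference(
    server_url="http://localhost:9997",
    model_uid="<model-uid-from-xinference-launch>",
    # Assumed: generate-config options accepted at init thanks to llm_kwargs.
    temperature=0.3,
    max_tokens=256,
)

chain = LLMChain(llm=llm, prompt=PromptTemplate.from_template("Summarize: {text}"))
print(chain.run(text="Xinference serves open-source models locally."))
```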