Commit Graph

3638 Commits (642b57c7ff48970e392ed31dda2808ab5dcf1e9a)
 

Author SHA1 Message Date
Harrison Chase 412fa4e1db
add guide notebook (#8258)

---------

Co-authored-by: Nuno Campos <nuno@boringbits.io>
Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
1 year ago
William FH b7c0eb9ecb
Wfh/ref links (#8454) 1 year ago
Harrison Chase 13b4f465e2
log output parser (#8446) 1 year ago
William FH 7d79178827
Wfh/update guide imports (#8452) 1 year ago
William FH d935573362
Partial formatting for chat messages (#8450) 1 year ago
William FH 3314f54383
Update supabase docstrings (#8443) 1 year ago
Harrison Chase f63240649c cr 1 year ago
Harrison Chase 17953ab61f
add notebook for sql query (#8442) 1 year ago
Harrison Chase 2448043b84
bump and fix (#8441) 1 year ago
Zack Proser 3892cefac6
Minor fixes to enhance notebook usability: (#8389)
- Install langchain
- Set Pinecone API key and environment as env vars
- Create Pinecone index if it doesn't already exist
---
- Description: Fix a couple minor issues I came across when running this
notebook,
  - Issue: the issue # it fixes (if applicable),
  - Dependencies: none,
  - Tag maintainer: @rlancemartin @eyurtsev,
  - Twitter handle: @zackproser (certainly not necessary!)
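For context, a minimal sketch of the "create the index if it doesn't already exist" step described above, using the Pinecone client of that era; the index name, dimension, and env var usage are illustrative assumptions, not code from the notebook:
```python
import os
import pinecone  # pip install pinecone-client

# Assumed names/values for illustration only
index_name = "langchain-demo"
dimension = 1536  # e.g. OpenAI text-embedding-ada-002

pinecone.init(
    api_key=os.environ["PINECONE_API_KEY"],
    environment=os.environ["PINECONE_ENVIRONMENT"],
)

# Create the index only if it doesn't already exist
if index_name not in pinecone.list_indexes():
    pinecone.create_index(name=index_name, dimension=dimension)
```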
1 year ago
Amélie 8ee56b9a5b
Feature: Add support for meilisearch vectorstore (#7649)
**Description:**

Add support for Meilisearch vector store.
Resolve #7603 

- No external dependencies added
- A notebook has been added

@rlancemartin

https://twitter.com/meilisearch
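A minimal usage sketch (not from the PR itself), assuming the vector store is exposed as `Meilisearch` in `langchain.vectorstores` and accepts a `meilisearch.Client`; the URL, key, and embedding choice are illustrative:
```python
import meilisearch
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import Meilisearch

# Assumes a local Meilisearch instance with a master key configured
client = meilisearch.Client(url="http://127.0.0.1:7700", api_key="masterKey")
embeddings = OpenAIEmbeddings()

vector_store = Meilisearch.from_texts(
    texts=["LangChain integrates many vector stores.", "Meilisearch is one of them."],
    embedding=embeddings,
    client=client,
)

docs = vector_store.similarity_search("Which vector stores are integrated?", k=1)
```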

Co-authored-by: Bagatur <baskaryan@gmail.com>
1 year ago
Bearnardd b7d6e1909c
fix empty ids when metadatas is provided (#8127)
Fixes https://github.com/hwchase17/langchain/issues/7865 and
https://github.com/hwchase17/langchain/issues/8061

- [x] fixes returning empty ids when metadatas argument is provided

@baskaryan

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
1 year ago
Bharat Raghunathan 62b8b459c6
doc(prompts): Add redirect to fix broken link on Prompts Page (#8408)
---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
1 year ago
Bagatur 2311d57df4
mv dropbox (#8438) 1 year ago
Luis Valencia 7124377524
Devcontainer README -> Clarification. (#8414)
- Description: The contribution guidelines for using a devcontainer refer
to the main repo and not the forked repo. We should make our changes in
our own forked repo, not on langchain/main
  - Issue: Just documentation
  - Dependencies: N/A,
  - Tag maintainer: @baskaryan
  - Twitter handle: @levalencia
1 year ago
lvisdd abe4c361f9
update get_num_tokens_from_messages model (#8431)
(#8430)

Co-authored-by: Kano Kunihiko <kkano@heroz.co.jp>
1 year ago
Jeffrey Wang e0de62f6da
Add RoPE Scaling params from llamacpp (#8422)
Description:
Just adding parameters from `llama-cpp-python` that support RoPE
scaling.
@hwchase17, @baskaryan

sources:
papers and explanation:
https://kaiokendev.github.io/context
llamacpp conversation:
https://github.com/ggerganov/llama.cpp/discussions/1965 
Supports models like:
https://huggingface.co/conceptofmind/LLongMA-2-13b
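
A usage sketch under the assumption that the parameter names mirror llama-cpp-python's `rope_freq_base` / `rope_freq_scale`; the model path and context size are illustrative:
```python
from langchain.llms import LlamaCpp

# Illustrative: a smaller rope_freq_scale stretches the usable context window
llm = LlamaCpp(
    model_path="./models/llongma-2-13b.ggmlv3.q4_0.bin",  # assumed local path
    n_ctx=8192,
    rope_freq_scale=0.5,
    rope_freq_base=10000,
)

print(llm("Summarize the idea behind RoPE scaling in one sentence."))
```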
1 year ago
Bagatur 2db2987b1b
add experimental ref (#8435) 1 year ago
Harrison Chase fab24457bc
remove code (#8425) 1 year ago
Harrison Chase 3a78450883
update experimental (#8402)
some changes were made to experimental, porting them over
1 year ago
Harrison Chase af7e70d4af
expose function for converting messages to messages (#8426) 1 year ago
Eugene Yurtsev 06bdbe06fe
PromptTemplate update documentation and expand kwarg (#8423)
# PromptTemplate

* Update documentation to highlight the classmethod for instantiating a
prompt template.
* Expand kwargs in the classmethod to make parameters easier to discover

This PR got reverted here:
https://github.com/langchain-ai/langchain/pull/8395/files
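
For reference, a quick sketch of the classmethod being highlighted (the template text here is illustrative, not from the PR):
```python
from langchain.prompts import PromptTemplate

# Preferred: instantiate via the classmethod rather than the constructor
prompt = PromptTemplate.from_template("Tell me a {adjective} joke about {content}.")

print(prompt.format(adjective="funny", content="chickens"))
# -> "Tell me a funny joke about chickens."
```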
1 year ago
Eugene Yurtsev e62a1686e2
ChatPromptTemplate: minor fix in doc string (#8424)
Minor fix in doc-string to use `ai` rather than `assistant`
1 year ago
Eugene Yurtsev 760c278fe0
ChatPromptTemplate: Expand support for message formats and documentation (#8244)
* Expands support for a variety of message formats in the
`from_messages` classmethod. Ideally, we could deprecate the other
on-ramps to reduce the number of classmethods users need to know about.
* Expand documentation with code examples.
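
A short sketch of the kind of message formats `from_messages` accepts after this change (the message content is illustrative):
```python
from langchain.prompts import ChatPromptTemplate

# (role, template) tuples are one of the supported shorthand formats
template = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful assistant named {name}."),
    ("human", "Hello, how are you?"),
    ("ai", "I'm doing well, thanks!"),
    ("human", "{user_input}"),
])

messages = template.format_messages(name="Bob", user_input="What is your name?")
```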
1 year ago
Bagatur 61dd92f821
bump 246 (#8410) 1 year ago
Harrison Chase 394b67ab92
add kwargs to llm runnables (#8388) 1 year ago
HeTaoPKU d5884017a9
Add Minimax llm model to langchain (#7645)
- Description: Minimax is a great AI startup from China; they recently
released their latest model and chat API, and the API is widely used in
China. As a result, I'd like to add the Minimax LLM model to
LangChain.
- Tag maintainer: @hwchase17, @baskaryan
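
A minimal usage sketch, assuming the new model is exposed as `Minimax` in `langchain.llms` and configured via API key / group id environment variables (the variable names and values below are assumptions):
```python
import os
from langchain.llms import Minimax

# Assumed configuration; credentials are illustrative placeholders
os.environ["MINIMAX_API_KEY"] = "<your-api-key>"
os.environ["MINIMAX_GROUP_ID"] = "<your-group-id>"

llm = Minimax()
print(llm("Tell me a joke."))
```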

---------

Co-authored-by: the <tao.he@hulu.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
1 year ago
James Campbell 0ad2d5f27a
[nit] Add default value for ChatOpenAI client (#7939)
Micro convenience PR to avoid warning regarding missing `client`
parameter. It is always set during initialization.

@baskaryan

Co-authored-by: Bagatur <baskaryan@gmail.com>
1 year ago
Harrison Chase 82df923f37 Merge branch 'master' of github.com:hwchase17/langchain 1 year ago
Harrison Chase 1b0bfa54cf cr 1 year ago
Jeff Vestal c7ff5f19a8
ElasticKnnSearch rewrite - bug fix - return Document (#8180)
Fixes: 
https://github.com/hwchase17/langchain/issues/7117
https://github.com/hwchase17/langchain/issues/5760

Adding back `create_index`, `add_texts`, and `from_texts` to
ElasticKnnSearch.

`from_texts` matches the standard `from_texts` methods as a quick
start-up method.

`knn_search` and `hybrid_result` return a list of (`Document`, `score`)
tuples.

# Test `from_texts` for quick start
```
# create new index using from_text

from langchain.vectorstores.elastic_vector_search import ElasticKnnSearch
from langchain.embeddings import ElasticsearchEmbeddings

model_id = "sentence-transformers__all-distilroberta-v1" 
dims = 768
es_cloud_id = ""
es_user = ""
es_password = ""
test_index = "knn_test_index_305"

embeddings = ElasticsearchEmbeddings.from_credentials(
    model_id,
    #input_field=input_field,
    es_cloud_id=es_cloud_id,
    es_user=es_user,
    es_password=es_password,
)

# add texts and create class instance
texts = ["This is a test document", "This is another test document"]
knnvectorsearch = ElasticKnnSearch.from_texts(
    texts=texts,
    embedding=embeddings,
    index_name=test_index,
    vector_query_field='vector',
    query_field='text',
    model_id=model_id,
    dims=dims,
    es_cloud_id=es_cloud_id,
    es_user=es_user,
    es_password=es_password,
)

# Test `add_texts` method
texts2 = ["Hello, world!", "Machine learning is fun.", "I love Python."]
knnvectorsearch.add_texts(texts2)

query = "Hello"
knn_result = knnvectorsearch.knn_search(query = query, model_id= model_id, k=2)

hybrid_result = knnvectorsearch.knn_hybrid_search(query = query, model_id= model_id, k=2)

```

The mapping is as follows:
```
{
  "knn_test_index_012": {
    "mappings": {
      "properties": {
        "text": {
          "type": "text"
        },
        "vector": {
          "type": "dense_vector",
          "dims": 768,
          "index": true,
          "similarity": "dot_product"
        }
      }
    }
  }
}
```

# Check response type
```
>>> hybrid_result
[(Document(page_content='Hello, world!', metadata={}), 0.94232327), (Document(page_content='I love Python.', metadata={}), 0.5321523)]

>>> hybrid_result[0]
(Document(page_content='Hello, world!', metadata={}), 0.94232327)

>>> hybrid_result[0][0]
Document(page_content='Hello, world!', metadata={})

>>> type(hybrid_result[0][0])
<class 'langchain.schema.document.Document'>
```

# Test with existing Index
```
from langchain.vectorstores.elastic_vector_search import ElasticKnnSearch
from langchain.embeddings import ElasticsearchEmbeddings

## Initialize ElasticsearchEmbeddings
model_id = "sentence-transformers__all-distilroberta-v1" 
dims = 768
es_cloud_id = ""
es_user = ""
es_password = ""
test_index = "knn_test_index_012"

embeddings = ElasticsearchEmbeddings.from_credentials(
    model_id,
    es_cloud_id=es_cloud_id,
    es_user=es_user,
    es_password=es_password,
)

## Initialize ElasticKnnSearch
knn_search = ElasticKnnSearch(
    es_cloud_id=es_cloud_id,
    es_user=es_user,
    es_password=es_password,
    index_name=test_index,
    embedding=embeddings,
)


## Test adding vectors

### Test `add_texts` method when index created
texts = ["Hello, world!", "Machine learning is fun.", "I love Python."]
knn_search.add_texts(texts)

```

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
1 year ago
Harrison Chase a221a9ced0
Harrison/sql query (#8370)
Co-authored-by: Nuno Campos <nuno@boringbits.io>
1 year ago
Bagatur a1a650c743
Bagatur/from texts bug fix (#8394)
---------

Co-authored-by: Davit Buniatyan <davit@loqsh.com>
Co-authored-by: Davit Buniatyan <d@activeloop.ai>
Co-authored-by: adilkhan <adilkhan.sarsen@nu.edu.kz>
Co-authored-by: Ivo Stranic <istranic@gmail.com>
1 year ago
Jiayi Ni 1efb9bae5f
FEAT: Integrate Xinference LLMs and Embeddings (#8171)
- [Xorbits
Inference(Xinference)](https://github.com/xorbitsai/inference) is a
powerful and versatile library designed to serve language, speech
recognition, and multimodal models. Xinference supports a variety of
GGML-compatible models including chatglm, whisper, and vicuna, and
utilizes heterogeneous hardware and a distributed architecture for
seamless cross-device and cross-server model deployment.
- This PR integrates Xinference models and Xinference embeddings into
LangChain.
- Dependencies: To install the dependencies for this integration, run
    
    `pip install "xinference[all]"`
    
- Example Usage:

To start a local instance of Xinference, run `xinference`.

To deploy Xinference in a distributed cluster, first start an Xinference
supervisor using `xinference-supervisor`:

`xinference-supervisor -H "${supervisor_host}"`

Then, start the Xinference workers using `xinference-worker` on each
server you want to run them on.

`xinference-worker -e "http://${supervisor_host}:9997"`

To use Xinference with LangChain, you also need to launch a model. You
can use the command line interface (CLI) to do so. For example:
`xinference launch -n vicuna-v1.3 -f ggmlv3 -q q4_0`. This launches a
model named vicuna-v1.3 with `model_format="ggmlv3"` and
`quantization="q4_0"`. A model UID is returned for you to use.

Now you can use Xinference with LangChain:

```python
from langchain.llms import Xinference

llm = Xinference(
    server_url="http://0.0.0.0:9997", # suppose the supervisor_host is "0.0.0.0"
    model_uid = {model_uid} # model UID returned from launching a model
)

llm(
    prompt="Q: where can we visit in the capital of France? A:",
    generate_config={"max_tokens": 1024},
)
```

You can also use RESTful client to launch a model:
```python
from xinference.client import RESTfulClient

client = RESTfulClient("http://0.0.0.0:9997")

model_uid = client.launch_model(model_name="vicuna-v1.3", model_size_in_billions=7, quantization="q4_0")
```

The following code block demonstrates how to use Xinference embeddings
with LangChain:
```python
from langchain.embeddings import XinferenceEmbeddings

xinference = XinferenceEmbeddings(
    server_url="http://0.0.0.0:9997",
    model_uid = model_uid
)
```

```python
query_result = xinference.embed_query("This is a test query")
```

```python
doc_result = xinference.embed_documents(["text A", "text B"])
```

Xinference is still under rapid development. Feel free to [join our
Slack
community](https://xorbitsio.slack.com/join/shared_invite/zt-1z3zsm9ep-87yI9YZ_B79HLB2ccTq4WA)
to get the latest updates!

- Request for review: @hwchase17, @baskaryan
- Twitter handle: https://twitter.com/Xorbitsio

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
1 year ago
Bagatur 877d384bc9
Revert "PromptTemplate update documentation and expand kwargs (#8234)" (#8395)
fyi @eyurtsev, this was failing a unit test
1 year ago
Gordon Clark e66759cc9d
Github add "Create PR" tool + Docs update (#8235)
Added a new tool to the Github toolkit called **Create Pull Request.**
Now we can make our own langchain contributor in langchain 😁

In order to have somewhere to pull from, I also added a new env var,
"GITHUB_BASE_BRANCH." This will allow the existing env var,
"GITHUB_BRANCH," to be a working branch for the bot (so that it doesn't
have to always commit on the main/master). For example, if you want the
bot to work in a branch called `bot_dev` and your repo base is `main`,
you would set up the vars like:
```
GITHUB_BASE_BRANCH = "main"
GITHUB_BRANCH = "bot_dev"
``` 
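
For context, a rough sketch of wiring the toolkit into an agent under those env vars; the toolkit/wrapper import paths and classmethod are assumptions based on the existing GitHub integration:
```python
from langchain.agents import AgentType, initialize_agent
from langchain.agents.agent_toolkits.github.toolkit import GitHubToolkit
from langchain.llms import OpenAI
from langchain.utilities.github import GitHubAPIWrapper

# Assumes GITHUB_APP_ID, GITHUB_APP_PRIVATE_KEY, GITHUB_REPOSITORY,
# GITHUB_BRANCH and GITHUB_BASE_BRANCH are set in the environment
github = GitHubAPIWrapper()
toolkit = GitHubToolkit.from_github_api_wrapper(github)

agent = initialize_agent(
    toolkit.get_tools(),
    OpenAI(temperature=0),
    agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
    verbose=True,
)
```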

Maintainer responsibilities:
  - Agents / Tools / Toolkits: @hinthornw
1 year ago
William FH ecd4aae818
Few Shot Chat Prompt (#8038)
Proposal for a few shot chat message example selector
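
A minimal sketch of the idea, assuming the class landed as `FewShotChatMessagePromptTemplate` (the example data is illustrative):
```python
from langchain.prompts import ChatPromptTemplate, FewShotChatMessagePromptTemplate

examples = [
    {"input": "2+2", "output": "4"},
    {"input": "2+3", "output": "5"},
]

# How each individual example is rendered as chat messages
example_prompt = ChatPromptTemplate.from_messages(
    [("human", "{input}"), ("ai", "{output}")]
)

few_shot_prompt = FewShotChatMessagePromptTemplate(
    example_prompt=example_prompt,
    examples=examples,
)

final_prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a wondrous wizard of math."),
    few_shot_prompt,
    ("human", "{input}"),
])
```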

---------

Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
1 year ago
Eugene Yurtsev 6dd18eee26
PromptTemplate update documentation and expand kwargs (#8234)
# PromptTemplate

* Update documentation to highlight the classmethod for instantiating a
prompt template.
* Expand kwargs in the classmethod to make parameters easier to discover
1 year ago
Karan V a003a0baf6
fix(petals) allows to run models that aren't Bloom (Support for LLama and newer models) (#8356)
In this PR:

- Removed restricted model loading logic for Petals-Bloom
- Removed petals imports (DistributedBloomForCausalLM,
BloomTokenizerFast)
- Instead imported the more generalized loaders
(AutoDistributedModelForCausalLM, AutoTokenizer)
- Updated the Petals example notebook to allow for a successful
installation of Petals on Apple Silicon Macs

- Tag maintainer: @hwchase17, @baskaryan
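
A minimal usage sketch after this change with a non-Bloom model; the model name and token placeholder are illustrative assumptions:
```python
import os
from langchain.llms import Petals

os.environ["HUGGINGFACE_API_KEY"] = "<your-huggingface-token>"  # placeholder

# With the generalized AutoDistributedModelForCausalLM loader,
# LLaMA-family checkpoints can be used, not just Bloom
llm = Petals(model_name="petals-team/StableBeluga2")

print(llm("What is the capital of France?"))
```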

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
1 year ago
lars.gersmann e758e9e7f5
fix(openapi): openapi chain will work without/empty description/summa… (#8351)
Description: 

This PR will enable the Open API chain to work with valid Open API
specifications missing `description` and `summary` properties for path
and operation nodes in open api specs.

Since both the `description` and `summary` properties are declared
optional, we cannot be sure they are defined. This PR resolves the
problem by providing an empty (`''`) description as a fallback.

The previous behavior of the Open API chain was that the underlying LLM
(OpenAI) threw an exception, since `None` is not of type string:

```
openai.error.InvalidRequestError: None is not of type 'string' - 'functions.0.description'
```

With this PR, the Open API chain also succeeds with Open API specs
lacking `description` and `summary` properties for path and operation
nodes.
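
For illustration, the fallback idea amounts to something like the following when deriving the function description from a spec operation (a sketch, not the chain's actual code; `op` is a hypothetical dict for a path/operation node):
```python
def description_for(op: dict) -> str:
    # `description` and `summary` are both optional in OpenAPI, so fall back
    # to an empty string rather than passing None to the OpenAI functions API.
    return op.get("description") or op.get("summary") or ""
```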

Thanks for your amazing work!

Tag maintainer: @baskaryan

---------

Co-authored-by: Lars Gersmann <lars.gersmann@cm4all.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
1 year ago
ljeagle caa6caeb8a
Upgrade the AwaDB from v0.3.7 to v0.3.9 and change the default embeddings (#8281)
1. Upgrade the AwaDB from v0.3.7 to v0.3.9
2. Change the default embedding to AwaEmbedding

---------

Co-authored-by: ljeagle <awadb.vincent@gmail.com>
Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
1 year ago
Harrison Chase 25b8cc7e3d
Harrison/update memory docs (#8384)
Co-authored-by: Bagatur <baskaryan@gmail.com>
1 year ago
Holt Skinner d7e6770de8
refactor: Code refactoring & simplification for Google Cloud Enterprise Search retriever (#8369)
Followup to https://github.com/langchain-ai/langchain/pull/7857

- Changes `_convert_search_response()` to use object attributes instead
of converting to dictionary
- Simplifies logic for readability
1 year ago
Taozhi Wang 594f195e54
Add embeddings for AwaEmbedding (#8353)
- Description: Adds the AwaEmbeddings class for embeddings, which provides
users with a convenient way to do fine-tuning, as well as support for
potential multimodality needs

  - Tag maintainer: @baskaryan

Create `Awa.ipynb`: an example notebook for AwaEmbeddings class
Modify `embeddings/__init__.py`: Import the class
Create `embeddings/awa.py`: The embedding class
Create `embeddings/test_awa.py`: The test file.
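
A minimal usage sketch of the new class via the standard embeddings interface (the texts are illustrative):
```python
from langchain.embeddings import AwaEmbeddings

embedding = AwaEmbeddings()

query_vector = embedding.embed_query("Hello, world!")
doc_vectors = embedding.embed_documents(["doc one", "doc two"])
```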

---------

Co-authored-by: taozhiwang <taozhiwa@gmail.com>
1 year ago
thehunmonkgroup ba4e82bb47
fix missing _identifying_params() in _VertexAICommon (#8303)
The full set of params is missing from Vertex* LLMs when the `dict()`
method is called.

```
>>> from langchain.chat_models.vertexai import ChatVertexAI
>>> from langchain.llms.vertexai import VertexAI
>>> chat_llm = ChatVertexAI()
>>> llm = VertexAI()
>>> chat_llm.dict()
{'_type': 'vertexai'}
>>> llm.dict()
{'_type': 'vertexai'}
```

This PR just uses the same mechanism used elsewhere to expose the full
params.

Since `_identifying_params()` is on the `_VertexAICommon` class, it
should cover the chat and non-chat cases.
1 year ago
bheroder dc3ca44e05
Add an example for azure ml managed feature store (#8324)
We are adding an example of how one can connect to an Azure ML managed
feature store and use such a prompt template in an LLM chain. @baskaryan
1 year ago
Caitlin2694 b2e4b9dca4
Fix exception caused by restrictions in OWL (#8341)
Description: Fix exception caused by restrictions in OWL
Issue: #8331
Dependencies: none
Maintainer: @baskaryan
1 year ago
Harrison Chase cddd8ae83d
update release yml (#8364)
only do the step that tags and adds release notes if it's langchain
1 year ago
Nikita Pokidyshev f499e6ea6a
Add FunctionMessage to _message_from_dict (#8374)
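A minimal sketch of what this enables: round-tripping a `FunctionMessage` through the dict serialization helpers (the message content is illustrative):
```python
from langchain.schema import FunctionMessage, messages_from_dict, messages_to_dict

original = [FunctionMessage(name="get_weather", content='{"temp_c": 21}')]

# messages_to_dict serializes to plain dicts; messages_from_dict relies on
# _message_from_dict to reconstruct the right message class, now including
# FunctionMessage.
serialized = messages_to_dict(original)
restored = messages_from_dict(serialized)

assert isinstance(restored[0], FunctionMessage)
```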
1 year ago
evelynmitchell 539574670c
Update tot.ipynb (#8387)
Spelling error fix

1 year ago