<!-- Thank you for contributing to LangChain!
Replace this entire comment with:
- **Description:** a description of the change,
- **Issue:** the issue # it fixes (if applicable),
- **Dependencies:** any dependencies required for this change,
- **Tag maintainer:** for a quicker response, tag the relevant
maintainer (see below),
- **Twitter handle:** we announce bigger features on Twitter. If your PR
gets announced, and you'd like a mention, we'll gladly shout you out!
Please make sure your PR is passing linting and testing before
submitting. Run `make format`, `make lint` and `make test` to check this
locally.
See contribution guidelines for more information on how to write/run
tests, lint, etc:
https://github.com/langchain-ai/langchain/blob/master/.github/CONTRIBUTING.md
If you're adding a new integration, please include:
1. a test for the integration, preferably unit tests that do not rely on
network access,
2. an example notebook showing its use. It lives in `docs/extras`
directory.
If no one reviews your PR within a few days, please @-mention one of
@baskaryan, @eyurtsev, @hwchase17.
-->
Small fix to _summarization_ example, `reduce_template` should use
`{docs}` variable.
Bug likely introduced as following code suggests using
`hub.pull("rlm/map-prompt")` instead of defined prompt.
### Description:
Hey 👋🏽 this is a small docs example fix. Hoping it helps future developers who are working with Langchain.
### Problem:
Take a look at the original example code. You were not able to get the `dialogue_turn[0]` while it was a tuple.
Original code:
```python
def _format_chat_history(chat_history: List[Tuple]) -> str:
buffer = ""
for dialogue_turn in chat_history:
human = "Human: " + dialogue_turn[0]
ai = "Assistant: " + dialogue_turn[1]
buffer += "\n" + "\n".join([human, ai])
return buffer
```
In the original code you were getting this error:
```bash
human = "Human: " + dialogue_turn[0].content
~~~~~~~~~~~~~^^^
TypeError: 'HumanMessage' object is not subscriptable
```
### Solution:
The fix is to just for loop over the chat history and look to see if its a human or ai message and add it to the buffer.
The `integrations/vectorstores/matchingengine.ipynb` example has the
"Google Vertex AI Vector Search" title. This place this Title in the
wrong order in the ToC (it is sorted by the file name).
- Renamed `integrations/vectorstores/matchingengine.ipynb` into
`integrations/vectorstores/google_vertex_ai_vector_search.ipynb`.
- Updated a correspondent comment in docstring
- Rerouted old URL to a new URL
---------
Co-authored-by: Erick Friis <erick@langchain.dev>
Addressed this issue with the top menu: It allocates too much space. If the screen is small, then the top menu items are split into two lines and look unreadable.
Another issue is with several top menu items: "Chat our docs" and "Also by LangChain". They are compound of several words which also hurts readability. The top menu items should be 1-word size.
Updates:
- "Chat our docs" -> "Chat" (the meaning is clean after clicking/opening the item)
- "Also by LangChain" -> "🦜️🔗"
- "🦜️🔗" moved before "Chat" item. This new item is partially copied from the first left item, the "🦜️🔗 LangChain". This design (with two 🦜️🔗 elements, visually splits the top menu into two parts. The first item in each part holds the 🦜️🔗 symbols and, when we click the second 🦜️🔗 item, it opens the drop-down menu. So, we've got two visually similar parts, which visually split the top menu on the right side: the LangChain Docs (and Doc-related items) and the lift side: other LangChain.ai (company) products/docs.
There are the following main changes in this PR:
1. Rewrite of the DocugamiLoader to not do any XML parsing of the DGML
format internally, and instead use the `dgml-utils` library we are
separately working on. This is a very lightweight dependency.
2. Added MMR search type as an option to multi-vector retriever, similar
to other retrievers. MMR is especially useful when using Docugami for
RAG since we deal with large sets of documents within which a few might
be duplicates and straight similarity based search doesn't give great
results in many cases.
We are @docugami on twitter, and I am @tjaffri
---------
Co-authored-by: Taqi Jaffri <tjaffri@docugami.com>
- **Description:** Adds a retriever implementation for [Knowledge Bases
for Amazon Bedrock](https://aws.amazon.com/bedrock/knowledge-bases/), a
new service announced at AWS re:Invent, shortly before this PR was
opened. This depends on the `bedrock-agent-runtime` service, which will
be included in a future version of `boto3` and of `botocore`. We will
open a follow-up PR documenting the minimum required versions of `boto3`
and `botocore` after that information is available.
- **Issue:** N/A
- **Dependencies:** `boto3>=1.33.2, botocore>=1.33.2`
- **Tag maintainer:** @baskaryan
- **Twitter handles:** `@pjain7` `@dead_letter_q`
This PR includes a documentation notebook under
`docs/docs/integrations/retrievers`, which I (@dlqqq) have verified
independently.
EDIT: `bedrock-agent-runtime` service is now included in
`boto3>=1.33.2`:
5cf793f493
---------
Co-authored-by: Piyush Jain <piyushjain@duck.com>
Co-authored-by: Erick Friis <erick@langchain.dev>
Co-authored-by: Bagatur <baskaryan@gmail.com>
- **Description:** dead link replacement
- **Issue:** no open issue
**Note:**
Hi langchain team,
Sorry to open a PR for this concern but we realized that one of the
links present in the documentation booklet was broken 😄
- **Description:** Reduce image asset file size used in documentation by
running them via lossless image optimization
([tinypng](https://www.npmjs.com/package/tinypng-cli) was used in this
case). Images wider than 1916px (the maximum width of an image displayed
in documentation) where downsized.
- **Issue:** No issue is created for this, but the large image file
assets caused slow documentation load times
- **Dependencies:** No dependencies affected
- **Description:** Existing model used for Prompt Injection is quite
outdated but we fine-tuned and open-source a new model based on the same
model deberta-v3-base from Microsoft -
[laiyer/deberta-v3-base-prompt-injection](https://huggingface.co/laiyer/deberta-v3-base-prompt-injection).
It supports more up-to-date injections and less prone to
false-positives.
- **Dependencies:** No
- **Tag maintainer:** -
- **Twitter handle:** @alex_yaremchuk
---------
Co-authored-by: Bagatur <baskaryan@gmail.com>
Current docs for adapters are in the `Guides/Adapters which is not a
good place.
- moved Adapters into `Integratons/Components/Adapters/
- simplified the OpenAI adapter notebook
- rerouted the old OpenAI adapter page URL to a new one.
**Description:**
This PR adds Databricks Vector Search as a new vector store in
LangChain.
- [x] Add `DatabricksVectorSearch` in `langchain/vectorstores/`
- [x] Unit tests
- [x] Add
[`databricks-vectorsearch`](https://pypi.org/project/databricks-vectorsearch/)
as a new optional dependency
We ran the following checks:
- `make format` passed ✅
- `make lint` failed but the failures were caused by other files
+ Files touched by this PR passed the linter ✅
- `make test` passed ✅
- `make coverage` failed but the failures were caused by other files.
Tests added by or related to this PR all passed
+ langchain/vectorstores/databricks_vector_search.py test coverage 94% ✅
- `make spell_check` passed ✅
The example notebook and updates to the [provider's documentation
page](https://github.com/langchain-ai/langchain/blob/master/docs/docs/integrations/providers/databricks.md)
will be added later in a separate PR.
**Dependencies:**
Optional dependency:
[`databricks-vectorsearch`](https://pypi.org/project/databricks-vectorsearch/)
---------
Co-authored-by: Bagatur <baskaryan@gmail.com>
- **Description:** Added a retriever for the Outline API to ask
questions on knowledge base
- **Issue:** resolves#11814
- **Dependencies:** None
- **Tag maintainer:** @baskaryan
- **Description:**
I encountered an issue while running the existing sample code on the
page https://python.langchain.com/docs/modules/agents/how_to/agent_iter
in an environment with Pydantic 2.0 installed. The following error was
triggered:
```python
ValidationError Traceback (most recent call last)
<ipython-input-12-2ffff2c87e76> in <cell line: 43>()
41
42 tools = [
---> 43 Tool(
44 name="GetPrime",
45 func=get_prime,
2 frames
/usr/local/lib/python3.10/dist-packages/pydantic/v1/main.py in __init__(__pydantic_self__, **data)
339 values, fields_set, validation_error = validate_model(__pydantic_self__.__class__, data)
340 if validation_error:
--> 341 raise validation_error
342 try:
343 object_setattr(__pydantic_self__, '__dict__', values)
ValidationError: 1 validation error for Tool
args_schema
subclass of BaseModel expected (type=type_error.subclass; expected_class=BaseModel)
```
I have made modifications to the example code to ensure it functions
correctly in environments with Pydantic 2.0.
This PR provides idiomatic implementations for the exact-match and the
semantic LLM caches using Astra DB as backend through the database's
HTTP JSON API. These caches require the `astrapy` library as dependency.
Comes with integration tests and example usage in the `llm_cache.ipynb`
in the docs.
@baskaryan this is the Astra DB counterpart for the Cassandra classes
you merged some time ago, tagging you for your familiarity with the
topic. Thank you!
---------
Co-authored-by: Bagatur <baskaryan@gmail.com>
This PR adds a chat message history component that uses Astra DB for
persistence through the JSON API.
The `astrapy` package is required for this class to work.
I have added tests and a small notebook, and updated the relevant
references in the other docs pages.
(@rlancemartin this is the counterpart of the Cassandra equivalent class
you so helpfully reviewed back at the end of June)
Thank you!
- **Description:** Fix typo in MongoDB memory docs
- **Tag maintainer:** @eyurtsev
<!-- Thank you for contributing to LangChain!
- **Description:** Fix typo in MongoDB memory docs
- **Issue:** the issue # it fixes (if applicable),
- **Dependencies:** any dependencies required for this change,
- **Tag maintainer:** @baskaryan
- **Twitter handle:** we announce bigger features on Twitter. If your PR
gets announced, and you'd like a mention, we'll gladly shout you out!
Please make sure your PR is passing linting and testing before
submitting. Run `make format`, `make lint` and `make test` to check this
locally.
See contribution guidelines for more information on how to write/run
tests, lint, etc:
https://github.com/langchain-ai/langchain/blob/master/.github/CONTRIBUTING.md
If you're adding a new integration, please include:
1. a test for the integration, preferably unit tests that do not rely on
network access,
2. an example notebook showing its use. It lives in `docs/extras`
directory.
If no one reviews your PR within a few days, please @-mention one of
@baskaryan, @eyurtsev, @hwchase17.
-->
- **Description:** This change adds an agent to the Azure Cognitive
Services toolkit for identifying healthcare entities
- **Dependencies:** azure-ai-textanalytics (Optional)
---------
Co-authored-by: James Beck <James.Beck@sa.gov.au>
Co-authored-by: Bagatur <baskaryan@gmail.com>
**Description:**
This commit adds embedchain retriever along with tests and docs.
Embedchain is a RAG framework to create data pipelines.
**Twitter handle:**
- [Taranjeet's twitter](https://twitter.com/taranjeetio) and
[Embedchain's twitter](https://twitter.com/embedchain)
**Reviewer**
@hwchase17
---------
Co-authored-by: Bagatur <baskaryan@gmail.com>
**Description:**
Enhance the functionality of YoutubeLoader to enable the translation of
available transcripts by refining the existing logic.
**Issue:**
Encountering a problem with YoutubeLoader (#13523) where the translation
feature is not functioning as expected.
Tag maintainers/contributors who might be interested:
@eyurtsev
---------
Co-authored-by: Bagatur <baskaryan@gmail.com>
## Update 2023-09-08
This PR now supports further models in addition to Lllama-2 chat models.
See [this comment](#issuecomment-1668988543) for further details. The
title of this PR has been updated accordingly.
## Original PR description
This PR adds a generic `Llama2Chat` model, a wrapper for LLMs able to
serve Llama-2 chat models (like `LlamaCPP`,
`HuggingFaceTextGenInference`, ...). It implements `BaseChatModel`,
converts a list of chat messages into the [required Llama-2 chat prompt
format](https://huggingface.co/blog/llama2#how-to-prompt-llama-2) and
forwards the formatted prompt as `str` to the wrapped `LLM`. Usage
example:
```python
# uses a locally hosted Llama2 chat model
llm = HuggingFaceTextGenInference(
inference_server_url="http://127.0.0.1:8080/",
max_new_tokens=512,
top_k=50,
temperature=0.1,
repetition_penalty=1.03,
)
# Wrap llm to support Llama2 chat prompt format.
# Resulting model is a chat model
model = Llama2Chat(llm=llm)
messages = [
SystemMessage(content="You are a helpful assistant."),
MessagesPlaceholder(variable_name="chat_history"),
HumanMessagePromptTemplate.from_template("{text}"),
]
prompt = ChatPromptTemplate.from_messages(messages)
memory = ConversationBufferMemory(memory_key="chat_history", return_messages=True)
chain = LLMChain(llm=model, prompt=prompt, memory=memory)
# use chat model in a conversation
# ...
```
Also part of this PR are tests and a demo notebook.
- Tag maintainer: @hwchase17
- Twitter handle: `@mrt1nz`
---------
Co-authored-by: Erick Friis <erick@langchain.dev>
The original notebook has the `faiss` title which is duplicated in
the`faiss.jpynb`. As a result, we have two `faiss` items in the
vectorstore ToC. And the first item breaks the searching order (it is
placed between `A...` items).
- I updated title to `Asynchronous Faiss`.
- Fixed titles for two notebooks. They were inconsistent with other
titles and clogged ToC.
- Added `Upstash` description and link
- Moved the authentication text up in the `Elasticsearch` nb, right
after package installation. It was on the end of the page which was a
wrong place.