Arize released a new Generative LLM model type; this PR adjusts the callback handler to use the new logging.
Added Arize imports; please delete any that are unnecessary.
Specifically, this change makes sure that the prompt and response pairs
from LangChain agents are logged into Arize as a Generative LLM model,
instead of our previous categorical model. To do this, the
callback handler collects the necessary data and passes it into
Arize using the Arize Pandas SDK.
The Arize library, specifically `arize.pandas.logger`, is an additional dependency.
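For illustration, here is a minimal sketch of how such a handler might log collected prompt/response pairs with the Arize Pandas SDK (the DataFrame columns, model id, and schema fields below are assumptions for the sketch, not the exact code in this PR):
```
import pandas as pd
from arize.pandas.logger import Client
from arize.utils.types import Environments, ModelTypes, Schema

# Prompt/response pairs collected by the callback handler (illustrative).
records = pd.DataFrame({
    "prediction_id": ["1"],
    "prompt": ["What is LangChain?"],
    "response": ["A framework for building applications with LLMs."],
})

client = Client(space_key="YOUR_SPACE_KEY", api_key="YOUR_API_KEY")
client.log(
    dataframe=records,
    schema=Schema(
        prediction_id_column_name="prediction_id",
        prompt_column_names="prompt",
        response_column_names="response",
    ),
    model_id="langchain-agent",
    model_type=ModelTypes.GENERATIVE_LLM,  # the new Generative LLM model type
    environment=Environments.PRODUCTION,
)
```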
Notebook For Test:
https://docs.arize.com/arize/resources/integrations/langchain
Who can review?
Tag maintainers/contributors who might be interested:
@hwchase17 - project lead
Tracing / Callbacks
@agola11
- return raw and full output (but keep run shortcut method functional)
- change output parser to take in generations (good for working with
messages)
- add output parser to base class, always run (default to same as
current)
---------
Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
Add memory support for `OpenAIFunctionsAgent` like
`StructuredChatAgent`.
#### Who can review?
@hwchase17
---------
Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
A must-have fix for the SitemapLoader to avoid SSL verification errors.
Setting `verify` to `False` via `sitemap_loader.requests_kwargs =
{"verify": False}` does not bypass SSL verification on some websites,
e.g. https://researchadmin.asu.edu/sitemap.xml.
We need this merge to tell the session to use a connector with an
explicit SSL argument:
```
# For sitemap SSL verification
if not self.requests_kwargs.get("verify", True):
    connector = aiohttp.TCPConnector(ssl=False)
else:
    connector = None
```
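For context, a sketch of how such a connector feeds into the aiohttp session (the function name and flow are illustrative, not the loader's exact internals):
```
import aiohttp

async def fetch(url: str, verify: bool = True) -> str:
    # Disable SSL verification at the connector level when requested.
    connector = None if verify else aiohttp.TCPConnector(ssl=False)
    async with aiohttp.ClientSession(connector=connector) as session:
        async with session.get(url) as response:
            return await response.text()
```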
Fixes #5483
#### Who can review?
Tag maintainers/contributors who might be interested:
@hwchase17
@eyurtsev
---------
Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
@agola11
Issue
#6193
I added the new pricing for the new models.
Also, gpt-3.5-turbo pricing is now split into "input" and "output" rates,
which the current cost tracking does not support.
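For illustration, split pricing could be represented along these lines (following the "-completion" suffix convention from `langchain/callbacks/openai_info.py`; the rates shown are the June 2023 prices and may change):
```
# Cost in USD per 1K tokens; the "-completion" suffix marks output pricing.
MODEL_COST_PER_1K_TOKENS = {
    "gpt-3.5-turbo": 0.0015,                # input
    "gpt-3.5-turbo-completion": 0.002,      # output
    "gpt-3.5-turbo-16k": 0.003,             # input
    "gpt-3.5-turbo-16k-completion": 0.004,  # output
}

def token_cost(num_tokens: int, model: str, is_completion: bool = False) -> float:
    key = model + "-completion" if is_completion else model
    return MODEL_COST_PER_1K_TOKENS[key] * (num_tokens / 1000)
```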
Can't pass the `system_message` argument; the prompt always shows the default
message "System: You are a helpful AI assistant."
```
from langchain.agents import AgentType, initialize_agent
from langchain.schema import SystemMessage

system_message = SystemMessage(
    content="You are an AI that provides information to Human regarding documentation."
)
agent = initialize_agent(
    tools,
    llm=openai_llm_chat,
    agent=AgentType.OPENAI_FUNCTIONS,
    system_message=system_message,
    agent_kwargs={
        "system_message": system_message,
    },
    verbose=False,
)
```
#### Who can review?
Tag maintainers/contributors who might be interested:
To bypass SSL verification errors during fetching, you can include the
`verify=False` parameter. This documentation proves useful, especially for
beginners in the field of web scraping.
Fixes #6079
#### Who can review?
Tag maintainers/contributors who might be interested:
@hwchase17
@eyurtsev
---------
Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
To bypass SSL verification errors during web scraping, you can include
the `ssl_verify=False` parameter along with the `headers` parameter. This
combination of arguments proves useful, especially for beginners in the
field of web scraping.
Fixes #1829
#### Who can review?
Tag maintainers/contributors who might be interested:
@hwchase17 @eyurtsev
---------
Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
Hi, I made a small improvement to BaseOpenAI.
I added a `max_context_size` attribute to `BaseOpenAI` so that we can get
the max context size directly, instead of only getting the remaining token
budget for a given prompt through the `max_tokens_for_prompt` method.
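A quick sketch of the difference (model name and numbers illustrative):
```
from langchain.llms import OpenAI

llm = OpenAI(model_name="text-davinci-003")

# Previously: only the remaining token budget for a specific prompt was exposed.
remaining = llm.max_tokens_for_prompt("Tell me a joke.")

# With this change: the model's full context window is available directly.
print(llm.max_context_size)  # e.g. 4097 for text-davinci-003
```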
Who can review?
@hwchase17 @agola11
I followed the [Common
Tasks](c7db9febb0/.github/CONTRIBUTING.md) and all tests pass.
---------
Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
LLM configurations can be loaded from a Python dict (or JSON file
deserialized as dict) using the
[load_llm_from_config](8e1a7a8646/langchain/llms/loading.py (L12))
function.
However, the type string in the `type_to_cls_dict` lookup dict differs
from the type string defined in some LLM classes. This means that the
LLM object can be saved, but not loaded again, because the type strings
differ.
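A minimal reproduction of the roundtrip (which class fails depends on where the type strings mismatch):
```
from langchain.llms import OpenAI
from langchain.llms.loading import load_llm

llm = OpenAI(temperature=0)
llm.save("llm.json")  # serializes using the class's `_llm_type` string

# For the affected classes, loading raises because the saved type string
# is not found in `type_to_cls_dict`:
#   ValueError: Loading <type> LLM not supported
llm2 = load_llm("llm.json")
```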
The current version of chat history with DynamoDB doesn't correctly handle
the case when a table has no chat history. This change adds proper error
handling for that case.
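A sketch of the kind of guard involved, assuming boto3's `get_item` response (which omits the `Item` key entirely when no record exists); the function wrapper is illustrative:
```
import logging

import boto3
from botocore.exceptions import ClientError

logger = logging.getLogger(__name__)

def read_history(table_name: str, session_id: str) -> list:
    table = boto3.resource("dynamodb").Table(table_name)
    try:
        response = table.get_item(Key={"SessionId": session_id})
    except ClientError as error:
        logger.error(error)
        return []
    # "Item" is absent when the table has no history for this session,
    # so a plain response["Item"] lookup would raise KeyError.
    return response.get("Item", {}).get("History", [])
```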
Fixes https://github.com/hwchase17/langchain/issues/6088
#### Who can review?
Tag maintainers/contributors who might be interested:
@hwchase17
Fixes #6131
Simply passes kwargs forward from similarity_search to helper functions
so that search_kwargs are applied to search as originally intended. See
bug for repro steps.
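In essence (the helper name below is a stand-in for the vector store's internal search function):
```
def similarity_search(self, query: str, k: int = 4, **kwargs):
    # Before: kwargs coming from the retriever's `search_kwargs` were
    # silently dropped here. After: they are forwarded, so filters and
    # other options actually reach the underlying search.
    return self._search_helper(query, k=k, **kwargs)
```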
#### Who can review?
@hwchase17
@dev2049
Twitter: poshporcupine
Very small typo in the Constitutional AI critique default prompt. The
negation "If there is *no* material critique of ..." appears twice; it
should appear only in the first instance.
Cheers,
Pierre
Fixes https://github.com/hwchase17/langchain/issues/6208
#### Who can review?
Tag maintainers/contributors who might be interested:
VectorStores / Retrievers / Memory
- @dev2049
Hot fixes for Deep Lake [would highly appreciate expedited review]
* The deeplake version was hardcoded, and since deeplake upgraded, the
integration fails with a confusing error.
* Fixed an additional integration test affected by the embedding function.
* Additionally fixed docs links for code understanding after the docs
upgrade.
* Removed the public parameter from the notebook to make sure the code
understanding notebook works.
#### Who can review?
@hwchase17 @dev2049
---------
Co-authored-by: Davit Buniatyan <d@activeloop.ai>
Fixes #5807
#### Who can review?
Tag maintainers/contributors who might be interested: @dev2049
Related to https://github.com/hwchase17/langchain/issues/6225
Just copied the implementation from the `generate` function to `agenerate`
and tested it.
Didn't run any official tests, though.
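Schematically, the change mirrors the sync path in the async one; the class and helper below are a toy sketch, not the real signatures:
```
import asyncio

class ExampleLLM:
    """Toy sketch: `agenerate` applies the same logic as `generate`."""

    def _format(self, prompt, stop):
        # Shared preparation used by both the sync and async paths.
        return prompt if stop is None else prompt.split(stop[0])[0]

    def generate(self, prompts, stop=None):
        return [self._format(p, stop) for p in prompts]  # sync API call stand-in

    async def agenerate(self, prompts, stop=None):
        await asyncio.sleep(0)  # stands in for awaiting the async API call
        return [self._format(p, stop) for p in prompts]  # same logic as `generate`
```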
Fixes #6225
#### Who can review?
Tag maintainers/contributors who might be interested:
@hwchase17, @agola11
The LLM integration
[HuggingFaceTextGenInference](https://github.com/hwchase17/langchain/blob/master/langchain/llms/huggingface_text_gen_inference.py)
already has streaming support.
However, when streaming is enabled, it always returns an empty string as
the final output text when the LLM is finished. This is because `text`
is instantiated with an empty string and never updated.
This PR fixes the collection of the final output text by concatenating
new tokens.
Similar to https://github.com/hwchase17/langchain/pull/5818
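A simplified sketch of the fixed streaming branch (parameter handling is abbreviated; `invocation_params` is a stand-in for the real keyword arguments):
```
def _call(self, prompt, stop=None, run_manager=None, **kwargs):
    text = ""
    for response in self.client.generate_stream(prompt, **invocation_params):
        token = response.token
        if not token.special:
            # Previously `text` stayed "" because tokens were never appended.
            text += token.text
            if run_manager:
                run_manager.on_llm_new_token(token.text)
    return text
```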
Added the functionality to save/load the Graph Cypher QA Chain, after a user
reported the following error:
> NotImplementedError: Saving not supported for this chain type.
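With this change, a roundtrip along these lines should work (`llm` and `graph` are assumed to be constructed elsewhere; the path is illustrative):
```
from langchain.chains import GraphCypherQAChain
from langchain.chains.loading import load_chain

chain = GraphCypherQAChain.from_llm(llm, graph=graph)
chain.save("cypher_qa_chain.json")  # previously raised NotImplementedError
restored = load_chain("cypher_qa_chain.json", graph=graph)
```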
In LangChain, all module classes are enumerated in the `__init__.py`
file of the corresponding module. But some classes were missed and were
not included in the module `__init__.py`.
This PR:
- added the missed classes to the module `__init__.py` files
- sorted the `__init__.py:__all__` variable value (a list of the class names)
- renamed `langchain.tools.sql_database.tool.QueryCheckerTool` to
`QuerySQLCheckerTool` because it conflicted with
`langchain.tools.spark_sql.tool.QueryCheckerTool`
- changes to `pyproject.toml`:
  - added `pgvector` to `pyproject.toml:extended_testing`
  - added `pandas` to
`pyproject.toml:[tool.poetry.group.test.dependencies]`
- commented out `streamlit` in `callbacks/__init__.py`, because
`streamlit` now requires Python >=3.7, !=3.9.7
- fixed duplicate names in `tools`
- fixed the corresponding unit tests
#### Who can review?
@hwchase17
@dev2049
Fixed a PermissionError that occurred when downloading PDF files via HTTP
in BasePDFLoader on Windows.
When downloading PDF files via HTTP in BasePDFLoader, NamedTemporaryFile
is used. On **Windows**, this file cannot be opened again while it is
still open ([Python
Doc](https://docs.python.org/3.9/library/tempfile.html#tempfile.NamedTemporaryFile)).
So, we create a **temporary directory** with TemporaryDirectory and
place the downloaded file there. The temporary directory is deleted in the
destructor.
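The gist of the fix, as a simplified sketch (using `urllib` here to keep the snippet dependency-free; the real loader's download code may differ):
```
import os
import tempfile
import urllib.request

class PDFDownloadSketch:
    """Download into a TemporaryDirectory instead of a NamedTemporaryFile,
    which cannot be reopened while still open on Windows."""

    def __init__(self, web_path: str):
        self.temp_dir = tempfile.TemporaryDirectory()
        self.file_path = os.path.join(self.temp_dir.name, "tmp.pdf")
        urllib.request.urlretrieve(web_path, self.file_path)

    def __del__(self):
        # Removes the temporary directory and the downloaded file with it.
        if hasattr(self, "temp_dir"):
            self.temp_dir.cleanup()
```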
Fixes #2698
#### Who can review?
Tag maintainers/contributors who might be interested:
- @eyurtsev
- @hwchase17
This will add the ability to attach an AsyncCallbackManager (handler) to
the reducer chain, which can then stream the tokens via the
`async def on_llm_new_token` callback method.
Fixes [#5532](https://github.com/hwchase17/langchain/issues/5532)
@hwchase17 @agola11
The following code snippet shows how this change would be used to
enable `reduce_llm` with streaming support in a `map_reduce` chain.
I have tested this change and it works for the streaming use case of
reducer responses. I am happy to share more information if this solution
makes sense.
```
from typing import Any

from langchain.callbacks.base import AsyncCallbackHandler
from langchain.callbacks.manager import AsyncCallbackManager
from langchain.chains import ConversationalRetrievalChain
from langchain.chains.question_answering import load_qa_chain
from langchain.chat_models import ChatOpenAI
from langchain.llms import OpenAI

# --- Async handler ---------------------------------------------------------
class StreamingLLMCallbackHandler(AsyncCallbackHandler):
    """Callback handler for streaming LLM responses."""

    def __init__(self, websocket):
        self.websocket = websocket

    # This callback method is to be executed in async
    async def on_llm_new_token(self, token: str, **kwargs: Any) -> None:
        # ChatResponse is an application-specific schema (defined elsewhere).
        resp = ChatResponse(sender="bot", message=token, type="stream")
        await self.websocket.send_json(resp.dict())

# --- Chain setup -------------------------------------------------------------
stream_handler = StreamingLLMCallbackHandler(websocket)
stream_manager = AsyncCallbackManager([stream_handler])
streaming_llm = ChatOpenAI(
    streaming=True,
    callback_manager=stream_manager,
    verbose=False,
    temperature=0,
)
main_llm = OpenAI(
    temperature=0,
    verbose=False,
)
# `manager` is assumed to be another AsyncCallbackManager defined elsewhere.
doc_chain = load_qa_chain(
    llm=main_llm,
    reduce_llm=streaming_llm,
    chain_type="map_reduce",
    callback_manager=manager,
)
qa_chain = ConversationalRetrievalChain(
    retriever=vectorstore.as_retriever(),
    combine_docs_chain=doc_chain,
    question_generator=question_generator,
    callback_manager=manager,
)
# Here `acall` will trigger `acombine_docs` on `map_reduce`, which calls
# `_aprocess_result` and in turn `self.combine_document_chain.arun`,
# hence the async callback will be awaited.
result = await qa_chain.acall(
    {"question": question, "chat_history": chat_history}
)
```
Hi again @agola11! 🤗
## What's in this PR?
After playing around with different chains we noticed that some chains
were using different `output_key`s and we were only handling some of them,
so we've extended the support to any output, whether it's a Python list
or a string.
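Conceptually, the handling reduces to something like this (function and key names illustrative):
```
def normalize_outputs(outputs: dict) -> list:
    # Chains expose different `output_key`s ("text", "answer", "result", ...),
    # and values may be a single string or a list of strings.
    texts = []
    for value in outputs.values():
        if isinstance(value, str):
            texts.append(value)
        elif isinstance(value, list):
            texts.extend(str(v) for v in value)
    return texts
```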
Kudos to @dvsrepo for spotting this!
---------
Co-authored-by: Daniel Vila Suero <daniel@argilla.io>
Fixes https://github.com/ShreyaR/guardrails/issues/155
Enables guardrails reasking by specifying an LLM API in the output
parser.
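For reference, a sketch of what passing an LLM API to guardrails looks like, based on the guardrails 0.x `Guard` interface (the rail file and prompt params are placeholders, and the exact wiring in this PR's output parser may differ):
```
import guardrails as gd
import openai

guard = gd.Guard.from_rail("my_spec.rail")

# Supplying an `llm_api` lets guardrails re-ask the model when validation
# fails, instead of giving up after the first parse.
raw_output, validated_output = guard(
    openai.Completion.create,
    prompt_params={"question": "..."},
    num_reasks=2,
)
```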