@vowelparrot @hwchase17 Here is a new implementation of
`acompress_documents` for `LLMChainExtractor` without changes to the
sync version, as you suggested in #3587 / [Async Support for
LLMChainExtractor](https://github.com/hwchase17/langchain/pull/3587).
I created a new PR to avoid cluttering the history with reverted
commits; hope that is the right way.
Happy for any improvements/suggestions.
(PS:
I also tried an alternative implementation with a nested helper function
like
``` python
async def acompress_documents_old(
    self, documents: Sequence[Document], query: str
) -> Sequence[Document]:
    """Compress page content of raw documents."""

    async def _compress_concurrently(doc):
        _input = self.get_input(query, doc)
        output = await self.llm_chain.apredict_and_parse(**_input)
        return Document(page_content=output, metadata=doc.metadata)

    outputs = await asyncio.gather(
        *[_compress_concurrently(doc) for doc in documents]
    )
    compressed_docs = list(filter(lambda x: len(x.page_content) > 0, outputs))
    return compressed_docs
```
But in the end I found the committed version more readable and more
"canonical" - hope you agree.
Related to [this
issue](https://github.com/hwchase17/langchain/issues/3655#issuecomment-1529415363).
The `Mapped` SQLAlchemy class was introduced in SQLAlchemy 1.4, but the
migration from 1.3 to 1.4 is quite challenging, so, IMO, it's better to
keep backwards compatibility and not change the SQLAlchemy requirements
just because of type annotations.
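For illustration only, a minimal sketch of the two declaration styles (a hypothetical model, not the actual table touched here):

```python
from sqlalchemy import Column, Integer
from sqlalchemy.ext.declarative import declarative_base  # works on both 1.3 and 1.4

Base = declarative_base()


class Example(Base):  # hypothetical model for illustration
    __tablename__ = "example"

    # 1.3-compatible declaration, which stays as-is:
    id = Column(Integer, primary_key=True)

    # The typed alternative requires SQLAlchemy >= 1.4:
    # id: Mapped[int] = Column(Integer, primary_key=True)
```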
This PR fixes the "SyntaxError: invalid escape sequence" error in the
`pydantic.py` file. The issue was caused by the backslashes in the
regular expression pattern being treated as escape characters. By using
a raw string literal for the regex pattern (e.g., `r"\{.*\}"`), this fix
ensures that backslashes are treated as literal characters, thus
preventing the error.
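For illustration, a minimal sketch of the difference (a generic example, not the exact pattern used in `pydantic.py`):

```python
import re

# In a normal string literal, "\{" is an invalid escape sequence, which
# newer Python versions flag (and will eventually reject outright).
# A raw string literal keeps the backslashes literal for the regex engine.
pattern = re.compile(r"\{.*\}")

match = pattern.search('leading text {"answer": 42} trailing text')
if match:
    print(match.group())  # {"answer": 42}
```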
Co-authored-by: Tomer Levy <tomer.levy@tipalti.com>
It seems the pyllamacpp package is no longer the supported bindings for
gpt4all. Tested that this works locally.
Given that the older models weren't very performant, I think it's better
to migrate now without trying to include a lot of try / except blocks.
---------
Co-authored-by: Nissan Pow <npow@users.noreply.github.com>
Co-authored-by: Nissan Pow <pownissa@amazon.com>
### Summary
Adds `UnstructuredAPIFileLoaders` and `UnstructuredAPIFileIOLoaders`
that partition documents through the Unstructured API. Defaults to the
URL for the hosted Unstructured API, but can switch to a self-hosted or
locally running API using the `url` kwarg. Currently, the Unstructured
API is open and does not require an API key, but it will soon. A note
about that was added to the Unstructured ecosystem page.
### Testing
```python
from langchain.document_loaders import UnstructuredAPIFileIOLoader
filename = "fake-email.eml"
with open(filename, "rb") as f:
    loader = UnstructuredAPIFileIOLoader(file=f, file_filename=filename)
    docs = loader.load()

docs[0]
```
```python
from langchain.document_loaders import UnstructuredAPIFileLoader
filename = "fake-email.eml"
loader = UnstructuredAPIFileLoader(file_path=filename, mode="elements")
docs = loader.load()
docs[0]
```
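And, for a self-hosted API, something like the following should work (the local URL here is an assumption, not a documented default):

```python
from langchain.document_loaders import UnstructuredAPIFileLoader

loader = UnstructuredAPIFileLoader(
    file_path="fake-email.eml",
    url="http://localhost:8000/general/v0/general",  # hypothetical local endpoint
)
docs = loader.load()
```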
- ActionAgent has a property called `allowed_tools`, declared as a
`List`. It stores all of the provided tools that are available for use
during agent actions.
- This collection shouldn't allow duplicates, so the original `List`
datatype doesn't make sense: each tool should be unique. Even if
variants exist in the future, they would be named differently in
`load_tools`.
Test:
- Confirm the functionality in an example by initializing an agent with
a list of 2 tools and confirming everything works.
```python3
def test_agent_chain_chat_bot():
    from langchain.agents import load_tools
    from langchain.agents import initialize_agent
    from langchain.agents import AgentType
    from langchain.chat_models import ChatOpenAI
    from langchain.llms import OpenAI
    from langchain.utilities.duckduckgo_search import DuckDuckGoSearchAPIWrapper

    chat = ChatOpenAI(temperature=0)
    llm = OpenAI(temperature=0)
    tools = load_tools(["ddg-search", "llm-math"], llm=llm)
    agent = initialize_agent(
        tools, chat, agent=AgentType.CHAT_ZERO_SHOT_REACT_DESCRIPTION, verbose=True
    )
    agent.run(
        "Who is Olivia Wilde's boyfriend? "
        "What is his current age raised to the 0.23 power?"
    )


test_agent_chain_chat_bot()
```
Result:
<img width="863" alt="Screenshot 2023-05-01 at 7 58 11 PM"
src="https://user-images.githubusercontent.com/62768671/235572157-0937594c-ddfb-4760-acb2-aea4cacacd89.png">
Modified Modern Treasury and Stripe slightly so credentials don't have
to be passed in explicitly. Thanks @mattgmarcus for adding Modern
Treasury!
---------
Co-authored-by: Matt Marcus <matt.g.marcus@gmail.com>
Haven't gotten to all of them, but this:
- Updates some of the tools notebooks to actually instantiate a tool
(many just show a 'utility' rather than a tool; more changes to come in
a separate PR)
- Moves the `Tool` and decorator definitions to `langchain/tools/base.py`
(but still exports them from `langchain.agents`)
- Adds scene explain to the `load_tools()` function
- Adds unit tests for the public APIs of the `langchain.tools` and
`langchain.agents` modules
Move tool validation to each implementation of the Agent.
An alternative would be to adjust the `_validate_tools()` signature to
accept the output parser (and format instructions) and add the logic
there, something like
`parser.outputs_structured_actions(format_instructions)`.
But I don't think that's needed right now.
History from Motorhead memory returns in reversed order.
It should be: Human: 1, AI: ..., Human: 2, AI: ...
```
You are a chatbot having a conversation with a human.
AI: I'm sorry, I'm still not sure what you're trying to communicate. Can you please provide more context or information?
Human: 3
AI: I'm sorry, I'm not sure what you mean by "1" and "2". Could you please clarify your request or question?
Human: 2
AI: Hello, how can I assist you today?
Human: 1
Human: 4
AI:
```
So, I `reversed` the messages before putting them into chat_memory.
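For context, a rough sketch of the change, assuming the Motorhead API returns messages newest-first as a list of role/content dicts (the field names here are an assumption, not the exact diff):

```python
# Iterate the returned messages in reverse so the oldest message is added
# to chat_memory first and the conversation reads in chronological order.
for message in reversed(messages):
    if message["role"] == "AI":
        self.chat_memory.add_ai_message(message["content"])
    else:
        self.chat_memory.add_user_message(message["content"])
```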
The llm type of AzureOpenAI was previously set to the default, which is
openai. But since AzureOpenAI has a different API from openai, this
creates problems when saving and loading chains. This PR corrects the
llm type of AzureOpenAI to "azure".
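The change itself is small; roughly (a sketch, not the exact diff):

```python
from langchain.llms.openai import BaseOpenAI


class AzureOpenAI(BaseOpenAI):
    """Azure-specific OpenAI LLM (abridged)."""

    @property
    def _llm_type(self) -> str:
        """Return type of llm; used as the `_type` key when saving chains."""
        return "azure"
```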
Re: https://github.com/hwchase17/langchain/issues/3777
Copy-pasting from the issue:
While working on https://github.com/hwchase17/langchain/issues/3722 I
noticed that there might be a bug in the current implementation of the
OpenAI length-safe embeddings in `_get_len_safe_embeddings`, which
before https://github.com/hwchase17/langchain/issues/3722 was actually
the **default implementation** regardless of the length of the context
(via https://github.com/hwchase17/langchain/pull/2330).
It appears the weights used are constant and equal to the length of the
embedding vector (1536), NOT the number of tokens in each batch, as in
the reference implementation at
https://github.com/openai/openai-cookbook/blob/main/examples/Embedding_long_inputs.ipynb
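To make the difference concrete, here is a minimal sketch with toy data (tiny embedding dimension and made-up token counts, for illustration only):

```python
import numpy as np

# Two hypothetical chunk embeddings and the number of tokens in each chunk.
chunk_embeddings = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
chunk_lens = [8191, 100]

# Current behavior: every weight equals the embedding length, which reduces
# to a plain unweighted mean.
buggy = np.average(chunk_embeddings, axis=0, weights=[len(e) for e in chunk_embeddings])

# Reference behavior: weight each chunk by its token count.
correct = np.average(chunk_embeddings, axis=0, weights=chunk_lens)
correct = correct / np.linalg.norm(correct)  # normalize to unit length

print(buggy)    # [0.5 0.5]
print(correct)  # dominated by the first, much longer chunk
```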
<hr>
Here's some debug info:
<img width="1094" alt="image"
src="https://user-images.githubusercontent.com/1419010/235286595-a8b55298-7830-45df-b9f7-d2a2ad0356e0.png">
<hr>
We can also validate this against the reference implementation:
<details>
<summary>Reference implementation (click to unroll)</summary>
This implementation is copy-pasted from
https://github.com/openai/openai-cookbook/blob/main/examples/Embedding_long_inputs.ipynb
```py
import openai
import tiktoken
import numpy as np
from itertools import islice
from tenacity import retry, wait_random_exponential, stop_after_attempt, retry_if_not_exception_type

EMBEDDING_MODEL = 'text-embedding-ada-002'
EMBEDDING_CTX_LENGTH = 8191
EMBEDDING_ENCODING = 'cl100k_base'

# let's make sure to not retry on an invalid request, because that is what we want to demonstrate
@retry(wait=wait_random_exponential(min=1, max=20), stop=stop_after_attempt(6), retry=retry_if_not_exception_type(openai.InvalidRequestError))
def get_embedding(text_or_tokens, model=EMBEDDING_MODEL):
    return openai.Embedding.create(input=text_or_tokens, model=model)["data"][0]["embedding"]

def batched(iterable, n):
    """Batch data into tuples of length n. The last batch may be shorter."""
    # batched('ABCDEFG', 3) --> ABC DEF G
    if n < 1:
        raise ValueError('n must be at least one')
    it = iter(iterable)
    while (batch := tuple(islice(it, n))):
        yield batch

def chunked_tokens(text, encoding_name, chunk_length):
    encoding = tiktoken.get_encoding(encoding_name)
    tokens = encoding.encode(text)
    chunks_iterator = batched(tokens, chunk_length)
    yield from chunks_iterator

def reference_safe_get_embedding(text, model=EMBEDDING_MODEL, max_tokens=EMBEDDING_CTX_LENGTH, encoding_name=EMBEDDING_ENCODING, average=True):
    chunk_embeddings = []
    chunk_lens = []
    for chunk in chunked_tokens(text, encoding_name=encoding_name, chunk_length=max_tokens):
        chunk_embeddings.append(get_embedding(chunk, model=model))
        chunk_lens.append(len(chunk))
    if average:
        chunk_embeddings = np.average(chunk_embeddings, axis=0, weights=chunk_lens)
        chunk_embeddings = chunk_embeddings / np.linalg.norm(chunk_embeddings)  # normalizes length to 1
        chunk_embeddings = chunk_embeddings.tolist()
    return chunk_embeddings
```
</details>
```py
long_text = 'foo bar' * 5000
reference_safe_get_embedding(long_text, average=True)[:10]
# Here's the first 10 floats from the reference embeddings:
[0.004407593824276758,
0.0017611146161865465,
-0.019824815970984996,
-0.02177626039794025,
-0.012060967454897886,
0.0017955296329155309,
-0.015609168983609643,
-0.012059823076681351,
-0.016990468527792825,
-0.004970484452089445]
# and now langchain implementation
from langchain.embeddings.openai import OpenAIEmbeddings
OpenAIEmbeddings().embed_query(long_text)[:10]
[0.003791506184693747,
0.0025310066579390025,
-0.019282322699514628,
-0.021492679249899803,
-0.012598522213242891,
0.0022181168611315662,
-0.015858940621301307,
-0.011754004130791204,
-0.016402944319627515,
-0.004125287485127554]
# clearly they are different ^
```
- Add `langchain.llms.GooglePalm` for text completion
- Add `langchain.chat_models.ChatGooglePalm` for chat completion
- Add `langchain.embeddings.GooglePalmEmbeddings` for sentence embeddings
- Add an `example` field to `HumanMessage` and `AIMessage` so that users
can feed examples into the PaLM Chat API
- Add system and unit tests

Note: async completion for the Text API is not yet supported and will be
included in a future PR.
Happy for feedback on any aspect of this PR, especially our choice of
adding an example field to Human and AI Message objects to enable
passing example messages to the API.
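A quick usage sketch of the new classes (the API key placeholder and printed outputs are assumptions, not verified results):

```python
from langchain.llms import GooglePalm
from langchain.chat_models import ChatGooglePalm
from langchain.embeddings import GooglePalmEmbeddings
from langchain.schema import AIMessage, HumanMessage

# Text completion.
llm = GooglePalm(google_api_key="...")  # placeholder key
print(llm("Tell me a joke"))

# Chat completion, with few-shot examples marked via the new `example` field.
chat = ChatGooglePalm(google_api_key="...")
messages = [
    HumanMessage(content="2 + 2", example=True),  # example input
    AIMessage(content="4", example=True),         # example output
    HumanMessage(content="3 + 3"),
]
print(chat(messages).content)

# Sentence embeddings.
embeddings = GooglePalmEmbeddings(google_api_key="...")
vector = embeddings.embed_query("hello world")
```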
This pull request adds unit tests for various output parsers
(BooleanOutputParser, CommaSeparatedListOutputParser, and
StructuredOutputParser) to ensure their correct functionality and to
increase code reliability and maintainability. The tests cover both
valid and invalid input cases.
Changes:
- Added unit tests for `BooleanOutputParser`.
- Added unit tests for `CommaSeparatedListOutputParser`.
- Added unit tests for `StructuredOutputParser`.

Testing:
- All new unit tests have been executed, and they pass successfully.
- The overall test suite has been run, and all tests pass.

Notes:
- These tests cover both successful parsing scenarios and error handling
for invalid inputs.
- If any new output parsers are added in the future, corresponding unit
tests should also be created to maintain coverage.
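For reference, the tests follow roughly this pattern (a sketch, not the exact test bodies from the PR):

```python
import pytest

from langchain.output_parsers.boolean import BooleanOutputParser


def test_boolean_output_parser_parse() -> None:
    parser = BooleanOutputParser()

    # Valid inputs parse to the corresponding booleans.
    assert parser.parse("YES") is True
    assert parser.parse("NO") is False

    # Invalid inputs raise an error.
    with pytest.raises(ValueError):
        parser.parse("MAYBE")
```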
With longer contexts and completions, gpt-3.5-turbo and, especially,
gpt-4 will more often than not take > 60 seconds to respond.
Based on some other discussions, it seems like this is an increasingly
common problem, especially with summarization tasks:
- https://github.com/hwchase17/langchain/issues/3512
- https://github.com/hwchase17/langchain/issues/3005

OpenAI's max 600s timeout seems excessive, so I settled on 120, but I do
run into generations that take > 240 seconds when using large prompts
and completions with GPT-4, so maybe 240 would be a better compromise?
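For anyone who needs a different value, the existing `request_timeout` parameter still overrides whatever default is chosen:

```python
from langchain.chat_models import ChatOpenAI

# Bump the timeout for long GPT-4 generations (240 is just the value
# discussed above, not an official recommendation).
chat = ChatOpenAI(model_name="gpt-4", request_timeout=240)
```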