Add a `DocumentTransformer` abstraction so that in #2915 we don't have to
wrap `TextSplitter` and `RedundantEmbeddingFilter` (neither of which uses
the query) in the contextual doc compression abstractions. With this
change, the doc filter (doc extractor, whatever we call it) would look
something like:
```python
from abc import ABC, abstractmethod
from typing import Any, List, Optional


class BaseDocumentFilter(BaseDocumentTransformer[_RetrievedDocument], ABC):
    @abstractmethod
    def filter(self, documents: List[_RetrievedDocument], query: str) -> List[_RetrievedDocument]:
        ...

    def transform_documents(self, documents: List[_RetrievedDocument], query: Optional[str] = None, **kwargs: Any) -> List[_RetrievedDocument]:
        if query is None:
            raise ValueError("Must pass in non-null query to DocumentFilter")
        return self.filter(documents, query)
```
I noticed a typo in the `custom_mrkl_agents.ipynb` document while trying
the example from the documentation page. As a result, I have opened a
pull request (PR) to fix this minor issue, even though it may seem
insignificant 😂.
The following calls were throwing an exception:
575b717d10/docs/use_cases/evaluation/agent_vectordb_sota_pg.ipynb (L192)
575b717d10/docs/use_cases/evaluation/agent_vectordb_sota_pg.ipynb (L239)
Exception:
```
---------------------------------------------------------------------------
ValidationError Traceback (most recent call last)
Cell In[14], line 1
----> 1 chain_sota = RetrievalQA.from_chain_type(llm=OpenAI(temperature=0), chain_type="stuff", retriever=vectorstore_sota, input_key="question")
File ~/github/langchain/venv/lib/python3.9/site-packages/langchain/chains/retrieval_qa/base.py:89, in BaseRetrievalQA.from_chain_type(cls, llm, chain_type, chain_type_kwargs, **kwargs)
85 _chain_type_kwargs = chain_type_kwargs or {}
86 combine_documents_chain = load_qa_chain(
87 llm, chain_type=chain_type, **_chain_type_kwargs
88 )
---> 89 return cls(combine_documents_chain=combine_documents_chain, **kwargs)
File ~/github/langchain/venv/lib/python3.9/site-packages/pydantic/main.py:341, in pydantic.main.BaseModel.__init__()
ValidationError: 1 validation error for RetrievalQA
retriever
instance of BaseRetriever expected (type=type_error.arbitrary_type; expected_arbitrary_type=BaseRetriever)
```
The vectorstores had to be converted to retrievers:
`vectorstore_sota.as_retriever()` and `vectorstore_pg.as_retriever()`.
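For example, a minimal sketch of the fix, based on the failing call in the traceback above:

```python
from langchain.chains import RetrievalQA
from langchain.llms import OpenAI

# Pass the vectorstore wrapped as a retriever, not the raw vectorstore,
# to satisfy the BaseRetriever type check.
chain_sota = RetrievalQA.from_chain_type(
    llm=OpenAI(temperature=0),
    chain_type="stuff",
    retriever=vectorstore_sota.as_retriever(),
    input_key="question",
)
```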
The PR also:
- adds the file `paul_graham_essay.txt` referenced by this notebook
- adds the `*.pkl` and `*.bin` files generated by this notebook to
`.gitignore`
Interestingly enough, the performance of the prediction greatly
increased (a new version of LangChain or a new version of the OpenAI
models since the last run of the notebook): from 19/33 correct to 28/33
correct!
- Remove dynamic model creation in the `args()` property. _Only infer
for the decorator (and add an argument to NOT infer if someone wishes to
only pass a string)_ (see the sketch below)
- Update the validation example to make it less likely to be
misinterpreted as a "safe" way to run a REPL
There is one example of "Multi-argument tools" in `custom_tools.ipynb`
from yesterday, but we could add more. The output parsing for the base
MRKL agent hasn't been adapted to handle structured args at this point
in time.
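A hedged sketch of the decorator behavior described above; `infer_schema` is my assumed name for the opt-out flag:

```python
from langchain.agents import tool

@tool
def search(query: str, max_results: int = 5) -> str:
    """Multi-argument tool; the args schema is inferred from the signature."""
    return f"top {max_results} results for {query!r}"

@tool(infer_schema=False)  # assumed flag name for the "do NOT infer" opt-out
def echo(text: str) -> str:
    """Single-string tool; no schema inference."""
    return text
```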
---------
Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
## Background
This PR fixes the following error, raised when the text sent to the
chain contains special tokens:
```
Encountered text corresponding to disallowed special token '<|endofprompt|>'.
If you want this text to be encoded as a special token, pass it to `allowed_special`, e.g. `allowed_special={'<|endofprompt|>', ...}`.
If you want this text to be encoded as normal text, disable the check for this token by passing `disallowed_special=(enc.special_tokens_set - {'<|endofprompt|>'})`.
To disable this check for all special tokens, pass `disallowed_special=()`.
```
Refer to the code snippet below; it breaks on the `chain` call.
```python
chain = ConversationalRetrievalChain.from_llm(
    ChatOpenAI(openai_api_key=OPENAI_API_KEY),
    retriever=vectorstore.as_retriever(),
    qa_prompt=prompt,
    condense_question_prompt=condense_prompt,
)
answer = chain({"question": f"{question}"})
```
However, the `ChatOpenAI` class does not currently accept
`allowed_special` and `disallowed_special`, so they cannot be passed to
`encode()` in the `get_num_tokens` method to avoid the errors.
## Change
- Add `allowed_special` and `disallowed_special` attributes to the
`BaseOpenAI` class.
- Pass `allowed_special` and `disallowed_special` as arguments to
`encode()` in tiktoken.
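A minimal sketch (not the exact diff) of how the new attributes can be forwarded to tiktoken when counting tokens; the simplified class below stands in for the real pydantic model:

```python
from typing import Literal, Sequence, Set, Union

import tiktoken


class BaseOpenAI:  # simplified stand-in for the real pydantic model
    model_name: str = "text-davinci-003"
    allowed_special: Union[Literal["all"], Set[str]] = set()
    disallowed_special: Union[Literal["all"], Sequence[str]] = "all"

    def get_num_tokens(self, text: str) -> int:
        enc = tiktoken.encoding_for_model(self.model_name)
        # Forward the special-token controls so strings like
        # '<|endofprompt|>' no longer raise at encode time.
        tokens = enc.encode(
            text,
            allowed_special=self.allowed_special,
            disallowed_special=self.disallowed_special,
        )
        return len(tokens)
```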
---------
Co-authored-by: samcarmen <carmen.samkahman@gmail.com>
I made a couple of improvements to the Comet tracker:
* The Comet project name is configurable in various ways (code,
environment variable, or file); having a default value in code meant
that users couldn't set the project name in an environment variable or
in a file.
* I added error catching when `flush_tracker` is called in order to
avoid crashing the whole process. Instead, we display a warning or error
log message (`extra={"show_traceback": True}` is an internal convention
to force the display of the traceback when using our own logger).
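A hedged sketch (not the exact diff) of the error catching described in the second bullet; the helper name below is hypothetical:

```python
import logging

LOGGER = logging.getLogger(__name__)


def _log_model_safely(tracker, langchain_asset):
    # Export failures are logged instead of raised; the internal
    # `show_traceback` convention makes our logger print the traceback.
    try:
        tracker._log_model(langchain_asset)
    except Exception:
        LOGGER.error(
            "Failed to export agent or LLM to Comet",
            extra={"show_traceback": True},
        )
```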
I decided to add the error catching after seeing the following error in
the third example of the notebook:
```
COMET ERROR: Failed to export agent or LLM to Comet
Traceback (most recent call last):
File "/home/lothiraldan/project/cometml/langchain/langchain/callbacks/comet_ml_callback.py", line 484, in _log_model
langchain_asset.save(langchain_asset_path)
File "/home/lothiraldan/project/cometml/langchain/langchain/agents/agent.py", line 591, in save
raise ValueError(
ValueError: Saving not supported for agent executors. If you are trying to save the agent, please use the `.save_agent(...)`
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/lothiraldan/project/cometml/langchain/langchain/callbacks/comet_ml_callback.py", line 449, in flush_tracker
self._log_model(langchain_asset)
File "/home/lothiraldan/project/cometml/langchain/langchain/callbacks/comet_ml_callback.py", line 488, in _log_model
langchain_asset.save_agent(langchain_asset_path)
File "/home/lothiraldan/project/cometml/langchain/langchain/agents/agent.py", line 599, in save_agent
return self.agent.save(file_path)
File "/home/lothiraldan/project/cometml/langchain/langchain/agents/agent.py", line 145, in save
agent_dict = self.dict()
File "/home/lothiraldan/project/cometml/langchain/langchain/agents/agent.py", line 119, in dict
_dict = super().dict()
File "pydantic/main.py", line 449, in pydantic.main.BaseModel.dict
File "pydantic/main.py", line 868, in _iter
File "pydantic/main.py", line 743, in pydantic.main.BaseModel._get_value
File "/home/lothiraldan/project/cometml/langchain/langchain/schema.py", line 381, in dict
output_parser_dict["_type"] = self._type
File "/home/lothiraldan/project/cometml/langchain/langchain/schema.py", line 376, in _type
raise NotImplementedError
NotImplementedError
```
I still need to investigate and try to fix it; it looks related to
saving an agent to a file.
## Use `index_id` over `app_id`
We made a major update to index + retrieve based on Metal Indexes
(instead of apps). With this change, we accept an index instead of an
app in each of our respective core APIs. [More details
here](https://docs.getmetal.io/api-reference/core/indexing).
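A hedged usage sketch of the index-based flow (the `metal_sdk` constructor argument names below are an assumption; check the linked docs):

```python
from metal_sdk.metal import Metal
from langchain.retrievers import MetalRetriever

# index_id replaces the old app_id when constructing the client
# (assumed argument names; see the Metal docs linked above).
metal = Metal(api_key="...", client_id="...", index_id="...")
retriever = MetalRetriever(client=metal, params={"limit": 2})
docs = retriever.get_relevant_documents("some query")
```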
## What is this PR for:
* This PR adds a commented line of code in the documentation that shows
how someone can use the Pinecone client with an already existing
Pinecone index.
* The documentation currently only shows how to create a Pinecone index
from LangChain documents, but not how to load one that already exists.
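A hedged sketch of what the added line illustrates (assumes an existing index named "langchain-demo" and an initialized `embeddings` object):

```python
from langchain.vectorstores import Pinecone

# Connect to an index that already exists instead of creating a new one.
docsearch = Pinecone.from_existing_index("langchain-demo", embeddings)
```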
Sometimes the LLM response (generated code) misses the ending ticks
"```", causing the text parsing to fail with "not enough values to
unpack". The two extra `_` placeholders don't add value and can cause
errors; I suggest replacing the `_, action, _` unpacking with a plain
`action` retrieved by index, as sketched below.
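A minimal sketch of the difference (hypothetical strings for illustration):

```python
text_ok = "Thought ```print(1)``` Done"
text_truncated = "Thought ```print(1)"  # missing the closing ticks

# Old approach: needs exactly three parts, so the truncated response
# raises ValueError: not enough values to unpack.
# _, action, _ = text_truncated.split("```")

# New approach: index into the split result; works for both strings.
action = text_truncated.split("```")[1]
print(action)  # print(1)
```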
Fixes issue #3057
This pull request addresses the need to share a single `chromadb.Client`
instance across multiple instances of the `Chroma` class. By
implementing a shared client, we can maintain consistency and reduce
resource usage when multiple `Chroma` instances are
created. This is especially relevant in a web app, where having multiple
`Chroma` instances with a `persist_directory` leads to these clients not
being synced.
This PR implements this option while keeping the rest of the
architecture unchanged.
**Changes:**
1. Add a client attribute to the `Chroma` class to store the shared
`chromadb.Client` instance.
2. Modify the `from_documents` method to accept an optional client
parameter.
3. Update the `from_documents` method to use the shared client if
provided, or create a new one otherwise.
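A hedged usage sketch of the shared-client option (assumes `docs_a`, `docs_b`, and `embeddings` are already prepared, and a chromadb version with this `Settings` shape):

```python
import chromadb
from chromadb.config import Settings
from langchain.vectorstores import Chroma

# One client, shared by every Chroma instance, so their state stays in sync.
client = chromadb.Client(Settings(persist_directory="./chroma"))

db_a = Chroma.from_documents(docs_a, embeddings, client=client)
db_b = Chroma.from_documents(docs_b, embeddings, client=client)
```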
Let me know if anything needs to be modified. Thanks again for your
work on this incredible repo!
This PR builds on @jzluo's PR #2748, which addressed dialect-specific
issues with SQL prompts, and adds a prompt that uses backticks for
column names when querying BigQuery. See [GoogleSQL quoted
identifiers](https://cloud.google.com/bigquery/docs/reference/standard-sql/lexical#quoted_identifiers).
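To illustrate, a hedged paraphrase of the kind of instruction such a dialect-specific prompt adds (not the exact prompt text):

```python
# Paraphrased excerpt of a BigQuery-flavored SQL prompt; wording is
# illustrative, not the exact text added by this PR.
_googlesql_prompt_excerpt = (
    "You are a GoogleSQL expert. When you write a query, wrap each column "
    "name in backticks (`) to mark it as a quoted identifier."
)
```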
Additionally, the SQL agent currently uses a generic prompt. I'm not
sure how best to adopt the same optional dialect-specific prompts as
above, but I will consider opening an issue and a PR for that too. See
[langchain/agents/agent_toolkits/sql/prompt.py](langchain/agents/agent_toolkits/sql/prompt.py).
### Description
Pass kwargs from the `from_texts` function through to the OpenSearch client.
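A hedged usage sketch (the client options shown are standard `opensearch-py` kwargs; which ones are forwarded depends on this change):

```python
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import OpenSearchVectorSearch

docsearch = OpenSearchVectorSearch.from_texts(
    texts=["hello", "world"],
    embedding=OpenAIEmbeddings(),
    opensearch_url="https://localhost:9200",
    # Extra kwargs are now forwarded to the underlying OpenSearch client:
    http_auth=("admin", "admin"),
    use_ssl=False,
    verify_certs=False,
)
```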
### Issues Resolved
https://github.com/hwchase17/langchain/issues/2819
Signed-off-by: Naveen Tatikonda <navtat@amazon.com>
`langchain.prompts.PromptTemplate` is unable to infer `input_variables`
from a jinja2 template.
```python
# Using langchain v0.0.141
template_string = """\
Hello world
Your variable: {{ var }}
{# This will not get rendered #}
{% if verbose %}
Congrats! You just turned on verbose mode and got extra messages!
{% endif %}
"""
template = PromptTemplate.from_template(template_string, template_format="jinja2")
print(template.input_variables) # Output ['# This will not get rendered #', '% endif %', '% if verbose %']
```
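For contrast, a hedged sketch of inferring the variables with jinja2's own AST utilities (reusing `template_string` from above; this mirrors the spirit of a fix, not necessarily the PR's exact code):

```python
from jinja2 import Environment, meta

env = Environment()
ast = env.parse(template_string)
# Correctly yields the undeclared variables: {'var', 'verbose'}
print(meta.find_undeclared_variables(ast))
```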
---------
Co-authored-by: engkheng <ongengkheng929@example.com>