Sometimes the LLM response (generated code) is missing the closing
ticks "```", which causes the text parsing to fail with a "not enough
values to unpack" error.
The two extra `_` placeholders don't add value and can cause errors, so the
suggestion is to replace the `_, action, _` unpacking with a single `action`
taken by index.
Fixes issue #3057
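For illustration, a minimal sketch of the suggested change (the variable names here are placeholders, not the exact ones in the parser):

```python
# Placeholder response that is missing the closing ``` fence:
llm_output = 'Action:\n```\n{"action": "Final Answer", "action_input": "hi"}'

# Old: raises ValueError ("not enough values to unpack") because the split
# yields only two parts when the closing fence is missing.
# _, action, _ = llm_output.split("```")

# Suggested: index into the split result; this works whether or not the
# closing ``` is present.
action = llm_output.split("```")[1]
print(action)
```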
This pull request addresses the need to share a single `chromadb.Client`
instance across multiple instances of the `Chroma` class. By
implementing a shared client, we can maintain consistency and reduce
resource usage when multiple instances of the `Chroma` class are
created. This is especially relevant in a web app, where having multiple
`Chroma` instances with a `persist_directory` leads to these clients not
being synced.
This PR implements this option while keeping the rest of the
architecture unchanged.
**Changes:**
1. Add a client attribute to the `Chroma` class to store the shared
`chromadb.Client` instance.
2. Modify the `from_documents` method to accept an optional client
parameter.
3. Update the `from_documents` method to use the shared client if
provided or create a new client if not provided.
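A minimal sketch of the intended usage (assuming the new parameter is named `client`; the documents and embedding model are placeholders):

```python
import chromadb
from langchain.embeddings import OpenAIEmbeddings
from langchain.schema import Document
from langchain.vectorstores import Chroma

shared_client = chromadb.Client()
embeddings = OpenAIEmbeddings()

# Both Chroma instances reuse the same underlying chromadb.Client,
# so they stay in sync instead of each opening their own client.
db1 = Chroma.from_documents([Document(page_content="foo")], embeddings, client=shared_client)
db2 = Chroma.from_documents([Document(page_content="bar")], embeddings, client=shared_client)
```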
Let me know if anything needs to be modified - thanks again for your
work on this incredible repo
This PR extends upon @jzluo 's PR #2748 which addressed dialect-specific
issues with SQL prompts, and adds a prompt that uses backticks for
column names when querying BigQuery. See [GoogleSQL quoted
identifiers](https://cloud.google.com/bigquery/docs/reference/standard-sql/lexical#quoted_identifiers).
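For reference, the BigQuery prompt differs from the generic one mainly by an instruction along these lines (illustrative wording, not the exact text added in this PR):

```python
# Illustrative only: the GoogleSQL prompt asks the model to quote column
# names with backticks instead of double quotes.
_googlesql_instruction = (
    "Wrap each column name in backticks (`) to denote them as delimited identifiers."
)
```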
Additionally, the SQL agent currently uses a generic prompt. Not sure
how best to adopt the same optional dialect-specific prompts as above,
but will consider making an issue and PR for that too. See
[langchain/agents/agent_toolkits/sql/prompt.py](langchain/agents/agent_toolkits/sql/prompt.py).
### Description
Pass kwargs to get OpenSearch client from `from_texts` function
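A minimal sketch of the intended usage, assuming connection kwargs such as `http_auth` and `use_ssl` are now forwarded from `from_texts` to the underlying OpenSearch client:

```python
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import OpenSearchVectorSearch

docsearch = OpenSearchVectorSearch.from_texts(
    ["hello", "world"],
    OpenAIEmbeddings(),
    opensearch_url="https://localhost:9200",
    # kwargs below are passed through to the OpenSearch client:
    http_auth=("admin", "admin"),
    use_ssl=False,
    verify_certs=False,
)
```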
### Issues Resolved
https://github.com/hwchase17/langchain/issues/2819
Signed-off-by: Naveen Tatikonda <navtat@amazon.com>
`langchain.prompts.PromptTemplate` is unable to infer `input_variables`
from a jinja2 template.
```python
# Using langchain v0.0.141
from langchain.prompts import PromptTemplate

template_string = """\
Hello world
Your variable: {{ var }}
{# This will not get rendered #}
{% if verbose %}
Congrats! You just turned on verbose mode and got extra messages!
{% endif %}
"""
template = PromptTemplate.from_template(template_string, template_format="jinja2")
print(template.input_variables) # Output ['# This will not get rendered #', '% endif %', '% if verbose %']
```
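For comparison, jinja2 itself can report a template's undeclared variables, which is roughly the behaviour one would expect here (a sketch of the idea, reusing `template_string` from the snippet above; not necessarily the fix adopted in this PR):

```python
from jinja2 import Environment, meta

env = Environment()
ast = env.parse(template_string)
print(meta.find_undeclared_variables(ast))  # {'var', 'verbose'}
```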
---------
Co-authored-by: engkheng <ongengkheng929@example.com>
- Updated `langchain/docs/modules/models/llms/integrations/` notebooks:
added links to the original sites, the install information, etc.
- Added the `nlpcloud` notebook.
- Removed "Example" from Titles of some notebooks, so all notebook
titles are consistent.
### https://github.com/hwchase17/langchain/issues/2997
Replaced `conversation.memory.store` with
`conversation.memory.entity_store.store`,
since `conversation.memory.store` doesn't exist, and re-ran the whole file.
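For context, roughly what the corrected cell looks like (import paths as of langchain v0.0.14x; they may differ in newer versions):

```python
from langchain.chains import ConversationChain
from langchain.chains.conversation.prompt import ENTITY_MEMORY_CONVERSATION_TEMPLATE
from langchain.llms import OpenAI
from langchain.memory import ConversationEntityMemory

llm = OpenAI(temperature=0)
conversation = ConversationChain(
    llm=llm,
    prompt=ENTITY_MEMORY_CONVERSATION_TEMPLATE,
    memory=ConversationEntityMemory(llm=llm),
)
conversation.predict(input="Deven & Sam are working on a hackathon project")

# `conversation.memory.store` does not exist; the entity cache lives here:
print(conversation.memory.entity_store.store)
```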
This allows the user to catch the issue and handle it rather than failing
hard.
This happens more often than you'd expect when using output parsers with
ChatGPT, especially if the temperature is anything other than 0. Sometimes
the model doesn't want to listen and just does its own thing.
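As an illustration, the kind of handling this enables (the parser and the broad `except` here are just examples; the exact exception class raised may differ):

```python
from pydantic import BaseModel, Field
from langchain.output_parsers import PydanticOutputParser


class Joke(BaseModel):
    setup: str = Field(description="the setup of the joke")
    punchline: str = Field(description="the punchline")


parser = PydanticOutputParser(pydantic_object=Joke)
bad_generation = "Sorry, I'd rather just chat today."  # model ignored the format

try:
    joke = parser.parse(bad_generation)
except Exception:
    joke = None  # recover (retry, re-prompt, fall back) instead of crashing the chain
```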
Not sure what happened here but some of the file got overwritten by
#2859 which broke filtering logic.
Here it is, fixed back to normal.
@hwchase17 can we expedite this if possible :-)
---------
Co-authored-by: Altay Sansal <altay.sansal@tgs.com>
- Most important - fixes the relevance_fn name in the notebook to align
with the docs
- Updates comments for the summary:
<img width="787" alt="image"
src="https://user-images.githubusercontent.com/130414180/232520616-2a99e8c3-a821-40c2-a0d5-3f3ea196c9bb.png">
- The new conversation is a bit better, still unfortunate they try to
schedule a followup.
- Removed the max dialogue turns argument from the conversation function
Add a time-weighted memory retriever and a notebook that approximates a
Generative Agent from https://arxiv.org/pdf/2304.03442.pdf.
The "daily plan" components are removed for now since they are less
useful without a virtual world, but the memory is an interesting
component to build off.
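The core idea is to combine semantic relevance with a recency term that decays over time; a rough sketch of that scoring (the constants and exact combination here are illustrative, see the retriever implementation for details):

```python
from datetime import datetime, timedelta
from typing import Optional


def combined_score(
    relevance: float,
    last_accessed: datetime,
    decay_rate: float = 0.01,
    now: Optional[datetime] = None,
) -> float:
    now = now or datetime.now()
    hours_passed = (now - last_accessed).total_seconds() / 3600
    recency = (1.0 - decay_rate) ** hours_passed  # decays toward 0 as memories age
    return relevance + recency


# A memory last touched a day ago scores lower than one touched just now.
print(combined_score(0.8, datetime.now() - timedelta(hours=24)))
print(combined_score(0.8, datetime.now()))
```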
---------
Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
### Background
Continuing to implement all the interface methods defined by the
`VectorStore` class. This PR pertains to implementation of the
`max_marginal_relevance_search` method.
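A minimal usage sketch (the Weaviate URL and texts are placeholders; MMR needs an embedding function so the query can be embedded client-side):

```python
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import Weaviate

texts = ["foo", "bar", "baz", "foo bar", "hello world"]
docsearch = Weaviate.from_texts(
    texts, OpenAIEmbeddings(), weaviate_url="http://localhost:8080"
)

# MMR balances relevance to the query against diversity among returned documents.
docs = docsearch.max_marginal_relevance_search("foo", k=2)
```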
### Changes
- a `max_marginal_relevance_search` method implementation has been added
in `weaviate.py`
- tests have been added for the new method
- vcr cassettes have been added for the weaviate tests
### Test Plan
Added tests for the `max_marginal_relevance_search` implementation
### Change Safety
- [x] I have added tests to cover my changes
- Modify the `SVMRetriever` class to add an optional `relevancy_threshold`
- Modify the `SVMRetriever.get_relevant_documents` method to filter out
documents with similarity scores below the relevancy threshold
- Normalize the similarities to be between 0 and 1 so the
`relevancy_threshold` makes more sense
- Limit the number of results to the top k documents or the number of
relevant documents above the threshold, whichever is smaller (sketched
below)
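A rough sketch of that filtering logic (names and details are illustrative, not the exact implementation):

```python
import numpy as np


def top_relevant(similarities: np.ndarray, k: int, relevancy_threshold: float):
    # normalize scores to [0, 1] so the threshold is easier to reason about
    span = np.max(similarities) - np.min(similarities) + 1e-6
    normalized = (similarities - np.min(similarities)) / span
    # rank by raw similarity, keep at most k, and drop anything below the threshold
    order = np.argsort(-similarities)[:k]
    return [int(i) for i in order if normalized[i] >= relevancy_threshold]


print(top_relevant(np.array([0.9, 0.1, 0.5, 0.85]), k=3, relevancy_threshold=0.25))
```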
This code will now return the top `self.k` results (or fewer, if there are
not enough results that meet the `self.relevancy_threshold` criterion).
The `svm.LinearSVC` implementation in scikit-learn is non-deterministic,
which means
`SVMRetriever.from_texts(["bar", "world", "foo", "hello", "foo bar"])`
could return `[3 0 5 4 2 1]` instead of `[0 3 5 4 2 1]` for the query
"foo".
If you pass in multiple "foo" texts, the order could be different each
time. Here, we only care that the 0 is the first element; otherwise it
would offset the texts and similarities.
Example:
```python
from langchain.embeddings import OpenAIEmbeddings
from langchain.retrievers import SVMRetriever

retriever = SVMRetriever.from_texts(
    ["foo", "bar", "world", "hello", "foo bar"],
    OpenAIEmbeddings(),
    k=4,
    relevancy_threshold=0.25,
)
result = retriever.get_relevant_documents("foo")
```
yields
```python
[Document(page_content='foo', metadata={}), Document(page_content='foo bar', metadata={})]
```
---------
Co-authored-by: Brandon Sandoval <52767641+account00001@users.noreply.github.com>
re
https://github.com/hwchase17/langchain/issues/439#issuecomment-1510442791
I think it's not polite for a library to use the root logger
both of these forms are also used:
```
logger = logging.getLogger(__name__)
logger = logging.getLogger(__file__)
```
I am not sure if there is any reason behind one vs. the other (I am
guessing they were just contributed by different people).
It seems to me it'd be better to consistently use
`logging.getLogger(__name__)`.
This makes it easier for consumers of the library to set up log
handlers, e.g. for everything with the `langchain.` prefix.
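That would let a consumer do something like:

```python
import logging

# configure every logger under the "langchain." prefix in one place
langchain_logger = logging.getLogger("langchain")
langchain_logger.addHandler(logging.StreamHandler())
langchain_logger.setLevel(logging.DEBUG)
```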
Use numexpr evaluate instead of the python REPL to avoid malicious code
injection.
Tested against the (limited) math dataset and got the same score as
before.
For more permissive tools (like the REPL tool itself), other approaches
ought to be provided (some combination of Sanitizer + Restricted python
+ unprivileged-docker + ...), but for a calculator tool, only
mathematical expressions should be permitted.
See https://github.com/hwchase17/langchain/issues/814
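For illustration, `numexpr` evaluates mathematical expressions without handing the string to the Python interpreter:

```python
import numexpr

# Only mathematical expressions are supported, so the chain no longer needs
# to exec() arbitrary Python to compute a result.
print(numexpr.evaluate("2 ** 10 + 37 % 5"))  # 1026
```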
Last week I added the `PDFMinerPDFasHTMLLoader`. I am adding some
example code in the notebook to serve as a tutorial for how that loader
can be used to create snippets of a pdf that are structured within
sections. All the other loaders only provide the `Document` objects
segmented by pages but that's pretty loose given the amount of other
metadata that can be extracted.
With the new loader, one can leverage the font size of the text to decide
when a new section starts and segment the text more semantically, as
shown in the tutorial notebook. The cell shows that we are able to find
the content of the entire **Related Work** section for the example pdf,
which is spread across 2 pages and hence is stored as two separate
documents by other loaders.
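Roughly the idea shown in the notebook (the file path, font-size threshold, and HTML structure assumptions here are illustrative):

```python
import re

from bs4 import BeautifulSoup
from langchain.document_loaders import PDFMinerPDFasHTMLLoader

loader = PDFMinerPDFasHTMLLoader("example.pdf")  # placeholder path
html_doc = loader.load()[0]  # a single Document whose page_content is HTML

soup = BeautifulSoup(html_doc.page_content, "html.parser")
for span in soup.find_all("span"):
    match = re.search(r"font-size:(\d+)px", span.get("style", ""))
    if match and int(match.group(1)) >= 16:  # assumed heading-size threshold
        print("possible section heading:", span.get_text(strip=True))
```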
Fixes a bug I was seeing when the `TokenTextSplitter` was correctly
splitting text under the gpt-3.5-turbo token limit, but when firing the
prompt off to OpenAI, it'd come back with an error that we were over
the context limit.
gpt-3.5-turbo and gpt-4 use the `cl100k_base` tokenizer, so the counts
are just always off with the default `gpt-2` encoder.
It's possible to pass along the encoding to the `TokenTextSplitter`, but
it's much simpler to pass the model name of the LLM. No more concern
about keeping the tokenizer and LLM model in sync :)
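A sketch of the intended usage (assuming the new parameter is named `model_name`):

```python
from langchain.text_splitter import TokenTextSplitter

long_text = "some long document text ..."  # placeholder

# Previously the encoding had to be specified explicitly (default was gpt-2):
splitter = TokenTextSplitter(encoding_name="cl100k_base", chunk_size=1000, chunk_overlap=0)

# With this change, passing the model name keeps the tokenizer and the LLM in sync:
splitter = TokenTextSplitter(model_name="gpt-3.5-turbo", chunk_size=1000, chunk_overlap=0)
chunks = splitter.split_text(long_text)
```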
I got the following stacktrace when the agent was trying to search
Wikipedia with a huge query:
```
Thought:{
  "action": "Wikipedia",
  "action_input": "Outstanding is a song originally performed by the Gap Band and written by member Raymond Calhoun. The song originally appeared on the group's platinum-selling 1982 album Gap Band IV. It is one of their signature songs and biggest hits, reaching the number one spot on the U.S. R&B Singles Chart in February 1983. \"Outstanding\" peaked at number 51 on the Billboard Hot 100."
}
Traceback (most recent call last):
  File "/usr/src/app/tests/chat.py", line 121, in <module>
    answer = agent_chain.run(input=question)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/langchain/chains/base.py", line 216, in run
    return self(kwargs)[self.output_keys[0]]
           ^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/langchain/chains/base.py", line 116, in __call__
    raise e
  File "/usr/local/lib/python3.11/site-packages/langchain/chains/base.py", line 113, in __call__
    outputs = self._call(inputs)
              ^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/langchain/agents/agent.py", line 828, in _call
    next_step_output = self._take_next_step(
                       ^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/langchain/agents/agent.py", line 725, in _take_next_step
    observation = tool.run(
                  ^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/langchain/tools/base.py", line 73, in run
    raise e
  File "/usr/local/lib/python3.11/site-packages/langchain/tools/base.py", line 70, in run
    observation = self._run(tool_input)
                  ^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/langchain/agents/tools.py", line 17, in _run
    return self.func(tool_input)
           ^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/langchain/utilities/wikipedia.py", line 40, in run
    search_results = self.wiki_client.search(query)
                     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/wikipedia/util.py", line 28, in __call__
    ret = self._cache[key] = self.fn(*args, **kwargs)
                             ^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/wikipedia/wikipedia.py", line 109, in search
    raise WikipediaException(raw_results['error']['info'])
wikipedia.exceptions.WikipediaException: An unknown error occured: "Search request is longer than the maximum allowed length. (Actual: 373; allowed: 300)". Please report it on GitHub!
```
This commit limits the maximum size of the query passed to Wikipedia to
avoid this issue.
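A sketch of the change (the constant name and exact limit here are illustrative):

```python
WIKIPEDIA_MAX_QUERY_LENGTH = 300  # the API rejects search requests longer than this


def clip_query(query: str, limit: int = WIKIPEDIA_MAX_QUERY_LENGTH) -> str:
    # Clip oversized agent-generated queries before calling wiki_client.search(),
    # so they no longer trigger the WikipediaException shown above.
    return query[:limit]


print(len(clip_query("word " * 100)))  # 300
```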