While it might be a bit more restrictive, I find that using the
Embedding interface as an input for vector store creation is better
than an embedding function, because we can use bulk requests and, if
needed, the retry logic.
I have seen that some vector store implementations use Embedding while
others use an embedding function, so I don't know what the criteria are
for choosing one or the other. In my opinion, they should all just take
Embedding, or a considerably richer embedding function that accepts
multiple texts instead of one at a time.
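As a hedged illustration of the difference (method names follow the
`Embeddings` base class; `OpenAIEmbeddings` is just one concrete example):
```python
from langchain.embeddings import OpenAIEmbeddings

embeddings = OpenAIEmbeddings()
texts = ["first doc", "second doc", "third doc"]

# Embedding interface: one bulk call for many texts (the wrapper can batch and retry)
vectors = embeddings.embed_documents(texts)

# embedding-function style: one call per text
vectors = [embeddings.embed_query(t) for t in texts]
```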
---------
Co-authored-by: Bernat Felip <bernat.felip@rea.ch>
I got this during testing
```
ValueError: Missing some input keys: {'existing_answer'}
```
Upon review, the initial prompt should be `QUESTION_PROMPT_SELECTOR`.
Co-authored-by: Bao Nguyen <bnguyen@roku.com>
Regarding [this
issue](https://github.com/hwchase17/langchain/issues/1754), the
`BasePromptTemplate` class docstring is a little outdated: the class now
requires the new `format_prompt` method.
As such, I have made some modifications to the docstring to bring it up
to date.
I tried to adhere to the established documentation style, and I would
appreciate you taking a look at this PR.
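For context, a hedged sketch of the method the updated docstring now
covers (`PromptTemplate` is used here only as a concrete subclass):
```python
from langchain.prompts import PromptTemplate

prompt = PromptTemplate(
    input_variables=["product"],
    template="Tell me about {product}.",
)
prompt_value = prompt.format_prompt(product="colorful socks")  # returns a PromptValue
print(prompt_value.to_string())    # plain string for completion models
print(prompt_value.to_messages())  # message list for chat models
```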
Fix #1756
Use the `namespace` argument of `Pinecone.from_existing_index` to set
the default value of `namespace` for other methods. This leads to more
expected behavior and easier integration in chains.
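A hedged usage sketch of the new behavior (index and namespace names are
placeholders, and `pinecone.init(...)` is assumed to have been called already):
```python
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import Pinecone

docsearch = Pinecone.from_existing_index(
    index_name="langchain-demo",
    embedding=OpenAIEmbeddings(),
    namespace="my-namespace",  # becomes the default for subsequent calls
)

# similarity_search now queries "my-namespace" without repeating the argument
docs = docsearch.similarity_search("What did the president say?")
```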
For the test, I've added a line to delete and rebuild the
`langchain-demo` index at the beginning of the test. I'm not 100% sure
whether it's a good idea, but it makes the test reproducible.
New to LangChain, I was a bit confused about where to find the toolkits
section when reading the `agent/key_concepts` docs. I added a short link
that points to the how-to section.
While testing out `VectorDBQA` as a `Tool` for one of the conversations,
I happened to get a response from the LLM (OpenAI) like this:
<code>
Could not parse LLM output: Here's a response using the Product Search
tool:
```json
{
"action": "Product Search",
"action_input": "pots for plants"
}
```
This will allow you to search for pots for your plants and find a
variety of options that are available for purchase. You can use this
information to choose the pots that best fit your needs and preferences.
</code>
i.e. the response had text before *and after* the expected JSON, leading
to a `JSONDecodeError`. It's fixed now by removing the unwanted text
after the closing '```'.
The error I encountered in this Jupyter Notebook -
[link](https://github.com/anselm94/chatbot-llm-ecommerce/blob/main/chatcommerce.ipynb)
<details>
<summary>Error encountered</summary>
<code>
---------------------------------------------------------------------------
JSONDecodeError Traceback (most recent call last)
File
~/Git/chatbot-llm-ecommerce/.venv/lib/python3.11/site-packages/langchain/agents/conversational_chat/base.py:104,
in ConversationalChatAgent._extract_tool_and_input(self, llm_output)
103 try:
--> 104 response = self.output_parser.parse(llm_output)
105 return response["action"], response["action_input"]
File
~/Git/chatbot-llm-ecommerce/.venv/lib/python3.11/site-packages/langchain/agents/conversational_chat/base.py:49,
in AgentOutputParser.parse(self, text)
48 cleaned_output = cleaned_output.strip()
---> 49 response = json.loads(cleaned_output)
50 return {"action": response["action"], "action_input":
response["action_input"]}
File
/opt/homebrew/Cellar/python@3.11/3.11.2_1/Frameworks/Python.framework/Versions/3.11/lib/python3.11/json/__init__.py:346,
in loads(s, cls, object_hook, parse_float, parse_int, parse_constant,
object_pairs_hook, **kw)
343 if (cls is None and object_hook is None and
344 parse_int is None and parse_float is None and
345 parse_constant is None and object_pairs_hook is None and not kw):
--> 346 return _default_decoder.decode(s)
347 if cls is None:
File
/opt/homebrew/Cellar/python@3.11/3.11.2_1/Frameworks/Python.framework/Versions/3.11/lib/python3.11/json/decoder.py:340,
in JSONDecoder.decode(self, s, _w)
339 if end != len(s):
--> 340 raise JSONDecodeError("Extra data", s, end)
341 return obj
JSONDecodeError: Extra data: line 5 column 1 (char 74)
During handling of the above exception, another exception occurred:
ValueError Traceback (most recent call last)
Cell In[22], line 1
----> 1 ask_ai.run("Yes. I need pots for my plants")
File
~/Git/chatbot-llm-ecommerce/.venv/lib/python3.11/site-packages/langchain/chains/base.py:213,
in Chain.run(self, *args, **kwargs)
211 if len(args) != 1:
212 raise ValueError("`run` supports only one positional argument.")
--> 213 return self(args[0])[self.output_keys[0]]
215 if kwargs and not args:
216 return self(kwargs)[self.output_keys[0]]
File
~/Git/chatbot-llm-ecommerce/.venv/lib/python3.11/site-packages/langchain/chains/base.py:116,
in Chain.__call__(self, inputs, return_only_outputs)
114 except (KeyboardInterrupt, Exception) as e:
115 self.callback_manager.on_chain_error(e, verbose=self.verbose)
--> 116 raise e
117 self.callback_manager.on_chain_end(outputs, verbose=self.verbose)
118 return self.prep_outputs(inputs, outputs, return_only_outputs)
File
~/Git/chatbot-llm-ecommerce/.venv/lib/python3.11/site-packages/langchain/chains/base.py:113,
in Chain.__call__(self, inputs, return_only_outputs)
107 self.callback_manager.on_chain_start(
108 {"name": self.__class__.__name__},
109 inputs,
110 verbose=self.verbose,
111 )
112 try:
--> 113 outputs = self._call(inputs)
114 except (KeyboardInterrupt, Exception) as e:
115 self.callback_manager.on_chain_error(e, verbose=self.verbose)
File
~/Git/chatbot-llm-ecommerce/.venv/lib/python3.11/site-packages/langchain/agents/agent.py:499,
in AgentExecutor._call(self, inputs)
497 # We now enter the agent loop (until it returns something).
498 while self._should_continue(iterations):
--> 499 next_step_output = self._take_next_step(
500 name_to_tool_map, color_mapping, inputs, intermediate_steps
501 )
502 if isinstance(next_step_output, AgentFinish):
503 return self._return(next_step_output, intermediate_steps)
File
~/Git/chatbot-llm-ecommerce/.venv/lib/python3.11/site-packages/langchain/agents/agent.py:409,
in AgentExecutor._take_next_step(self, name_to_tool_map, color_mapping,
inputs, intermediate_steps)
404 """Take a single step in the thought-action-observation loop.
405
406 Override this to take control of how the agent makes and acts on
choices.
407 """
408 # Call the LLM to see what to do.
--> 409 output = self.agent.plan(intermediate_steps, **inputs)
410 # If the tool chosen is the finishing tool, then we end and return.
411 if isinstance(output, AgentFinish):
File
~/Git/chatbot-llm-ecommerce/.venv/lib/python3.11/site-packages/langchain/agents/agent.py:105,
in Agent.plan(self, intermediate_steps, **kwargs)
94 """Given input, decided what to do.
95
96 Args:
(...)
102 Action specifying what tool to use.
103 """
104 full_inputs = self.get_full_inputs(intermediate_steps, **kwargs)
--> 105 action = self._get_next_action(full_inputs)
106 if action.tool == self.finish_tool_name:
107 return AgentFinish({"output": action.tool_input}, action.log)
File
~/Git/chatbot-llm-ecommerce/.venv/lib/python3.11/site-packages/langchain/agents/agent.py:67,
in Agent._get_next_action(self, full_inputs)
65 def _get_next_action(self, full_inputs: Dict[str, str]) ->
AgentAction:
66 full_output = self.llm_chain.predict(**full_inputs)
---> 67 parsed_output = self._extract_tool_and_input(full_output)
68 while parsed_output is None:
69 full_output = self._fix_text(full_output)
File
~/Git/chatbot-llm-ecommerce/.venv/lib/python3.11/site-packages/langchain/agents/conversational_chat/base.py:107,
in ConversationalChatAgent._extract_tool_and_input(self, llm_output)
105 return response["action"], response["action_input"]
106 except Exception:
--> 107 raise ValueError(f"Could not parse LLM output: {llm_output}")
ValueError: Could not parse LLM output: Here's a response using the
Product Search tool:
```json
{
"action": "Product Search",
"action_input": "pots for plants"
}
```
This will allow you to search for pots for your plants and find a
variety of options that are available for purchase. You can use this
information to choose the pots that best fit your needs and preferences.
</code>
</details>
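A minimal sketch of the cleanup idea (not the exact upstream change; the
helper name is illustrative): keep only the content of the first fenced
block before handing it to `json.loads`.
```python
import json

def extract_action(llm_output: str) -> dict:
    cleaned = llm_output.strip()
    if "```" in cleaned:
        # parts = [text before the fence, fenced content, text after the fence]
        parts = cleaned.split("```", 2)
        if len(parts) > 1:
            cleaned = parts[1]
        if cleaned.startswith("json"):
            cleaned = cleaned[len("json"):]
    response = json.loads(cleaned.strip())
    return {"action": response["action"], "action_input": response["action_input"]}
```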
Given that different models have very different latencies and pricing,
it's beneficial to pass along information about the model that generated
the response. Such information allows implementing custom callback
managers and tracking usage and price per model.
Addresses https://github.com/hwchase17/langchain/issues/1557.
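A hedged sketch of the kind of per-model tracking this enables (key names
such as `model_name` and `token_usage` assume the OpenAI wrapper's
`llm_output`):
```python
from langchain.llms import OpenAI

llm = OpenAI(model_name="text-davinci-003")
result = llm.generate(["Tell me a joke."])

# llm_output carries model-level info alongside token usage, so usage and cost
# can be attributed per model rather than in aggregate.
info = result.llm_output or {}
model = info.get("model_name", "unknown")
tokens = info.get("token_usage", {}).get("total_tokens", 0)
print(f"{model}: {tokens} tokens")
```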
This `BSHTMLLoader` document_loader loads an HTML document, extracts
text and adds the page title to the returned Document's metadata. The
loader uses the already installed bs4 package to extract both text
content and the page title.
Included in this PR is an example HTML file and an integration test that
tests against this file.
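A hedged usage sketch of the loader (the file path is illustrative):
```python
from langchain.document_loaders import BSHTMLLoader

loader = BSHTMLLoader("some_page.html")
docs = loader.load()

print(docs[0].metadata["title"])   # page title extracted via bs4
print(docs[0].page_content[:200])  # extracted text content
```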
---------
Co-authored-by: Daniel Chalef <daniel.chalef@private.org>
```
from langchain.llms import OpenAI
from langchain.output_parsers import PydanticOutputParser
from langchain.prompts import PromptTemplate
from pydantic import BaseModel, Field


class Joke(BaseModel):
    setup: str = Field(description="question to set up a joke")
    punchline: str = Field(description="answer to resolve the joke")


joke_query = "Tell me a joke."

# Or, an example with compound type fields.
# class FloatArray(BaseModel):
#     values: List[float] = Field(description="list of floats")
#
# float_array_query = "Write out a few terms of fibonacci."

model = OpenAI(model_name='text-davinci-003', temperature=0.0)
parser = PydanticOutputParser(pydantic_object=Joke)
prompt = PromptTemplate(
    template="Answer the user query.\n{format_instructions}\n{query}\n",
    input_variables=["query"],
    partial_variables={"format_instructions": parser.get_format_instructions()},
)

_input = prompt.format_prompt(query=joke_query)
print("Prompt:\n", _input.to_string())
output = model(_input.to_string())
print("Completion:\n", output)
parsed_output = parser.parse(output)
print("Parsed completion:\n", parsed_output)
```
```
Prompt:
Answer the user query.
The output should be formatted as a JSON instance that conforms to the JSON schema below. For example, the object {"foo": ["bar", "baz"]} conforms to the schema {"foo": {"description": "a list of strings field", "type": "string"}}.
Here is the output schema:
---
{"setup": {"description": "question to set up a joke", "type": "string"}, "punchline": {"description": "answer to resolve the joke", "type": "string"}}
---
Tell me a joke.
Completion:
{"setup": "Why don't scientists trust atoms?", "punchline": "Because they make up everything!"}
Parsed completion:
setup="Why don't scientists trust atoms?" punchline='Because they make up everything!'
```
Of course, this works only with LMs of sufficient capacity. DaVinci is
reliable, but not always.
---------
Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
I'm hitting some dependency issues related to this strict pinning. I'm
unsure of the knock-on effects, but I wanted to propose loosening it
down a couple of versions.
PromptLayer now has support for [several different tracking
features](https://magniv.notion.site/Track-4deee1b1f7a34c1680d085f82567dab9).
In order to use any of these features, you need to have a request id
associated with the request.
In this PR we add a boolean argument called `return_pl_id`, which will
add `pl_request_id` to the `generation_info` dictionary associated with
a generation.
We also updated the relevant documentation.
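A hedged usage sketch of the new flag (the prompt is illustrative, and
`PROMPTLAYER_API_KEY` is assumed to be set):
```python
from langchain.llms import PromptLayerOpenAI

llm = PromptLayerOpenAI(return_pl_id=True)
result = llm.generate(["Tell me a joke."])

for generations in result.generations:
    for gen in generations:
        pl_request_id = (gen.generation_info or {}).get("pl_request_id")
        # pl_request_id can now be passed to PromptLayer's tracking APIs
        print(pl_request_id)
```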
The basic vector store example started breaking because `Document`
requires metadata to be `not None`, but Chroma stores metadata as `None`
if none is provided. This PR adds a fallback, which fixes the basic
tutorial:
https://langchain.readthedocs.io/en/latest/modules/indexes/examples/vectorstores.html
Here is the error that was generated
```
Running Chroma using direct local API.
Using DuckDB in-memory for database. Data will be transient.
Traceback (most recent call last):
File "/Users/jeff/src/temp/langchainchroma/test.py", line 17, in <module>
docs = docsearch.similarity_search(query)
File "/Users/jeff/src/langchain/langchain/vectorstores/chroma.py", line 133, in similarity_search
docs_and_scores = self.similarity_search_with_score(query, k)
File "/Users/jeff/src/langchain/langchain/vectorstores/chroma.py", line 182, in similarity_search_with_score
return _results_to_docs_and_scores(results)
File "/Users/jeff/src/langchain/langchain/vectorstores/chroma.py", line 24, in _results_to_docs_and_scores
return [
File "/Users/jeff/src/langchain/langchain/vectorstores/chroma.py", line 27, in <listcomp>
(Document(page_content=result[0], metadata=result[1]), result[2])
File "pydantic/main.py", line 331, in pydantic.main.BaseModel.__init__
pydantic.error_wrappers.ValidationError: 1 validation error for Document
metadata
none is not an allowed value (type=type_error.none.not_allowed)
Exiting: Cleaning up .chroma directory
```
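The fallback amounts to substituting an empty dict when Chroma returns
`None` metadata; a minimal sketch of the idea (the helper name is
illustrative, not the exact upstream diff):
```python
from langchain.docstore.document import Document

def to_doc_and_score(result):
    """Build a Document, falling back to {} when Chroma returned None metadata."""
    page_content, metadata, score = result
    return Document(page_content=page_content, metadata=metadata or {}), score
```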
Add the state_of_the_union.txt file so that it's easier to follow along
with the example.
---------
Co-authored-by: Jithin James <jjmachan@pop-os.localdomain>
This PR implements a basic metadata filtering mechanism similar to the
ones in Chroma and Pinecone. It still cannot express complex conditions,
as there are no operators, but some users requested this feature.
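A hedged illustration only, shown with Chroma because this PR mirrors its
filter style (the store this PR actually targets may use a different
keyword):
```python
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import Chroma

docsearch = Chroma.from_texts(
    ["terracotta pot", "watering can"],
    OpenAIEmbeddings(),
    metadatas=[{"category": "garden"}, {"category": "tools"}],
)

# Exact-match filtering on metadata keys; no comparison operators yet.
docs = docsearch.similarity_search("pots for plants", k=1, filter={"category": "garden"})
```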
* Zapier Wrapper and Tools (implemented by Zapier Team)
* Zapier Toolkit, with examples using the MRKL agent
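A hedged usage sketch of the toolkit (assumes a Zapier NLA API key is
available in `ZAPIER_NLA_API_KEY`; the task string is illustrative):
```python
from langchain.agents import initialize_agent
from langchain.agents.agent_toolkits import ZapierToolkit
from langchain.llms import OpenAI
from langchain.utilities.zapier import ZapierNLAWrapper

llm = OpenAI(temperature=0)
zapier = ZapierNLAWrapper()
toolkit = ZapierToolkit.from_zapier_nla_wrapper(zapier)

agent = initialize_agent(
    toolkit.get_tools(), llm, agent="zero-shot-react-description", verbose=True
)
agent.run("Summarize the last email I received and send the summary to Slack.")
```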
---------
Co-authored-by: Mike Knoop <mikeknoop@gmail.com>
Co-authored-by: Robert Lewis <robert.lewis@zapier.com>