Commit Graph

936 Commits (agent-lookup-tool-bad)
 

Author SHA1 Message Date
Harrison Chase 0ca1641b14
release 0.0.117 (#1819) 1 year ago
Harrison Chase d5b4393bb2
Harrison/llm math (#1808)
Co-authored-by: Vadym Barda <vadim.barda@gmail.com>
1 year ago
Bryan Helmig 7b6ff7fe00
Follow up to #1803 to remove dynamic docs route. (#1818)
The base docs are going to be more stable and familiar for folks. The
dynamic route is currently in flux.
1 year ago
Harrison Chase 76c7b1f677
Harrison/wandb (#1764)
Co-authored-by: Anish Shah <93145909+ash0ts@users.noreply.github.com>
1 year ago
Paul 5aa8ece211
Corrected small typo in error message. (#1791) 1 year ago
Harrison Chase f6d24d5740
fix bug with openai token count (#1806) 1 year ago
Harrison Chase b1c4480d7c
fix typing (#1807) 1 year ago
Daniel Chalef b6ba989f2f
Add request timeout to ChatOpenAI (#1798)
Add request_timeout field to ChatOpenAI. Defaults to 60s.

---------

Co-authored-by: Daniel Chalef <daniel.chalef@private.org>
1 year ago
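For illustration, a minimal sketch of the new field in use (assuming the `ChatOpenAI` API of this langchain era; the 60s default comes from the commit message above):

```python
from langchain.chat_models import ChatOpenAI
from langchain.schema import HumanMessage

# request_timeout caps how long each underlying OpenAI HTTP request may
# take, in seconds; per this commit it defaults to 60.
chat = ChatOpenAI(request_timeout=30)
print(chat([HumanMessage(content="Hello!")]).content)
```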
Ankush Gola 04acda55ec
Don't use dynamic api endpoint for Zapier NLA (#1803)
From Robert: "Right now the dynamic/ route for specifically the above
endpoints is acting on all providers a user has set up, not just the
provider for the supplied API key."
1 year ago
Harrison Chase 8e5c4ac867
bump version to 0.0.116 (#1788) 1 year ago
Aratako df8702fead
Small fix: Remove unused variable `summary_message_role` (#1789)
After the changes in #1783, `summary_message_role` is no longer used in
`ConversationSummaryBufferMemory`, so this PR removes it.
1 year ago
Harrison Chase d5d50c39e6
Harrison/azure embeddings (#1787)
Co-authored-by: Hemant <4627288+ghaccount@users.noreply.github.com>
1 year ago
Harrison Chase 1f18698b2a
Harrison/token buffer memory (#1786)
Co-authored-by: Aratako <127325395+Aratako@users.noreply.github.com>
1 year ago
Harrison Chase ef4945af6b
Harrison/chat token usage (#1785) 1 year ago
Harrison Chase 7de2ada3ea
Harrison/add source column (#1784)
Co-authored-by: Brian Graham <46691715+briangrahamww@users.noreply.github.com>
Co-authored-by: briangrahamww <brian.graham@ww.com>
1 year ago
Bernat Felip i Díaz 262d4cb9a8
Use embedding instead of embedding function in ElasticVectorStore (#1692)
While it might be a bit more restrictive, I find that using the
Embedding interface as an input for vector store creation is better
than an embedding function, because we can use bulk requests and, if
needed, the retry logic.

I have seen that some vector store implementations use Embedding while
others use an embedding function, so I don't know what the criteria are
for having one or the other. In my opinion, they should all just take an
Embedding, or else a more capable embedding function that accepts
multiple texts at once instead of one by one.

---------

Co-authored-by: Bernat Felip <bernat.felip@rea.ch>
1 year ago
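To illustrate the distinction the PR draws, a hedged sketch (constructor and parameter names per the Elasticsearch integration of this era; the local URL is a placeholder) — passing an Embeddings object rather than a bare function lets the store embed batches and issue bulk requests:

```python
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import ElasticVectorSearch

# An Embeddings object exposes embed_documents(texts), so the store can
# embed many texts in one call instead of one text at a time.
store = ElasticVectorSearch(
    elasticsearch_url="http://localhost:9200",  # placeholder cluster URL
    index_name="langchain-demo",
    embedding=OpenAIEmbeddings(),
)
store.add_texts(["first document", "second document"])
```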
Harrison Chase 951c158106
Harrison/summary message rol (#1783)
Co-authored-by: Aratako <127325395+Aratako@users.noreply.github.com>
1 year ago
Bao Nguyen 85e4dd7fc3
Fix wrong prompt in refine chain (#1770)
I got this during testing:

```
ValueError: Missing some input keys: {'existing_answer'}
```

Upon review, the initial prompt should be `QUESTION_PROMPT_SELECTOR`.

Co-authored-by: Bao Nguyen <bnguyen@roku.com>
1 year ago
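To make the fix concrete: in a refine chain, only the first pass sees just the question and context; every later pass must also receive the answer so far, which is where the missing `existing_answer` key comes from. A sketch of the two prompt shapes (templates are illustrative; the variable names match the error above):

```python
from langchain.prompts import PromptTemplate

# Initial pass: no prior answer exists yet.
question_prompt = PromptTemplate(
    input_variables=["context_str", "question"],
    template="Context:\n{context_str}\nQuestion: {question}\nAnswer:",
)

# Refine passes: must receive the running answer, hence `existing_answer`.
refine_prompt = PromptTemplate(
    input_variables=["question", "existing_answer", "context_str"],
    template=(
        "Question: {question}\nExisting answer: {existing_answer}\n"
        "Refine the answer using this additional context:\n{context_str}"
    ),
)
```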
Harrison Chase b1b4a4065a
change chat default (#1782)
Resolves https://github.com/hwchase17/langchain/issues/1532, resolves
https://github.com/hwchase17/langchain/issues/1652.
1 year ago
Huang Chongdi 08f23c95d9
add encoding parameter to ObsidianLoader (#1752) 1 year ago
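A one-line usage sketch (vault path and encoding value are illustrative):

```python
from langchain.document_loaders import ObsidianLoader

# The new encoding parameter controls how the vault's files are read.
docs = ObsidianLoader("/path/to/vault", encoding="utf-8").load()
```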
hitoshi44 3cf493b089
Fix Document & Expose StringPromptTemplate as a custom-prompt-template. (#1753)
Regarding [this
issue](https://github.com/hwchase17/langchain/issues/1754), the code in
the document [Creating a custom prompt
template](https://langchain.readthedocs.io/en/latest/modules/prompts/examples/custom_prompt_template.html)
is outdated and no longer functional.

To address this, I have made the following changes:

1. Updated the guide in the document to use `StringPromptTemplate`
instead of `BasePromptTemplate`.
2. Exposed `StringPromptTemplate` in `prompts/__init__.py` for easier
importing.
1 year ago
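A sketch of the updated pattern the guide now documents: subclass `StringPromptTemplate` and implement `format` (the class and field names below are illustrative):

```python
from langchain.prompts import StringPromptTemplate


class FunctionExplainerPromptTemplate(StringPromptTemplate):
    """Prompt that asks the model to explain a given function name."""

    def format(self, **kwargs) -> str:
        # All prompt inputs arrive as keyword arguments.
        return f"Explain what the function `{kwargs['function_name']}` does."


prompt = FunctionExplainerPromptTemplate(input_variables=["function_name"])
print(prompt.format(function_name="len"))
```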
hitoshi44 e635c86145
Slightly modified the docstring in `BasePromptTemplate` and `StringPromptTemplate`. (#1755)
Regarding [this
issue](https://github.com/hwchase17/langchain/issues/1754), the
`BasePromptTemplate` class docstring is a little outdated: subclasses
now need to implement the new `format_prompt` method.

As such, I have made some modifications to the docstring to bring it up
to date.

I tried to adhere to the established documentation style, and would
appreciate your taking a look at this PR.
1 year ago
Harrison Chase 779790167e
Harrison/add warning to openaichat (#1781) 1 year ago
Nils Durner 3161ced4bc
GPT-4 support (#1778) 1 year ago
hung_ng__ 3d6fcb85dc
Add load json prompt example (#1776)
Hi, I just want to add a PR with a prompt serialization example for
loading from JSON, so that it covers the same ground as loading from YAML.
1 year ago
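For reference, a serialized prompt loads the same way regardless of format; a sketch with an assumed file (its JSON contents shown in the comment):

```python
from langchain.prompts import load_prompt

# simple_prompt.json (illustrative) might contain:
# {"_type": "prompt", "input_variables": ["adjective"],
#  "template": "Tell me a {adjective} joke."}
prompt = load_prompt("simple_prompt.json")
print(prompt.format(adjective="funny"))
```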
LeoGrin 3701b2901e
use namespace argument in Pinecone constructor (#1757)
Fix #1756

Use the `namespace` argument of `Pinecone.from_existing_index` to set
the default value of `namespace` for other methods. This leads to more
predictable behavior and easier integration in chains.

For the test, I've added a line to delete and rebuild the
`langchain-demo` index at the beginning of the test. I'm not 100% sure
if it's a good idea but it makes the test reproducible.
1 year ago
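A sketch of the behavior after the fix (index and namespace names are placeholders):

```python
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import Pinecone

docsearch = Pinecone.from_existing_index(
    index_name="langchain-demo",
    embedding=OpenAIEmbeddings(),
    namespace="my-namespace",  # now also the default for later calls
)
# No need to repeat namespace="my-namespace" on every query anymore.
docs = docsearch.similarity_search("pots for plants")
```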
Ben Gahtan 280cb4160d
Update tool.py (#1760)
Fixed typo that said the Wikipedia tool was using Wolfram Alpha (instead
of Wikipedia)
1 year ago
Kevin 80d8db5f60
Add service account support to Google Drive (#1761)
Having service account support in the Drive document loader would be
nice.

This is already present in the YouTube loader:

cb646082ba/langchain/document_loaders/youtube.py (L76-L78)
1 year ago
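A hedged sketch of what the requested support looks like (the `service_account_key` field mirrors the YouTube loader linked above; treat its presence on the Drive loader as an assumption):

```python
from pathlib import Path

from langchain.document_loaders import GoogleDriveLoader

loader = GoogleDriveLoader(
    folder_id="1a2b3c4d5e",  # illustrative Drive folder id
    # Authenticate as a service account instead of an end user.
    service_account_key=Path("~/.credentials/keys.json").expanduser(),
)
docs = loader.load()
```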
Piyush Jain 1a8790d808
Corrects copyright year (#1762)
Corrected copyright year.
1 year ago
Eric Zhu 34840f3aee
AzureChatOpenAI for Azure Open AI's ChatGPT API (#1673)
Add support for Azure OpenAI's ChatGPT API, which uses ChatML markup to
format messages instead of objects.

Related issues: #1591, #1659
1 year ago
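A minimal construction sketch (parameter names per the Azure integration of this era; the endpoint, deployment, and API version are placeholders):

```python
from langchain.chat_models import AzureChatOpenAI
from langchain.schema import HumanMessage

chat = AzureChatOpenAI(
    openai_api_base="https://<your-resource>.openai.azure.com/",
    openai_api_version="2023-03-15-preview",  # placeholder version
    deployment_name="<your-chat-deployment>",
    openai_api_key="<your-key>",
)
print(chat([HumanMessage(content="Hello from Azure!")]).content)
```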
Harrison Chase 8685d53adc
querying tabular data (#1758) 1 year ago
Harrison Chase 2f6833d433
hotfix (#1742) 1 year ago
Harrison Chase dd90fd02d5
Harrison/move docs (#1741) 1 year ago
Harrison Chase 07766a69f3
move docs (#1740) 1 year ago
Harrison Chase aa854988bf
bump version to 114 (#1739) 1 year ago
Harrison Chase 96ebe98dc2
Harrison/latex splitter (#1738)
Co-authored-by: Aidan Holland <thehappydinoa@gmail.com>
Co-authored-by: Jan de Boer <44832123+Janldeboer@users.noreply.github.com>
1 year ago
Harrison Chase 45f05fc939
Harrison/blackboard loader (#1737)
Co-authored-by: Aidan Holland <thehappydinoa@gmail.com>
1 year ago
Vincent Liao cf9c3f54f7
docs: add docs link to agent toolkits (#1735)
New to LangChain, I was a bit confused about where to find the toolkits
section when reading the `agent/key_concepts` docs. I added a short link
that points to the how-to section.
1 year ago
Merbin J Anselm fbc0c85b90
fix: agent json parser fails with text in suffix (#1734)
While testing out `VectorDBQA` as a `Tool` for one of my conversations,
I happened to get a response from the LLM (OpenAI) like this

<code>
Could not parse LLM output: Here's a response using the Product Search
tool:

```json
{
    "action": "Product Search",
    "action_input": "pots for plants"
}
```

This will allow you to search for pots for your plants and find a
variety of options that are available for purchase. You can use this
information to choose the pots that best fit your needs and preferences.
</code>

i.e., the response had text before and *after* the expected JSON,
leading to a `JSONDecodeError`. It's fixed now by removing the unwanted
text after the closing '```' fence.

The error I encountered in this Jupyter Notebook -
[link](https://github.com/anselm94/chatbot-llm-ecommerce/blob/main/chatcommerce.ipynb)

<details>
<summary>Error encountered</summary>

````
---------------------------------------------------------------------------
JSONDecodeError                           Traceback (most recent call last)
File ~/Git/chatbot-llm-ecommerce/.venv/lib/python3.11/site-packages/langchain/agents/conversational_chat/base.py:104, in ConversationalChatAgent._extract_tool_and_input(self, llm_output)
    103 try:
--> 104     response = self.output_parser.parse(llm_output)
    105     return response["action"], response["action_input"]

File ~/Git/chatbot-llm-ecommerce/.venv/lib/python3.11/site-packages/langchain/agents/conversational_chat/base.py:49, in AgentOutputParser.parse(self, text)
     48 cleaned_output = cleaned_output.strip()
---> 49 response = json.loads(cleaned_output)
     50 return {"action": response["action"], "action_input": response["action_input"]}

File /opt/homebrew/Cellar/python@3.11/3.11.2_1/Frameworks/Python.framework/Versions/3.11/lib/python3.11/json/__init__.py:346, in loads(s, cls, object_hook, parse_float, parse_int, parse_constant, object_pairs_hook, **kw)
    343 if (cls is None and object_hook is None and
    344         parse_int is None and parse_float is None and
    345         parse_constant is None and object_pairs_hook is None and not kw):
--> 346     return _default_decoder.decode(s)
    347 if cls is None:

File /opt/homebrew/Cellar/python@3.11/3.11.2_1/Frameworks/Python.framework/Versions/3.11/lib/python3.11/json/decoder.py:340, in JSONDecoder.decode(self, s, _w)
    339 if end != len(s):
--> 340     raise JSONDecodeError("Extra data", s, end)
    341 return obj

JSONDecodeError: Extra data: line 5 column 1 (char 74)

During handling of the above exception, another exception occurred:

ValueError                                Traceback (most recent call last)
Cell In[22], line 1
----> 1 ask_ai.run("Yes. I need pots for my plants")

File ~/Git/chatbot-llm-ecommerce/.venv/lib/python3.11/site-packages/langchain/chains/base.py:213, in Chain.run(self, *args, **kwargs)
    211 if len(args) != 1:
    212     raise ValueError("`run` supports only one positional argument.")
--> 213 return self(args[0])[self.output_keys[0]]
    215 if kwargs and not args:
    216     return self(kwargs)[self.output_keys[0]]

File ~/Git/chatbot-llm-ecommerce/.venv/lib/python3.11/site-packages/langchain/chains/base.py:116, in Chain.__call__(self, inputs, return_only_outputs)
    114 except (KeyboardInterrupt, Exception) as e:
    115     self.callback_manager.on_chain_error(e, verbose=self.verbose)
--> 116     raise e
    117 self.callback_manager.on_chain_end(outputs, verbose=self.verbose)
    118 return self.prep_outputs(inputs, outputs, return_only_outputs)

File ~/Git/chatbot-llm-ecommerce/.venv/lib/python3.11/site-packages/langchain/chains/base.py:113, in Chain.__call__(self, inputs, return_only_outputs)
    107 self.callback_manager.on_chain_start(
    108     {"name": self.__class__.__name__},
    109     inputs,
    110     verbose=self.verbose,
    111 )
    112 try:
--> 113     outputs = self._call(inputs)
    114 except (KeyboardInterrupt, Exception) as e:
    115     self.callback_manager.on_chain_error(e, verbose=self.verbose)

File ~/Git/chatbot-llm-ecommerce/.venv/lib/python3.11/site-packages/langchain/agents/agent.py:499, in AgentExecutor._call(self, inputs)
    497 # We now enter the agent loop (until it returns something).
    498 while self._should_continue(iterations):
--> 499     next_step_output = self._take_next_step(
    500         name_to_tool_map, color_mapping, inputs, intermediate_steps
    501     )
    502 if isinstance(next_step_output, AgentFinish):
    503     return self._return(next_step_output, intermediate_steps)

File ~/Git/chatbot-llm-ecommerce/.venv/lib/python3.11/site-packages/langchain/agents/agent.py:409, in AgentExecutor._take_next_step(self, name_to_tool_map, color_mapping, inputs, intermediate_steps)
    404 """Take a single step in the thought-action-observation loop.
    405
    406 Override this to take control of how the agent makes and acts on choices.
    407 """
    408 # Call the LLM to see what to do.
--> 409 output = self.agent.plan(intermediate_steps, **inputs)
    410 # If the tool chosen is the finishing tool, then we end and return.
    411 if isinstance(output, AgentFinish):

File ~/Git/chatbot-llm-ecommerce/.venv/lib/python3.11/site-packages/langchain/agents/agent.py:105, in Agent.plan(self, intermediate_steps, **kwargs)
     94 """Given input, decided what to do.
     95
     96 Args:
    (...)
    102     Action specifying what tool to use.
    103 """
    104 full_inputs = self.get_full_inputs(intermediate_steps, **kwargs)
--> 105 action = self._get_next_action(full_inputs)
    106 if action.tool == self.finish_tool_name:
    107     return AgentFinish({"output": action.tool_input}, action.log)

File ~/Git/chatbot-llm-ecommerce/.venv/lib/python3.11/site-packages/langchain/agents/agent.py:67, in Agent._get_next_action(self, full_inputs)
     65 def _get_next_action(self, full_inputs: Dict[str, str]) -> AgentAction:
     66     full_output = self.llm_chain.predict(**full_inputs)
---> 67     parsed_output = self._extract_tool_and_input(full_output)
     68     while parsed_output is None:
     69         full_output = self._fix_text(full_output)

File ~/Git/chatbot-llm-ecommerce/.venv/lib/python3.11/site-packages/langchain/agents/conversational_chat/base.py:107, in ConversationalChatAgent._extract_tool_and_input(self, llm_output)
    105     return response["action"], response["action_input"]
    106 except Exception:
--> 107     raise ValueError(f"Could not parse LLM output: {llm_output}")

ValueError: Could not parse LLM output: Here's a response using the
Product Search tool:

    ```json
    {
        "action": "Product Search",
        "action_input": "pots for plants"
    }
    ```

This will allow you to search for pots for your plants and find a
variety of options that are available for purchase. You can use this
information to choose the pots that best fit your needs and preferences.
````

</details>
1 year ago
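The essence of the fix, as a standalone sketch rather than the library's exact code: keep only the fenced JSON block and discard any prose around it before calling `json.loads`.

```python
import json


def parse_agent_output(llm_output: str) -> dict:
    """Extract the JSON action from a reply that may contain prose
    before and/or after the fenced JSON block."""
    cleaned = llm_output.strip()
    if "```json" in cleaned:
        cleaned = cleaned.split("```json")[1]  # drop text before the block
    if "```" in cleaned:
        cleaned = cleaned.split("```")[0]  # drop text after the block
    response = json.loads(cleaned.strip())
    return {"action": response["action"], "action_input": response["action_input"]}
```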
Harrison Chase 276940fd9b
Harrison/official method (#1728)
Co-authored-by: Aratako <127325395+Aratako@users.noreply.github.com>
1 year ago
Piyush Jain cdff6c8181
Sagemaker Endpoint LLM (#1686)
Updates #965

---------

Co-authored-by: Nimisha Mehta <116048415+nimimeht@users.noreply.github.com>
Co-authored-by: Harrison Chase <harrisonchase@Harrisons-MBP.attlocal.net>
1 year ago
alekhyablue cd45adbea2
adding new agent types in comments (#1711) 1 year ago
Mario Kostelac aff44d0a98
(OpenAI) Add model_name to LLMResult.llm_output (#1713)
Given that different models have very different latencies and pricing,
it's beneficial to pass along information about the model that generated
the response. Such information allows implementing custom callback
managers and tracking usage and price per model.

Addresses https://github.com/hwchase17/langchain/issues/1557.
1 year ago
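For example, after this change a caller can attribute usage to the model straight from the result (sketched against the `generate` API of this era):

```python
from langchain.llms import OpenAI

llm = OpenAI(model_name="text-davinci-003")
result = llm.generate(["Tell me a joke."])

# llm_output now carries the model name alongside the existing
# token_usage dict, enabling per-model cost tracking in callbacks.
print(result.llm_output["model_name"])   # -> "text-davinci-003"
print(result.llm_output["token_usage"])  # e.g. {"total_tokens": 42, ...}
```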
libra 8a95fdaee1
Fix all the bug in init Tool in docs (#1725)
Fix all the examples in the docs that init `Tool`.

Tested by rendering with Jupyter.
1 year ago
Alexandros Mavrogiannis 5d8dc83ede
Bump duckdb-engine to 0.7.0 (#1726)
Resolves https://github.com/hwchase17/langchain/issues/1272
Resolves https://github.com/hwchase17/langchain/issues/1578
1 year ago
Daniel Chalef b157e0c1c3
Add HTML document_loader that includes page title metadata (#1720)
This `BSHTMLLoader` document loader loads an HTML document, extracts
its text, and adds the page title to the returned Document's metadata.
The loader uses the already-installed bs4 package to extract both the
text content and the page title.

Included in this PR is an example HTML file and an integration test that
tests against this file.

---------

Co-authored-by: Daniel Chalef <daniel.chalef@private.org>
1 year ago
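A usage sketch (file path is illustrative; the title lands in metadata per the PR description):

```python
from langchain.document_loaders import BSHTMLLoader

docs = BSHTMLLoader("example.html").load()
print(docs[0].metadata)           # e.g. {"source": "example.html", "title": "..."}
print(docs[0].page_content[:100])
```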
Harrison Chase 40e9488055
fix async in agent (#1723) 1 year ago
jerwelborn 55efbb8a7e
pydantic/json parsing (#1722)
```
from typing import List

from pydantic import BaseModel, Field

from langchain.llms import OpenAI
from langchain.output_parsers import PydanticOutputParser
from langchain.prompts import PromptTemplate


class Joke(BaseModel):
    setup: str = Field(description="question to set up a joke")
    punchline: str = Field(description="answer to resolve the joke")

joke_query = "Tell me a joke."

# Or, an example with compound type fields.
#class FloatArray(BaseModel):
#    values: List[float] = Field(description="list of floats")
#
#float_array_query = "Write out a few terms of fibonacci."

model = OpenAI(model_name='text-davinci-003', temperature=0.0)
parser = PydanticOutputParser(pydantic_object=Joke)
prompt = PromptTemplate(
    template="Answer the user query.\n{format_instructions}\n{query}\n",
    input_variables=["query"],
    partial_variables={"format_instructions": parser.get_format_instructions()}
)

_input = prompt.format_prompt(query=joke_query)
print("Prompt:\n", _input.to_string())
output = model(_input.to_string())
print("Completion:\n", output)
parsed_output = parser.parse(output)
print("Parsed completion:\n", parsed_output)
```

```
Prompt:
 Answer the user query.
The output should be formatted as a JSON instance that conforms to the JSON schema below.  For example, the object {"foo":  ["bar", "baz"]} conforms to the schema {"foo": {"description": "a list of strings field", "type": "string"}}.

Here is the output schema:
---
{"setup": {"description": "question to set up a joke", "type": "string"}, "punchline": {"description": "answer to resolve the joke", "type": "string"}}
---

Tell me a joke.

Completion:
 {"setup": "Why don't scientists trust atoms?", "punchline": "Because they make up everything!"}

Parsed completion:
 setup="Why don't scientists trust atoms?" punchline='Because they make up everything!'
```

Of course, this works only with LMs of sufficient capacity. DaVinci is
reliable, but not always.

---------

Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
1 year ago
Alex Strick van Linschoten d6bbf395af
Loosen PyYAML dependency (#1698)
Hitting some dependency issues relating to this strict pinning. Unsure
of the knock-on effects, but wanted to propose loosening it by a couple
of versions.
1 year ago
Jonathan Pedoeem 606605925d
Adding ability to `return_pl_id` to all PromptLayer Models in LangChain (#1699)
PromptLayer now has support for [several different tracking
features](https://magniv.notion.site/Track-4deee1b1f7a34c1680d085f82567dab9).
In order to use any of these features, you need to have a request id
associated with the request.

In this PR we add a boolean argument called `return_pl_id`, which will
add `pl_request_id` to the `generation_info` dictionary associated with
a generation.

We also updated the relevant documentation.
1 year ago
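A sketch of the new flag in use (per the PromptLayer integration of this era; the `generation_info` key is as described in the PR):

```python
from langchain.llms import PromptLayerOpenAI

llm = PromptLayerOpenAI(return_pl_id=True)
result = llm.generate(["Tell me a joke."])

for generations in result.generations:
    for gen in generations:
        # pl_request_id ties this generation back to the PromptLayer
        # request, so the tracking features can be applied to it.
        print(gen.generation_info["pl_request_id"])
```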