Adds Google Search integration with [Serper](https://serper.dev), a
low-cost alternative to SerpAPI (10x cheaper + generous free tier).
Includes documentation, tests and examples. Hopefully I am not missing
anything.
Developers can sign up for a free account at
[serper.dev](https://serper.dev) and obtain an API key.
## Usage
```python
import os

from langchain.agents import initialize_agent, Tool
from langchain.llms.openai import OpenAI
from langchain.utilities import GoogleSerperAPIWrapper

os.environ["SERPER_API_KEY"] = ""
os.environ["OPENAI_API_KEY"] = ""

llm = OpenAI(temperature=0)
search = GoogleSerperAPIWrapper()
tools = [
    Tool(
        name="Intermediate Answer",
        func=search.run,
    )
]
self_ask_with_search = initialize_agent(tools, llm, agent="self-ask-with-search", verbose=True)
self_ask_with_search.run("What is the hometown of the reigning men's U.S. Open champion?")
```
### Output
```
> Entering new AgentExecutor chain...
Yes.
Follow up: Who is the reigning men's U.S. Open champion?
Intermediate answer: Current champions Carlos Alcaraz, 2022 men's singles champion.
Follow up: Where is Carlos Alcaraz from?
Intermediate answer: El Palmar, Spain
So the final answer is: El Palmar, Spain
> Finished chain.
'El Palmar, Spain'
```
This PR updates the usage instructions for PromptLayerOpenAI in
LangChain's documentation. The updated instructions provide more detail
and conform better to the style of other LLM integration documentation
pages.
No code changes were made in this PR, only improvements to the
documentation. This update will make it easier for users to understand
how to use `PromptLayerOpenAI`.
Currently the chain gets the column names and types on one side and the
example rows on the other. It is easier for the LLM to read the table
information if the column names and examples are shown together, so that
it can easily understand which columns the examples refer to. For an
instantiation of this, please refer to the changes in the
`sqlite.ipynb` notebook.
Also changed `eval` to `ast.literal_eval` when interpreting the results
from the sample-row query, since it is better practice.
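For a minimal illustration of the difference (not code from this PR; the row string is hypothetical):
```python
import ast

row = "('Alice', 42)"  # hypothetical string returned by the sample-row query

# ast.literal_eval parses only Python literals, so it cannot execute code.
print(ast.literal_eval(row))  # ('Alice', 42)

# eval(row) would give the same result here, but would also happily execute
# something like "__import__('os').system(...)" if it appeared in the data.
```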
---------
Co-authored-by: Francisco Ingham <fpingham@gmail.com>
The provided example uses the default `max_length` of `20` tokens, which
leads to the example generation getting cut off. 20 tokens is way too
short to show CoT reasoning, so I boosted it to `64`.
Without knowing HF's API well, it can be hard to figure out just where
those `model_kwargs` come from, and `max_length` is a super critical
one.
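For context, a minimal sketch of where `max_length` gets passed (the `repo_id` here is just an example, not necessarily the one in the docs):
```python
from langchain.llms import HuggingFaceHub

# model_kwargs are forwarded to the Hugging Face Inference API call;
# max_length caps the generated tokens, so the default of 20 truncates CoT output.
llm = HuggingFaceHub(
    repo_id="google/flan-t5-xl",  # example model
    model_kwargs={"temperature": 1e-10, "max_length": 64},
)
```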
Co-authored-by: Andrew White <white.d.andrew@gmail.com>
Co-authored-by: Harrison Chase <harrisonchase@Harrisons-MBP.attlocal.net>
Co-authored-by: Peng Qu <82029664+pengqu123@users.noreply.github.com>
Supporting asyncio in langchain primitives allows users to run them
concurrently and creates more seamless integration with
asyncio-supported frameworks (FastAPI, etc.).
Summary of changes:
**LLM**
* Add `agenerate` and `_agenerate`
* Implement in OpenAI by leveraging `client.Completions.acreate`
**Chain**
* Add `arun`, `acall`, `_acall`
* Implement them in `LLMChain` and `LLMMathChain` for now
**Agent**
* Refactor and leverage async chain and llm methods
* Add ability for `Tools` to contain async coroutine
* Implement async SerpAPI `arun`
Create demo notebook.
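For a rough sense of the intended usage, a minimal sketch along the lines of the demo notebook (not its exact contents):
```python
import asyncio

from langchain.chains import LLMChain
from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate


async def main():
    llm = OpenAI(temperature=0)
    prompt = PromptTemplate(
        input_variables=["product"],
        template="What is a good name for a company that makes {product}?",
    )
    chain = LLMChain(llm=llm, prompt=prompt)
    # arun lets several generations run concurrently instead of sequentially.
    results = await asyncio.gather(
        chain.arun(product="socks"),
        chain.arun(product="toothpaste"),
    )
    print(results)


asyncio.run(main())
```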
Open questions:
* Should all the async stuff go in separate classes? I've seen both
patterns (keeping the same class and having async and sync methods vs.
having class separation)
PR to fix outdated environment details in the docs; see issue #897.
I added code comments as pointers to where users go to get API keys, and
where they can find the relevant environment variable.
Signed-off-by: Filip Haltmayer <filip.haltmayer@zilliz.com>
Signed-off-by: Frank Liu <frank.liu@zilliz.com>
Co-authored-by: Filip Haltmayer <81822489+filip-halt@users.noreply.github.com>
Co-authored-by: Frank Liu <frank@frankzliu.com>
It's generally considered to be a good practice to pin dependencies to
prevent surprise breakages when a new version of a dependency is
released. This commit adds the ability to pin dependencies when loading
from LangChainHub.
Centralizing this logic and using urllib fixes an issue identified by
some Windows users, highlighted in this video:
https://youtu.be/aJ6IQUh8MLQ?t=537
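If I'm reading the change right, usage might look like the sketch below; treat the exact `lc@<ref>` URL syntax as an assumption, and the ref value is made up for illustration:
```python
from langchain.chains import load_chain

# Unpinned: fetches whatever is currently on the hub's default branch.
chain = load_chain("lc://chains/llm-math/chain.json")

# Pinned (assumed syntax): an lc@<ref> prefix selects a specific commit
# or tag; "2726f66" is a hypothetical ref.
pinned = load_chain("lc@2726f66://chains/llm-math/chain.json")
```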
The agents usually benefit from understanding what the data looks like
in order to filter effectively. Sending just one row in the table info
allows the agent to understand the data before querying, which yields
better results.
---------
Co-authored-by: Francisco Ingham <fpingham@gmail.com>
On the [Getting Started
page](https://langchain.readthedocs.io/en/latest/modules/prompts/getting_started.html)
for prompt templates, I believe the very last example
```python
print(dynamic_prompt.format(adjective=long_string))
```
should actually be
```python
print(dynamic_prompt.format(input=long_string))
```
The existing example produces `KeyError: 'input'`, as expected.
***
On the [Create a custom prompt
template](https://langchain.readthedocs.io/en/latest/modules/prompts/examples/custom_prompt_template.html#id1)
page, I believe the line
```python
Function Name: {kwargs["function_name"]}
```
should actually be
```python
Function Name: {kwargs["function_name"].__name__}
```
The existing example produces the prompt:
```
Given the function name and source code, generate an English language explanation of the function.
Function Name: <function get_source_code at 0x7f907bc0e0e0>
Source Code:
def get_source_code(function_name):
    # Get the source code of the function
    return inspect.getsource(function_name)
Explanation:
```
***
On the [Example
Selectors](https://langchain.readthedocs.io/en/latest/modules/prompts/examples/example_selectors.html)
page, the first example does not define `example_prompt`, which is also
subtly different from previous example prompts used. For user
convenience, I suggest including
```python
example_prompt = PromptTemplate(
    input_variables=["input", "output"],
    template="Input: {input}\nOutput: {output}",
)
```
in the code to be copy-pasted.
tl;dr: input -> word, output -> antonym, rename to dynamic_prompt
consistently
The provided code in this example doesn't run, because the keys are
`word` and `antonym`, rather than `input` and `output`.
Also, the `ExampleSelector`-based prompt is named `few_shot_prompt` when
defined and `dynamic_prompt` in the follow-up example. The former name
is less descriptive and collides with an earlier example, so I opted for
the latter.
Thanks for making a really cool library!
For using the Azure OpenAI API, we need to set multiple env vars. But as
can be seen in the openai package
[here](48b69293a3/openai/__init__.py (L35)),
the env var for setting the base URL is named `OPENAI_API_BASE`, not
`OPENAI_API_BASE_URL`. This PR fixes that part in the documentation.
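For reference, the variables the openai package reads look like this (all values are placeholders):
```python
import os

# Note the name: OPENAI_API_BASE, not OPENAI_API_BASE_URL.
os.environ["OPENAI_API_TYPE"] = "azure"
os.environ["OPENAI_API_BASE"] = "https://<your-resource>.openai.azure.com"
os.environ["OPENAI_API_KEY"] = ""
os.environ["OPENAI_API_VERSION"] = "2022-12-01"  # example API version
```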
I originally had only modified `from_llm` to include the prompt, but I
realized that if the keys used in the custom prompt didn't match the
default prompt's, it wouldn't work because of how `apply` works. So I
made some changes to the `evaluate` method: it checks whether the prompt
is the default, and if not, whether the input keys match the prompt
keys, updating the inputs appropriately.
Let me know if there is a better way to do this.
Also added the custom prompt to the QA eval notebook.
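Roughly, the usage this enables looks like the sketch below (the prompt wording is my own; the default prompt expects the keys "query", "answer", and "result"):
```python
from langchain.evaluation.qa import QAEvalChain
from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate

# A custom grading prompt; using different key names is what triggers
# the new key-matching logic in evaluate.
custom_prompt = PromptTemplate(
    input_variables=["query", "answer", "result"],
    template=(
        "Question: {query}\nReference answer: {answer}\n"
        "Student answer: {result}\nGrade (CORRECT or INCORRECT):"
    ),
)
eval_chain = QAEvalChain.from_llm(llm=OpenAI(temperature=0), prompt=custom_prompt)
```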
Add a chain that applies a prompt to all inputs and then returns not
only an answer but also a score for it.
Add examples for question answering and question answering with sources.
Small quick fix:
Suggest making the order of the menu the same as it is written on the
page (Getting Started -> Key Concepts). Previously, the menu order did
not match the page. Not sure if this is the only place the menu is
affected.
Mismatch is found here:
https://langchain.readthedocs.io/en/latest/modules/llms.html
- Add support for local build and linkchecking of docs
- Add GitHub Action to automatically check links prior to publication
- Minor reformat of Contributing readme
- Fix existing broken links
Co-authored-by: Hunter Gerlach <hunter@huntergerlach.com>
Co-authored-by: Hunter Gerlach <HunterGerlach@users.noreply.github.com>
I noticed (after publication) that the getting_started link on the main
page was borked. This should fix it.
Co-authored-by: Hunter Gerlach <hunter@huntergerlach.com>
Big docs refactor! Motivation is to make it easier for people to find
resources they are looking for. To accomplish this, there are now three
main sections:
- Getting Started: steps for getting started, walking through most core
functionality
- Modules: these are different modules of functionality that langchain
provides. Each part here has a "getting started", "how to", "key
concepts" and "reference" section (except in a few select cases where it
didn't easily fit).
- Use Cases: this is to separate use cases (like summarization, question
answering, evaluation, etc) from the modules, and provide a different
entry point to the code base.
There is also a full reference section, as well as extra resources
(glossary, gallery, etc)
Co-authored-by: Shreya Rajpal <ShreyaR@users.noreply.github.com>
I was honored by the Twitter mention, so used PyCharm to try and... help
the docs even a little bit.
Mostly typos and spelling corrections.
PyCharm really complains about "really good" being used all the time and
recommends alternative wordings haha
Hi! This PR adds support for the Azure OpenAI service to LangChain.
I've tried to follow the contributing guidelines.
Co-authored-by: Keiji Kanazawa <{ID}+{username}@users.noreply.github.com>
Created a generic SQLAlchemyCache class to plug in any database
supported by SQLAlchemy. (I am using Postgres.)
I also based the SQLiteCache class on this SQLAlchemyCache class.
As a side note, I'm questioning the need for two distinct classes,
LLMCache and FullLLMCache. Shouldn't we merge both?
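For instance, pointing the cache at Postgres might look like this (the connection string is a placeholder):
```python
import langchain
from sqlalchemy import create_engine

from langchain.cache import SQLAlchemyCache

# Any SQLAlchemy-supported engine works; Postgres shown as an example.
engine = create_engine("postgresql://user:password@localhost:5432/langchain")
langchain.llm_cache = SQLAlchemyCache(engine)
```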
Nothing of substance was changed. I simply corrected a few minor errors
that could slow down the reader.
Co-authored-by: Hunter Gerlach <hunter@huntergerlach.com>
Love the project, a ton of fun!
I think the PR is pretty self-explanatory, happy to make any changes! I
am working on using it in an `LLMBashChain` and may update as that
progresses.
Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
With the original prompt, the chain keeps trying to jump straight to
doing the math directly, without first looking up the ages. With this
two-part question, it behaves more as intended:
> Entering new ZeroShotAgent chain...
How old is Olivia Wilde's boyfriend? What is that number raised to the
0.23 power?
Thought: I need to find out how old Olivia Wilde's boyfriend is, and
then use a calculator to calculate the power.
Action: Search
Action Input: Olivia Wilde's boyfriend age
Observation: While Wilde, 37, and Styles, 27, have both kept a low
profile when it comes to talking about their relationship, Wilde did
address their ...
Thought: Olivia Wilde's boyfriend is 27 years old.
Action: Calculator
Action Input: 27^0.23
> Entering new LLMMathChain chain...
27^0.23
```python
import math
print(math.pow(27, 0.23))
```
Answer: 2.1340945944237553
> Finished LLMMathChain chain.
Observation: Answer: 2.1340945944237553
Thought: I now know the final answer.
Final Answer: 2.1340945944237553
> Finished ZeroShotAgent chain.
Add MemoryChain and ConversationChain as chains that take a docstore in
addition to the prompt, and use the docstore to stuff context into the
prompt. This can be used to have an ongoing conversation with a chatbot.
Probably needs a bit of refactoring for code quality.
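A minimal sketch of the intended conversational usage (my own example, not from the PR):
```python
from langchain.chains import ConversationChain
from langchain.llms import OpenAI

conversation = ConversationChain(llm=OpenAI(temperature=0), verbose=True)
# Earlier turns are stuffed back into the prompt, so the model sees the history.
conversation.predict(input="Hi, my name is Sam.")
print(conversation.predict(input="What is my name?"))
```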
Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
Also updated docs, and noticed an issue with the `add_texts` method on
VectorStores that I had missed before: the `metadatas` arg should be
required, to match the classmethod which initializes the VectorStores
(the `add_example` methods break otherwise in the ExampleSelectors).
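Concretely, the mismatch is between signatures along these lines (paraphrased, not the exact code):
```python
from typing import Iterable, List, Optional


class VectorStore:
    @classmethod
    def from_texts(cls, texts: List[str], embedding, metadatas: Optional[List[dict]] = None):
        """The classmethod initializer accepts per-text metadata..."""

    def add_texts(self, texts: Iterable[str], metadatas: Optional[List[dict]] = None) -> List[str]:
        """...so add_texts needs a matching metadatas argument, or the
        add_example helpers in the ExampleSelectors break."""
```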
Without the print on the `llm` call, a new user sees no visible effect
when just getting started. The assumption here is that the new user is
running this in a new sandbox script file or REPL via copy-paste.
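That is, the getting-started snippet now reads something like:
```python
from langchain.llms import OpenAI

llm = OpenAI(temperature=0.9)
text = "What would be a good company name for a company that makes colorful socks?"
# print() makes the completion visible when this runs as a script,
# not just when the value is echoed in a REPL.
print(llm(text))
```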