Commit Graph

485 Commits

Author SHA1 Message Date
Harrison Chase
8df6b68093
fix length based example selector (#862) 2023-02-02 22:06:56 -08:00
Harrison Chase
3f48eed5bd
Harrison/milvus (#856)
Signed-off-by: Filip Haltmayer <filip.haltmayer@zilliz.com>
Signed-off-by: Frank Liu <frank.liu@zilliz.com>
Co-authored-by: Filip Haltmayer <81822489+filip-halt@users.noreply.github.com>
Co-authored-by: Frank Liu <frank@frankzliu.com>
2023-02-02 22:05:47 -08:00
Ankush Gola
933441cc52
Add retry to OpenAI llm (#849)
Adds the ability to retry when certain exceptions are raised by
`openai.Completion.create`.

Test plan: ran all OpenAI integration tests.
2023-02-02 19:56:26 -08:00
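A minimal sketch of the retry behavior described in #849 above, assuming the `tenacity` library and the pre-1.0 `openai` error classes; the backoff settings and wrapper name are illustrative, not the values used in the commit.

```python
import openai
from tenacity import retry, retry_if_exception_type, stop_after_attempt, wait_exponential

# Retry transient OpenAI failures with exponential backoff (illustrative settings).
@retry(
    retry=retry_if_exception_type(
        (openai.error.RateLimitError, openai.error.APIConnectionError, openai.error.Timeout)
    ),
    wait=wait_exponential(multiplier=1, min=1, max=60),
    stop=stop_after_attempt(6),
)
def completion_with_retry(**kwargs):
    # tenacity re-invokes this call whenever one of the listed exceptions is raised.
    return openai.Completion.create(**kwargs)
```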
kahkeng
4a8f5cdf4b
Add alternative token-based text splitter (#816)
This does not involve a separator, and will naively chunk input text at
the appropriate boundaries in token space.

This is helpful when we have strict token length limits, need the specified
chunk size to be followed exactly, and can't rely on aggressive separators
like spaces to guarantee the absence of long strings.

CharacterTextSplitter will let these strings through without splitting
them, which could cause overflow errors downstream.

Splitting at arbitrary token boundaries is not ideal, but this is hopefully
mitigated by a decent overlap quantity. It also yields chunks with exactly
the desired number of tokens, instead of sometimes overcounting when shorter
strings are concatenated.

Potentially also helps with #528.
2023-02-02 19:55:13 -08:00
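A rough sketch of the token-boundary chunking idea described above, assuming `tiktoken` for tokenization; the function name, encoding, and defaults are illustrative rather than the splitter added in #816.

```python
import tiktoken

def split_by_tokens(text: str, chunk_size: int = 512, chunk_overlap: int = 64,
                    encoding_name: str = "gpt2") -> list[str]:
    """Naively chunk text at token boundaries with a fixed overlap."""
    enc = tiktoken.get_encoding(encoding_name)
    tokens = enc.encode(text)
    chunks = []
    start = 0
    while start < len(tokens):
        # Each chunk is at most chunk_size tokens, so strict token limits are respected.
        chunks.append(enc.decode(tokens[start : start + chunk_size]))
        # Step forward by chunk_size - chunk_overlap so consecutive chunks overlap.
        start += chunk_size - chunk_overlap
    return chunks
```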
Harrison Chase
523ad2e6bd
vercel deployments (#850) 2023-02-02 19:54:09 -08:00
Harrison Chase
fc0cfd7d1f
docs (#848) 2023-02-02 11:35:36 -08:00
Harrison Chase
4d32441b86
bump version to 0076 (#847) 2023-02-02 10:05:39 -08:00
Harrison Chase
23d5f64bda
Harrison/ngram example (#846)
Co-authored-by: Sean Spriggens <ssprigge@syr.edu>
2023-02-02 09:44:42 -08:00
Harrison Chase
0de55048b7
return code for pal (#844) 2023-02-02 08:47:20 -08:00
Harrison Chase
d564308e0f
rfc: instruct embeddings (#811)
Co-authored-by: seanaedmiston <seane999@gmail.com>
2023-02-02 08:44:02 -08:00
Nick Furlotte
576609e665
Update PAL to allow passing local and global context to PythonREPL (#774)
Passing additional variables to the Python environment can be useful, for
example, if you want to generate code to analyze a dataset.

I also added a tracker for the executed code - `code_history`.
2023-02-02 08:34:23 -08:00
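A simplified, hedged sketch of passing local and global context into an exec-based REPL, plus the `code_history` tracker mentioned above; the class and argument names are illustrative, not the exact API from #774.

```python
from typing import Optional

class SimplePythonREPL:
    """Exec-based REPL that accepts caller-supplied globals/locals (illustrative)."""

    def __init__(self, _globals: Optional[dict] = None, _locals: Optional[dict] = None):
        self.globals = _globals if _globals is not None else {}
        self.locals = _locals if _locals is not None else {}
        self.code_history: list[str] = []  # record of every executed snippet

    def run(self, code: str) -> str:
        self.code_history.append(code)
        exec(code, self.globals, self.locals)  # generated code sees the injected variables
        return str(self.locals.get("result", ""))

# Example: expose a dataset to generated analysis code.
repl = SimplePythonREPL(_locals={"dataset": [1, 2, 3, 4]})
print(repl.run("result = sum(dataset) / len(dataset)"))  # -> 2.5
```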
Harrison Chase
3f952eb597
add from string method (#820) 2023-02-02 08:23:54 -08:00
Ikko Eltociear Ashimine
ba26a879e0
Fix typo in crawler.py (#842)
seperator -> separator
2023-02-02 08:23:38 -08:00
Eli Mernit
bfabd1d5c0
Added new deployment template (#835)
This PR introduces a new template for deploying LangChain apps as web
endpoints. It includes template code and links to a detailed code
walkthrough.
2023-02-01 23:38:36 -08:00
Jonas Ehrenstein
f3508228df
Minor fix for google search util: it's uncertain if "snippet" in results exists (#830)
The results from Google search may not always contain a "snippet". 

Example:
`{'kind': 'customsearch#result', 'title': 'FEMA Flood Map', 'htmlTitle':
'FEMA Flood Map', 'link': 'https://msc.fema.gov/portal/home',
'displayLink': 'msc.fema.gov', 'formattedUrl':
'https://msc.fema.gov/portal/home', 'htmlFormattedUrl':
'https://<b>msc</b>.fema.gov/portal/home'}`

This will cause a KeyError at line 99
`snippets.append(result["snippet"])`.
2023-02-01 23:37:52 -08:00
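A minimal sketch of the defensive lookup the fix implies (the sample results below are illustrative); it skips results that have no "snippet" field instead of raising a KeyError.

```python
# Illustrative search results: the first one has no "snippet" key.
results = [
    {"title": "FEMA Flood Map", "link": "https://msc.fema.gov/portal/home"},
    {"title": "Some page", "link": "https://example.com", "snippet": "…a text snippet…"},
]

# Only collect snippets that actually exist, avoiding the KeyError described above.
snippets = [result["snippet"] for result in results if "snippet" in result]
```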
Zach Schillaci
b4eb043b81
Minor fix to SQLDatabaseChain doc (#826) 2023-02-01 23:37:38 -08:00
Istora Mandiri
06438794e1
Fix typo in textsplitter docs (#825) 2023-02-01 23:32:35 -08:00
Raza Habib
9f8e05ffd4
Update __init__.py (#827)
Remove duplicate APIChain
2023-02-01 23:31:38 -08:00
Harrison Chase
b0d560be56
add to gallery (#824) 2023-02-01 07:10:15 -08:00
Johanna Appel
ebea40ce86
Add 'truncate' parameter for CohereEmbeddings (#798)
Currently, the 'truncate' parameter of the cohere API is not supported.

This means that by default, if you try to generate an embedding that is too
big, the call will simply fail with an error (which is frustrating when using
this embedding source e.g. with GPT-Index, because it's hard to handle
properly when generating a lot of embeddings).
With the parameter, you can choose to truncate either the START or the END of
the text to fit the maximum token length and still generate an embedding
without the error being thrown.

In this PR, I added this parameter to the class.

_Arguably, there should be a better way to handle this error, e.g. by
optionally calling a function that gets triggered when the token limit is
reached and can split the document or something similar. Especially in the
GPT-Index use case, it's often hard to estimate the token count for each
document, and I'd rather sort out the troublemakers or simply split them than
interrupt the whole execution.
Thoughts?_

---------

Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
2023-02-01 07:09:03 -08:00
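A hedged usage sketch of the new parameter; the import path and the accepted values ("START"/"END"/"NONE") follow the Cohere API's truncate options and should be treated as assumptions about this wrapper rather than guarantees.

```python
from langchain.embeddings import CohereEmbeddings

# Ask the Cohere API to truncate over-long inputs from the end instead of erroring.
embeddings = CohereEmbeddings(cohere_api_key="...", truncate="END")
vector = embeddings.embed_query("a very long document " * 2000)
```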
Harrison Chase
b9045f7e0d
bump version to 0075 (#819) 2023-01-31 00:18:32 -08:00
Harrison Chase
7b4882a2f4
Harrison/tf embeddings (#817)
Co-authored-by: Ryohei Kuroki <10434946+yakigac@users.noreply.github.com>
2023-01-31 00:00:08 -08:00
Harrison Chase
5d4b6e4d4e
conversational agent fix (#818) 2023-01-30 23:59:55 -08:00
Harrison Chase
94ae126747
return sql intermediate steps (#792) 2023-01-30 15:10:48 -08:00
bair82
ae5695ad32
Update cohere.py (#795)
When stop tokens are set in the Cohere LLM constructor, they are currently
not stripped from the response, but they should be.
2023-01-30 14:55:44 -08:00
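A minimal sketch of the stripping behavior described above, modeled on a generic stop-sequence cutoff helper; not necessarily the exact code in #795.

```python
import re

def enforce_stop_tokens(text: str, stop: list[str]) -> str:
    """Cut the generated text off at the first occurrence of any stop sequence."""
    return re.split("|".join(re.escape(s) for s in stop), text)[0]

# Cohere may echo the stop sequence at the end of its generation; strip it off.
print(enforce_stop_tokens("The answer is 42.\nObservation:", ["\nObservation:"]))
# -> "The answer is 42."
```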
Johanna Appel
cacf4091c0
Fix documentation for 'model' parameter in CohereEmbeddings (#797)
Currently, the 'model_name' parameter of the CohereEmbeddings class is not
supported, but 'model' is. The class documentation is inconsistent with this,
so I propose either fixing the documentation (this PR, for now) or fixing the
parameter.

Using 'model_name' produces the following error:
```
ValidationError: 1 validation error for CohereEmbeddings
model_name
  extra fields not permitted (type=value_error.extra)
```
2023-01-30 14:55:08 -08:00
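A short sketch contrasting the two parameter names; the model name "large" is only an example value.

```python
from langchain.embeddings import CohereEmbeddings

# Works: the wrapper's field is `model`.
embeddings = CohereEmbeddings(cohere_api_key="...", model="large")

# Raises the ValidationError quoted above, because `model_name` is not a declared field:
# embeddings = CohereEmbeddings(cohere_api_key="...", model_name="large")
```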
Jason Liu
54f9e4287f
Pass kwargs from initialize_agent into agent classmethod (#799)
# Problem
I noticed that in order to change the prefix of the prompt in the
`zero-shot-react-description` agent, we had to dig around and patch strings
buried deep in the agent's attributes. It requires the user to inspect a long
chain of attributes and classes:

`initialize_agent -> AgentExecutor -> Agent -> LLMChain -> Prompt from
Agent.create_prompt`

```python
agent = initialize_agent(
    tools=tools,
    llm=fake_llm,
    agent="zero-shot-react-description"
)
prompt_str = agent.agent.llm_chain.prompt.template
new_prompt_str = change_prefix(prompt_str)
agent.agent.llm_chain.prompt.template = new_prompt_str
```

# Implemented Solution

`initialize_agent` accepts `**kwargs` but passes them to `AgentExecutor` and
not to `ZeroShotAgent`. By simply passing the kwargs on to the agent class
method, we can support changing the prefix and suffix for one agent while
allowing future agents to take advantage of `initialize_agent`.


```python
agent = initialize_agent(
    tools=tools,
    llm=fake_llm,
    agent="zero-shot-react-description",
    agent_kwargs={"prefix": prefix, "suffix": suffix}
)
```

To be fair, this was before I found the docs on custom agents here:
https://langchain.readthedocs.io/en/latest/modules/agents/examples/custom_agent.html?highlight=custom%20#custom-llmchain
but I found that my use case just needed to change the prefix a little.


# Changes

* Pass kwargs to Agent class method
* Added a test to check suffix and prefix

---------

Co-authored-by: Jason Liu <jason@jxnl.coA>
2023-01-30 14:54:09 -08:00
Roger Zurawicki
c331009440
docs: Update langchain link to PyPI (#800)
Simple one-line fix

CONTRIBUTING used a link that pointed to the `ruff` project.
2023-01-30 14:53:16 -08:00
Roy Williams
6086292252
Centralize logic for loading from LangChainHub, add ability to pin dependencies (#805)
It's generally considered to be a good practice to pin dependencies to
prevent surprise breakages when a new version of a dependency is
released. This commit adds the ability to pin dependencies when loading
from LangChainHub.

Centralizing this logic and using urllib fixes an issue identified by some
Windows users, highlighted in this video:
https://youtu.be/aJ6IQUh8MLQ?t=537
2023-01-30 14:52:17 -08:00
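A rough sketch of the general pattern of fetching a hub artifact from a pinned git ref with `urllib`; the URL template, function name, and path are assumptions for illustration, not the exact scheme implemented in #805.

```python
import json
import urllib.request

# Hypothetical raw-content URL template; "{ref}" pins a tag or commit SHA
# instead of a moving branch, which is what prevents surprise breakages.
HUB_URL = "https://raw.githubusercontent.com/hwchase17/langchain-hub/{ref}/{path}"

def load_from_hub(path: str, ref: str = "master") -> dict:
    """Fetch a JSON artifact from the hub at a pinned ref using urllib."""
    url = HUB_URL.format(ref=ref, path=path)
    with urllib.request.urlopen(url) as response:
        return json.loads(response.read().decode("utf-8"))

# Example (hypothetical path): pin to a release tag or commit SHA instead of "master".
# config = load_from_hub("chains/llm-math/chain.json", ref="<commit-sha>")
```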
Harrison Chase
b3916f74a7
enable mmr search (#807) 2023-01-30 14:48:24 -08:00
Harrison Chase
f46f1d28af
expose memory key name (#808) 2023-01-30 14:48:12 -08:00
Harrison Chase
7728a848d0
Harrison/tracing docs (#806)
Co-authored-by: Ankush Gola <9536492+agola11@users.noreply.github.com>
2023-01-29 20:49:35 -08:00
Harrison Chase
f3da4dc6ba
Harrison/tracing docs (#804)
Co-authored-by: Ankush Gola <9536492+agola11@users.noreply.github.com>
2023-01-29 20:24:22 -08:00
Harrison Chase
ae1b589f60
Harrison/add link for support (#794) 2023-01-28 22:53:04 -08:00
Harrison Chase
6a20f07f0d
add link for support (#793) 2023-01-28 22:44:23 -08:00
Harrison Chase
fb2d7afe71
bump version to 0074 (#791) 2023-01-28 18:50:22 -08:00
Harrison Chase
1ad7973cc6
Harrison/tool decorator (#790)
Co-authored-by: Jason Liu <jxnl@users.noreply.github.com>
Co-authored-by: Jason Liu <jason@jxnl.coA>
2023-01-28 18:26:24 -08:00
Harrison Chase
5f73d06502
Harrison/fix caching bug (#788)
Co-authored-by: thepok <richterthepok@yahoo.de>
2023-01-28 14:24:30 -08:00
Harrison Chase
248c297f1b
Sample row in table info for SQLDatabase (#769) (#782)
Agents usually benefit from understanding what the data looks like in order
to filter effectively. Including just one sample row in the table info lets
the agent understand the data before querying and produces better results.

---------

Co-authored-by: Francisco Ingham <fpingham@gmail.com>
2023-01-28 13:37:07 -08:00
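A hedged usage sketch of the feature described above; the keyword name `sample_rows_in_table_info` is an assumption for illustration, so check the actual `SQLDatabase` signature.

```python
from langchain.sql_database import SQLDatabase

# Include one sample row in the table info so the agent can see what the data
# looks like before writing a query (parameter name assumed, see note above).
db = SQLDatabase.from_uri("sqlite:///example.db", sample_rows_in_table_info=1)
print(db.table_info)  # CREATE TABLE statements plus a sample row per table
```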
Francisco Ingham
213c2e33e5
Sql prompt improvement (#787)
Co-authored-by: Francisco Ingham <>
2023-01-28 13:34:15 -08:00
Harrison Chase
2e0219cac0
fixing bash util (#779) 2023-01-28 08:26:29 -08:00
Harrison Chase
966611bbfa
add model kwargs to handle stop token from cohere (#773) 2023-01-28 08:24:55 -08:00
Harrison Chase
7198a1cb22
Harrison/refactor agent (#781)
Co-authored-by: Amos Ng <me@amos.ng>
2023-01-28 08:24:13 -08:00
Harrison Chase
5bb2952860
Harrison/hf pipeline (#780)
Co-authored-by: Parth Chadha <parth29@gmail.com>
2023-01-28 08:23:59 -08:00
Harrison Chase
c658f0aed3
Harrison/add to search (#778)
Co-authored-by: Enrico Shippole <enricoship@gmail.com>
2023-01-28 08:06:00 -08:00
Bill Kish
309d86e339
increase text-davinci-003 contextsize to 4097 (#748)
text-davinci-003 supports a context size of 4097 tokens so return 4097
instead of 4000 in modelname_to_contextsize() for text-davinci-003

Co-authored-by: Bill Kish <bill@cogniac.co>
2023-01-28 08:05:35 -08:00
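A pared-down sketch of the kind of lookup `modelname_to_contextsize()` performs; only the value corrected in #748 is shown, and the fallback is illustrative.

```python
def modelname_to_contextsize(modelname: str) -> int:
    """Return the maximum context size, in tokens, for a known model name."""
    context_sizes = {
        "text-davinci-003": 4097,  # previously reported as 4000
    }
    return context_sizes.get(modelname, 2049)  # fallback value is illustrative
```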
Amos Ng
6ad360bdef
Suggestions for better debugging (#765)
Please feel free to disregard any changes you disagree with
2023-01-28 08:05:20 -08:00
Albert Ziegler
5198d6f541
Add missing verb (#768)
Mini drive-by PR:

I came across this sentence in a stack trace for an error I had, and it
confused me because the verb was missing. So I added the verb.
2023-01-28 07:26:27 -08:00
Harrison Chase
a5d003f0c9
update notebook and make backwards compatible (#772) 2023-01-28 07:23:04 -08:00
Harrison Chase
924b7ecf89
pass kwargs and bump (#770) 2023-01-27 08:56:36 -08:00