Commit Graph

483 Commits (main)

Author SHA1 Message Date
Harrison Chase 88bebb4caa
Harrison/llm integrations (#1039)
Co-authored-by: jped <jonathanped@gmail.com>
Co-authored-by: Justin Torre <justintorre75@gmail.com>
Co-authored-by: Ivan Vendrov <ivan@anthropic.com>
1 year ago
Harrison Chase ec727bf166
Align table info (#999) (#1034)
Currently the chain is getting the column names and types on the one
side and the example rows on the other. It is easier for the LLM to read
the table information if the column names and examples are shown together,
so that it can easily understand which columns the examples refer to. For
an instantiation of this, please refer to the changes in the
`sqlite.ipynb` notebook.

Also replaced `eval` with `ast.literal_eval` when interpreting the results
from the sample row query, since it is better practice.

---------

Co-authored-by: Francisco Ingham <>

---------

Co-authored-by: Francisco Ingham <fpingham@gmail.com>
1 year ago
Harrison Chase 8c45f06d58
Harrison/standarize prompt loading (#1036)
Co-authored-by: Ibis Prevedello <ibiscp@gmail.com>
1 year ago
Enrico Shippole f30dcc6359
Add GooseAI, CerebriumAI, Petals, ForefrontAI (#981)
Add GooseAI, CerebriumAI, Petals, ForefrontAI
1 year ago
Anton Troynikov d43d430d86
Chroma persistence (#1028)
This PR adds persistence to the Chroma vector store.

Users can supply a `persist_directory` with any of the `Chroma` creation
methods. If supplied, the store will be automatically persisted at that
directory.

If a user creates a new `Chroma` instance with the same persistence
directory, it will get loaded up automatically. If they use `from_texts`
or `from_documents` in this way, the documents will be loaded into the
existing store.

There is the chance of some funky behavior if the user passes a
different embedding function from the one used to create the collection
- we will make this easier in future updates. For now, we log a warning.
1 year ago
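
A minimal sketch of the persistence behaviour described above, assuming the `Chroma` API of this release; the texts, embedding model, and directory are illustrative:

```python
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import Chroma

embeddings = OpenAIEmbeddings()

# Create a store that writes its data under persist_directory.
store = Chroma.from_texts(
    ["hello world"],
    embeddings,
    persist_directory="./chroma_db",  # hypothetical path
)

# A new instance pointed at the same directory picks up the existing collection.
reloaded = Chroma(persist_directory="./chroma_db", embedding_function=embeddings)
```
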
Oliver Klingefjord 20889205e8
Added retry for openai.error.ServiceUnavailableError (#1022)
Imho retries should be performed for ServiceUnavailableError (which
tends to happen to me quite often).
1 year ago
Harrison Chase 0f0e69adce
agent refactors (#997) 1 year ago
Harrison Chase 7fb33fca47
chroma docs (#1012) 1 year ago
Harrison Chase 0c553d2064
Harrion/kg (#1016)
Co-authored-by: William FH <13333726+hinthornw@users.noreply.github.com>
1 year ago
Anton Troynikov 78abd277ff
Chroma in LangChain (#1010)
Chroma is a simple-to-use, open-source, zero-config, zero-setup
vectorstore.

Simply `pip install chromadb`, and you're good to go. 

Out-of-the-box Chroma is suitable for most LangChain workloads, but is
highly flexible. I tested up to 1M embeddings on my M1 Mac, without issues
and with reasonably fast query times.

Look out for future releases as we integrate more Chroma features with
LangChain!
1 year ago
Harrison Chase 0998577dfe
Harrison/unstructured structured (#1004) 1 year ago
Harrison Chase bbb06ca4cf
pdfminer (#1003) 1 year ago
Harrison Chase 10e7297306
Harrison/fake llm (#990)
Co-authored-by: Stefan Keselj <skeselj@princeton.edu>
Co-authored-by: Harrison Chase <harrisonchase@Harrisons-MBP.attlocal.net>
1 year ago
Harrison Chase e51fad1488
Harrison/0083 (#996)
Co-authored-by: Harrison Chase <harrisonchase@Harrisons-MBP.attlocal.net>
1 year ago
Shahriar Tajbakhsh b7747017d7
Import of `declarative_base` when SQLAlchemy <1.4 (#883)
In
[pyproject.toml](https://github.com/hwchase17/langchain/blob/master/pyproject.toml),
the expectation is `SQLAlchemy = "^1"`. But, the way `declarative_base`
is imported in
[cache.py](https://github.com/hwchase17/langchain/blob/master/langchain/cache.py)
will only work with SQLAlchemy >=1.4. This PR makes sure Langchain can
be run in environments with SQLAlchemy <1.4
1 year ago
Harrison Chase 2e96704d59
Harrison/airbyte (#989)
Co-authored-by: zanderchase <zanderchase@gmail.com>
Co-authored-by: Harrison Chase <harrisonchase@Harrisons-MacBook-Pro.local>
1 year ago
zanderchase c2d1d903fa
Zander/online pdf loader (#984) 1 year ago
Matt Robinson 07a407d89a
feat: adds `UnstructuredURLLoader` for loading data from urls (#979)
### Summary

Adds a `UnstructuredURLLoader` that supports loading data from a list of
URLs.


### Testing

```python
from langchain.document_loaders import UnstructuredURLLoader

urls = [
    "https://www.understandingwar.org/backgrounder/russian-offensive-campaign-assessment-february-8-2023",
    "https://www.understandingwar.org/backgrounder/russian-offensive-campaign-assessment-february-9-2023"
]
loader = UnstructuredURLLoader(urls=urls)
raw_documents = loader.load()
```
1 year ago
Harrison Chase c64f98e2bb
Harrison/format agent instructions (#973)
Co-authored-by: Andrew White <white.d.andrew@gmail.com>
Co-authored-by: Harrison Chase <harrisonchase@Harrisons-MBP.attlocal.net>
Co-authored-by: Peng Qu <82029664+pengqu123@users.noreply.github.com>
1 year ago
Harrison Chase 5469d898a9
Harrison/everynote (#974)
Co-authored-by: Harrison Chase <harrisonchase@Harrisons-MBP.attlocal.net>
1 year ago
Harrison Chase 3d639d1539
update lint (#975)
Co-authored-by: Harrison Chase <harrisonchase@Harrisons-MBP.attlocal.net>
1 year ago
Harrison Chase 91c6cea227
Harrison/batch embeds (#972)
Co-authored-by: John Dagdelen <jdagdelen@users.noreply.github.com>
Co-authored-by: Harrison Chase <harrisonchase@Harrisons-MBP.attlocal.net>
1 year ago
Harrison Chase ba54d36787
Harrison/tiktoken spec (#964)
Co-authored-by: James Briggs <35938317+jamescalam@users.noreply.github.com>
Co-authored-by: Harrison Chase <harrisonchase@Harrisons-MBP.attlocal.net>
1 year ago
Kevin Huo 512c523368
remove sample_row_in_table_info and simplify set operations in SQLDB (#932)
- Address TODO: deprecate sample_row_in_table_info
- Simplify set operations by casting to sets, avoiding multiple set
casts and `.difference()` calls
1 year ago
Harrison Chase 01fa2d8117
Harrison/youtube fixes (#955)
Co-authored-by: Ji <jizhang.work@gmail.com>
Co-authored-by: Harrison Chase <harrisonchase@Harrisons-MBP.attlocal.net>
1 year ago
zanderchase 8e126bc9bd
adding webpage loading logic (#942) 1 year ago
Usama Navid e85c53ce68
Update readthedocs.py (#943)
Sometimes the docs may be empty. For example,
`text = soup.find_all("main", {"id": "main-content"})` returned an empty
list. To cater to these edge cases, the cleaned text needs to be checked
for emptiness.
1 year ago
Harrison Chase 3e1901e1aa
gutenberg books (#946)
Co-authored-by: zanderchase <zander@unfold.ag>
Co-authored-by: Harrison Chase <harrisonchase@Harrisons-MBP.attlocal.net>
1 year ago
Harrison Chase 44ecec3896
Harrison/add roam loader (#939) 1 year ago
Ankush Gola bc7e56e8df
Add asyncio support for LLM (OpenAI), Chain (LLMChain, LLMMathChain), and Agent (#841)
Supporting asyncio in langchain primitives allows for users to run them
concurrently and creates more seamless integration with
asyncio-supported frameworks (FastAPI, etc.)

Summary of changes:

**LLM**
* Add `agenerate` and `_agenerate`
* Implement in OpenAI by leveraging `client.Completions.acreate`

**Chain**
* Add `arun`, `acall`, `_acall`
* Implement them in `LLMChain` and `LLMMathChain` for now

**Agent**
* Refactor and leverage async chain and llm methods
* Add ability for `Tools` to contain async coroutine
* Implement async SerpAPI `arun`

Create demo notebook.

Open questions:
* Should all the async stuff go in separate classes? I've seen both
patterns (keeping the same class and having async and sync methods vs.
having class separation)
1 year ago
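
A minimal sketch of the new async entry points, assuming the LLM and chain APIs of this release; the prompt and inputs are illustrative:

```python
import asyncio

from langchain.chains import LLMChain
from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate

async def main():
    llm = OpenAI(temperature=0)
    prompt = PromptTemplate(input_variables=["topic"], template="Tell me about {topic}.")
    chain = LLMChain(llm=llm, prompt=prompt)

    # Run two chain calls concurrently instead of one after the other.
    results = await asyncio.gather(chain.arun("asyncio"), chain.arun("FastAPI"))
    print(results)

asyncio.run(main())
```
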
Harrison Chase bc53c928fc
Harrison/athropic (#921)
Co-authored-by: Mike Lambert <mlambert@gmail.com>
Co-authored-by: mrbean <sam@you.com>
Co-authored-by: mrbean <43734688+sam-h-bean@users.noreply.github.com>
Co-authored-by: Ivan Vendrov <ivendrov@gmail.com>
1 year ago
Harrison Chase 637c0d6508
Harrison/obsidian (#920) 1 year ago
Harrison Chase 1e56879d38
Harrison/save faiss (#916)
Co-authored-by: Shrey Joshi <shreyjoshi2004@gmail.com>
1 year ago
Ankush Gola 6bd1529cb7
add GoogleDriveLoader (#914)
only deal with docs files for now
1 year ago
Harrison Chase 2584663e44
remove unused buffer (#919) 1 year ago
Harrison Chase 87fad8fc00
analyze document (#731)
add analyze document chain, which does text splitting and then analysis
1 year ago
Harrison Chase e2b834e427
Harrison/prompt template prefix (#888)
Co-authored-by: Gabriel Simmons <simmons.gabe@gmail.com>
1 year ago
Harrison Chase f95cedc443
Harrison/sql rows (#915)
Co-authored-by: Jon Luo <20971593+jzluo@users.noreply.github.com>
1 year ago
Harrison Chase ba5a2f06b9
Harrison/inference endpoint (#861)
Co-authored-by: Eno Reyes <enoreyes@gmail.com>
1 year ago
Harrison Chase 2ec25ddd4c
add unstructured examples (#913) 1 year ago
Harrison Chase 93a091cfb8
Optionally return shell output on incorrect command (#894) (#899)
This allows the LLM to correct its previous command by looking at the
error message output to the shell.

Additionally, this uses subprocess.run because that is now recommended
over subprocess.check_output:

https://docs.python.org/3/library/subprocess.html#using-the-subprocess-module

Co-authored-by: Amos Ng <me@amos.ng>
1 year ago
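
A rough, self-contained illustration (not the exact `BashProcess` code) of why `subprocess.run` fits here: it can capture stderr so the error text can be handed back to the LLM instead of raising:

```python
import subprocess

def run_command(command: str) -> str:
    result = subprocess.run(
        command,
        shell=True,
        capture_output=True,
        text=True,
    )
    if result.returncode != 0:
        # Return the error message so the caller (e.g. an LLM) can correct itself.
        return result.stderr
    return result.stdout

print(run_command("ls /nonexistent-path"))
```
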
James Briggs 3aa53b44dd
added i_end in batch extraction (#907)
Fix for issue #906 

Switches `[i : i + batch_size]` to `[i : i_end]` in Pinecone
`from_texts` method
1 year ago
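
A small sketch of the batching pattern the fix refers to: `i_end` clamps the final slice to the list length, so the last, possibly shorter, batch stays aligned with its ids and metadata slices:

```python
texts = [f"doc {n}" for n in range(10)]
batch_size = 4

for i in range(0, len(texts), batch_size):
    i_end = min(i + batch_size, len(texts))
    batch = texts[i:i_end]  # previously texts[i : i + batch_size], which could overshoot
    print(i, i_end, batch)
```
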
Harrison Chase 53d56d7650
Harrison/unstructured support (#903) 1 year ago
Harrison Chase 2a68be3e8d
chat vector db chain (#902) 1 year ago
Bagatur 7658263bfb
Check type of LLM.generate `prompts` arg (#886)
I was passing the prompt in directly as a string and getting nonsense
outputs. I had to inspect the source code to realize that the first arg
should be a list. It would be nice if there were an explicit error or
warning, since this seems like a common mistake.
1 year ago
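
A quick illustration of the pitfall described above, assuming the `OpenAI` LLM wrapper of this release:

```python
from langchain.llms import OpenAI

llm = OpenAI(temperature=0)

# Correct: `generate` expects a list of prompt strings.
result = llm.generate(["Say hello."])

# Incorrect, and what the new type check is meant to catch:
# llm.generate("Say hello.")
```
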
Samantha Whitmore 32b11101d3
Get elements of ActionInput on newlines (#889)
The re.DOTALL flag in Python's re (regular expression) module makes the
. (dot) metacharacter match newline characters as well as any other
character.

Without re.DOTALL, the . metacharacter only matches any character except
for a newline character. With re.DOTALL, the . metacharacter matches any
character, including newline characters.
1 year ago
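
A quick demonstration of the `re.DOTALL` behaviour described above; the pattern and text are illustrative, not the agent's actual regex:

```python
import re

text = "Action Input: line one\nline two"

without_dotall = re.search(r"Action Input: (.*)", text)
with_dotall = re.search(r"Action Input: (.*)", text, re.DOTALL)

print(without_dotall.group(1))  # 'line one'
print(with_dotall.group(1))     # 'line one\nline two'
```
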
Harrison Chase 1614c5f5fd
fix flaky tests (#892) 1 year ago
Harrison Chase a2b699dcd2
prompt template from string (#884) 1 year ago
Zach Schillaci 4c79100b15
Correct prompt typo + update example for SQLDatabaseChain (#868)
See https://github.com/hwchase17/langchain/issues/821
1 year ago
Harrison Chase 777aaff841
fix routing to tiktoken encoder (#866) 1 year ago
Harrison Chase e9ef08862d
validate template (#865) 1 year ago
Harrison Chase 364b771743
sql return direct (#864) 1 year ago
Harrison Chase 483441d305
pass kwargs through to loading (#863) 1 year ago
Harrison Chase 8df6b68093
fix length based example selector (#862) 1 year ago
Harrison Chase 3f48eed5bd
Harrison/milvus (#856)
Signed-off-by: Filip Haltmayer <filip.haltmayer@zilliz.com>
Signed-off-by: Frank Liu <frank.liu@zilliz.com>
Co-authored-by: Filip Haltmayer <81822489+filip-halt@users.noreply.github.com>
Co-authored-by: Frank Liu <frank@frankzliu.com>
1 year ago
Ankush Gola 933441cc52
Add retry to OpenAI llm (#849)
add ability to retry when certain exceptions are raised by
`openai.Completions.create`

Test plan: ran all OpenAI integration tests.
1 year ago
kahkeng 4a8f5cdf4b
Add alternative token-based text splitter (#816)
This does not involve a separator, and will naively chunk input text at
the appropriate boundaries in token space.

This is helpful if we have strict token length limits, need to follow the
specified chunk size exactly, and can't use aggressive separators like
spaces to guarantee the absence of long strings.

CharacterTextSplitter will let these strings through without splitting
them, which could cause overflow errors downstream.

Splitting at arbitrary token boundaries is not ideal, but this is hopefully
mitigated by having a decent overlap quantity. It also results in
chunks that have exactly the desired number of tokens, instead of sometimes
overcounting when we concatenate shorter strings.

Potentially also helps with #528.
1 year ago
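
A minimal sketch of naive token-space chunking as described, assuming `tiktoken` for tokenization; the chunk size and overlap values are illustrative:

```python
from typing import List

import tiktoken

def split_by_tokens(text: str, chunk_size: int = 100, chunk_overlap: int = 20) -> List[str]:
    enc = tiktoken.get_encoding("gpt2")
    tokens = enc.encode(text)
    chunks = []
    start = 0
    while start < len(tokens):
        end = min(start + chunk_size, len(tokens))
        # Each chunk is exactly chunk_size tokens, except possibly the last one.
        chunks.append(enc.decode(tokens[start:end]))
        start += chunk_size - chunk_overlap
    return chunks
```
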
Harrison Chase 23d5f64bda
Harrison/ngram example (#846)
Co-authored-by: Sean Spriggens <ssprigge@syr.edu>
1 year ago
Harrison Chase 0de55048b7
return code for pal (#844) 1 year ago
Harrison Chase d564308e0f
rfc: instruct embeddings (#811)
Co-authored-by: seanaedmiston <seane999@gmail.com>
1 year ago
Nick Furlotte 576609e665
Update PAL to allow passing local and global context to PythonREPL (#774)
Passing additional variables to the Python environment can be useful, for
example, if you want to generate code to analyze a dataset.

I also added a tracker for the executed code - `code_history`.
1 year ago
Harrison Chase 3f952eb597
add from string method (#820) 1 year ago
Ikko Eltociear Ashimine ba26a879e0
Fix typo in crawler.py (#842)
seperator -> separator
1 year ago
Jonas Ehrenstein f3508228df
Minor fix for google search util: it's uncertain if "snippet" in results exists (#830)
The results from Google search may not always contain a "snippet". 

Example:
`{'kind': 'customsearch#result', 'title': 'FEMA Flood Map', 'htmlTitle':
'FEMA Flood Map', 'link': 'https://msc.fema.gov/portal/home',
'displayLink': 'msc.fema.gov', 'formattedUrl':
'https://msc.fema.gov/portal/home', 'htmlFormattedUrl':
'https://<b>msc</b>.fema.gov/portal/home'}`

This will cause a KeyError at line 99
`snippets.append(result["snippet"])`.
1 year ago
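
A sketch of the defensive check implied above: only collect `"snippet"` when the result actually contains one, avoiding the `KeyError`:

```python
results = [
    {"title": "FEMA Flood Map", "link": "https://msc.fema.gov/portal/home"},  # no "snippet"
    {"title": "Some page", "snippet": "An actual snippet."},
]

snippets = [r["snippet"] for r in results if "snippet" in r]
print(snippets)  # ['An actual snippet.']
```
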
Zach Schillaci b4eb043b81
Minor fix to SQLDatabaseChain doc (#826) 1 year ago
Raza Habib 9f8e05ffd4
Update __init__.py (#827)
Remove duplicate APIChain
1 year ago
Johanna Appel ebea40ce86
Add 'truncate' parameter for CohereEmbeddings (#798)
Currently, the 'truncate' parameter of the cohere API is not supported.

This means that by default, if trying to generate an embedding that is
too big, the call will just fail with an error (which is frustrating if
using this embedding source e.g. with GPT-Index, because it's hard to
handle it properly when generating a lot of embeddings).
With the parameter, one can decide to either truncate the START or END
of the text to fit the max token length and still generate an embedding
without throwing the error.

In this PR, I added this parameter to the class.

_Arguably, there should be a better way to handle this error, e.g. by
optionally calling a function that gets triggered when the token
limit is reached and can split the document or some such. Especially in
the use case with GPT-Index, it's often hard to estimate the token counts
for each document, and I'd rather sort out the troublemakers or simply
split them than interrupt the whole execution.
Thoughts?_

---------

Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
1 year ago
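
A hedged usage sketch of the new parameter as described; the keyword and its values are assumptions based on the Cohere API (`"START"`, `"END"`, `"NONE"`):

```python
from langchain.embeddings import CohereEmbeddings

embeddings = CohereEmbeddings(
    cohere_api_key="YOUR_API_KEY",  # placeholder
    truncate="END",                 # cut over-length input at the end instead of erroring
)

vector = embeddings.embed_query("a very long document ...")
```
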
Harrison Chase 7b4882a2f4
Harrison/tf embeddings (#817)
Co-authored-by: Ryohei Kuroki <10434946+yakigac@users.noreply.github.com>
1 year ago
Harrison Chase 5d4b6e4d4e
conversational agent fix (#818) 1 year ago
Harrison Chase 94ae126747
return sql intermediate steps (#792) 1 year ago
bair82 ae5695ad32
Update cohere.py (#795)
When stop tokens are set in the Cohere LLM constructor, they are currently
not stripped from the response, but they should be.
1 year ago
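
A small sketch of the behaviour the fix enables, mirroring a generic stop-token stripping helper rather than the exact code in `cohere.py`:

```python
import re

def enforce_stop_tokens(text: str, stop: list) -> str:
    # Keep only the text before the first stop sequence.
    return re.split("|".join(re.escape(s) for s in stop), text)[0]

print(enforce_stop_tokens("Paris\nQuestion: what next?", ["\nQuestion:"]))  # 'Paris'
```
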
Johanna Appel cacf4091c0
Fix documentation for 'model' parameter in CohereEmbeddings (#797)
Currently, the class parameter 'model_name' of the CohereEmbeddings
class is not supported, but 'model' is. The class documentation is
inconsistent with this, though, so I propose to either fix the
documentation (this PR right now) or fix the parameter.

It will create the following error:
```
ValidationError: 1 validation error for CohereEmbeddings
model_name
  extra fields not permitted (type=value_error.extra)
```
1 year ago
Jason Liu 54f9e4287f
Pass kwargs from initialize_agent into agent classmethod (#799)
# Problem
I noticed that in order to change the prefix of the prompt in the
`zero-shot-react-description` agent, we had to dig around and modify
strings buried deep in the agent's attributes. This requires the user to
inspect a long chain of attributes and classes.

`initialize_agent -> AgentExecutor -> Agent -> LLMChain -> Prompt from
Agent.create_prompt`

``` python
agent = initialize_agent(
    tools=tools,
    llm=fake_llm,
    agent="zero-shot-react-description"
)
prompt_str = agent.agent.llm_chain.prompt.template
new_prompt_str = change_prefix(prompt_str)
agent.agent.llm_chain.prompt.template = new_prompt_str
```

# Implemented Solution

`initialize_agent` accepts `**kwargs` but passes them to `AgentExecutor`,
not to `ZeroShotAgent`. By simply giving the kwargs to the agent class
methods, we can support changing the prefix and suffix for one agent
while allowing future agents to take advantage of `initialize_agent`.


```
agent = initialize_agent(
    tools=tools,
    llm=fake_llm,
    agent="zero-shot-react-description",
    agent_kwargs={"prefix": prefix, "suffix": suffix}
)
```

To be fair, this was before finding docs around custom agents here:
https://langchain.readthedocs.io/en/latest/modules/agents/examples/custom_agent.html?highlight=custom%20#custom-llmchain
but I found that my use case just needed to change the prefix a little.


# Changes

* Pass kwargs to Agent class method
* Added a test to check suffix and prefix

---------

Co-authored-by: Jason Liu <jason@jxnl.coA>
1 year ago
Roy Williams 6086292252
Centralize logic for loading from LangChainHub, add ability to pin dependencies (#805)
It's generally considered good practice to pin dependencies to
prevent surprise breakages when a new version of a dependency is
released. This commit adds the ability to pin dependencies when loading
from LangChainHub.

Centralizing this logic and using urllib fixes an issue identified by
some windows users highlighted in this video -
https://youtu.be/aJ6IQUh8MLQ?t=537
1 year ago
Harrison Chase b3916f74a7
enable mmr search (#807) 1 year ago
Harrison Chase f46f1d28af
expose memory key name (#808) 1 year ago
Harrison Chase 1ad7973cc6
Harrison/tool decorator (#790)
Co-authored-by: Jason Liu <jxnl@users.noreply.github.com>
Co-authored-by: Jason Liu <jason@jxnl.coA>
1 year ago
Harrison Chase 5f73d06502
Harrison/fix caching bug (#788)
Co-authored-by: thepok <richterthepok@yahoo.de>
1 year ago
Harrison Chase 248c297f1b
Sample row in table info for SQLDatabase (#769) (#782)
The agents usually benefit from understanding what the data looks like
to be able to filter effectively. Sending just one row in the table info
allows the agent to understand the data before querying and get better
results.

---------

Co-authored-by: Francisco Ingham <>

---------

Co-authored-by: Francisco Ingham <fpingham@gmail.com>
1 year ago
Francisco Ingham 213c2e33e5
Sql prompt improvement (#787)
Co-authored-by: Francisco Ingham <>
1 year ago
Harrison Chase 2e0219cac0
fixing bash util (#779) 1 year ago
Harrison Chase 966611bbfa
add model kwargs to handle stop token from cohere (#773) 1 year ago
Harrison Chase 7198a1cb22
Harrison/refactor agent (#781)
Co-authored-by: Amos Ng <me@amos.ng>
1 year ago
Harrison Chase 5bb2952860
Harrison/hf pipeline (#780)
Co-authored-by: Parth Chadha <parth29@gmail.com>
1 year ago
Harrison Chase c658f0aed3
Harrison/add to search (#778)
Co-authored-by: Enrico Shippole <enricoship@gmail.com>
1 year ago
Bill Kish 309d86e339
increase text-davinci-003 contextsize to 4097 (#748)
text-davinci-003 supports a context size of 4097 tokens so return 4097
instead of 4000 in modelname_to_contextsize() for text-davinci-003

Co-authored-by: Bill Kish <bill@cogniac.co>
1 year ago
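
A minimal sketch of the kind of lookup `modelname_to_contextsize()` performs; the mapping and fallback here are illustrative, not the full table:

```python
def modelname_to_contextsize(modelname: str) -> int:
    context_sizes = {
        "text-davinci-003": 4097,  # the corrected value from this commit
    }
    return context_sizes.get(modelname, 4000)  # fallback is illustrative

print(modelname_to_contextsize("text-davinci-003"))  # 4097
```
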
Albert Ziegler 5198d6f541
Add missing verb (#768)
Mini drive-by PR:

I came across this sentence in a stack trace for an error I had, and it
confused me because the verb was missing. So I added the verb.
1 year ago
Harrison Chase a5d003f0c9
update notebook and make backwards compatible (#772) 1 year ago
Harrison Chase 924b7ecf89
pass kwargs and bump (#770) 1 year ago
Samantha Whitmore be7de427ca
Serialize all the chains! (#761)
Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
1 year ago
Harrison Chase e2a7fed890
Harrison/serialize from llm and tools (#760) 1 year ago
Harrison Chase 12dc7f26cc
load agents from hub (#759) 1 year ago
Harrison Chase 7129f23511
output parser serialization (#758) 1 year ago
Harrison Chase f273c50d62
add loading chains from hub (#757) 1 year ago
Harrison Chase 1b89a438cf
(wip) Harrison/serialize agents (#725) 1 year ago
Harrison Chase cc70565886
add prompt type (#730) 1 year ago
Francisco Ingham 374e510f94
Upper bound on number of iterations (#754)
Some custom agents might continue to iterate until they find the correct
answer, getting stuck on loops that generate request after request and
are really expensive for the end user. Putting an upper bound for the
number of iterations
by default controls this and can be explicitly tweaked by the user if
necessary.

Co-authored-by: Francisco Ingham <>
1 year ago
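
A hedged usage sketch of the new upper bound, assuming it is exposed as a `max_iterations` keyword forwarded by `initialize_agent` to the agent executor:

```python
from langchain.agents import initialize_agent, load_tools
from langchain.llms import OpenAI

llm = OpenAI(temperature=0)
tools = load_tools(["llm-math"], llm=llm)

agent = initialize_agent(
    tools,
    llm,
    agent="zero-shot-react-description",
    max_iterations=3,  # assumed keyword; caps the number of reasoning steps
)
```
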
Smit Shah 28efbb05bf
Add params to reduce K dynamically to reduce it below token limit (#739)
Referring to #687, I implemented the functionality to reduce K if it
exceeds the token limit.

Edit: I should have run make lint locally. Also, this only applies to
`StuffDocumentChain`
1 year ago
Roy Williams d2f882158f
Add type information for crawler.py (#738)
Added type information to `crawler.py` to make it safer to use and
understand.
1 year ago
Ankush Gola 57609845df
add tracing support to langchain (#741)
* add implementations of `BaseCallbackHandler` to support tracing:
`SharedTracer` which is thread-safe and `Tracer` which is not and is
meant to be used locally.
* Tracers persist runs to locally running `langchain-server`

Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
1 year ago
Harrison Chase 7f76a1189c
bump version to 0.0.70 (#744) 1 year ago
Harrison Chase 2ba1128095
Harrison/backwards compat (#740) 1 year ago
Francisco Ingham f9ddcb5705
Hotfix: distance_func and collection_name must not be in kwargs (#735)
If `distance_func` and `collection_name` are in `kwargs` they are sent
to the `QdrantClient` which results in an error being raised.

Co-authored-by: Francisco Ingham <>
1 year ago
Amos Ng fa6826e417
Fix sqlalchemy warnings when running tests (#733)
This has been bugging me when running my own tests that call langchain
methods :P
1 year ago
Harrison Chase 9194a8be89
add stop to stream (#729) 1 year ago
scadEfUr e3df8ab6dc
move hyde into chains (#728)
Co-authored-by: scadEfUr <>
1 year ago
Harrison Chase 0ffeabd14f
Harrison/serialize llm chain (#671) 1 year ago
Feynman Liang 2824f36401
Add namespace to Pinecone.from_index (#716)
Resolves https://github.com/hwchase17/langchain/issues/718
1 year ago
Kacper Łukawski d4f719c34b
Convert numpy arrays to lists in HuggingFaceEmbeddings (#714)
`SentenceTransformer` returns a NumPy array, not a `List[List[float]]`
or `List[float]` as specified in the interface of `Embeddings`. That PR
makes it consistent with the interface.
1 year ago
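
A rough sketch of the conversion this fix makes: the NumPy array returned by `SentenceTransformer.encode` is turned into plain Python lists to match the `Embeddings` interface:

```python
import numpy as np

raw = np.array([[0.1, 0.2], [0.3, 0.4]])  # stand-in for model.encode(texts)
as_lists = raw.tolist()
print(type(as_lists), type(as_lists[0]))  # <class 'list'> <class 'list'>
```
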
Kacper Łukawski 97c3544a1e
Hotfix: Qdrant.from_text embeddings (#713)
I'm providing a hotfix for the Qdrant integration. Calculating a single
embedding to obtain the vector size was a great idea. However, that change
introduced a bug trying to put only that single embedding into the
database. It's fixed. Right now all the embeddings will be pushed to
Qdrant.
1 year ago
Feynman Liang 3a38604f07
Fix typo (#705) 1 year ago
Harrison Chase fc4ad2db0f
langchain hub docs (#704)
Co-authored-by: scadEfUr <123224380+scadEfUr@users.noreply.github.com>
1 year ago
Scott Leibrand 34932dd211
remove legacy embedding model name (#703)
Now that OpenAI has deprecated all embeddings models except
text-embedding-ada-002, we should stop specifying a legacy embedding
model in the example. This will also avoid confusion from people (like
me) trying to specify model="text-embedding-ada-002" and having that
erroneously expanded to text-search-text-embedding-ada-002-query-001
1 year ago
scadEfUr 4aba0abeaa
added common prompt load method (#699)
Co-authored-by: scadEfUr
1 year ago
xloem 36b6b3cdf6
HuggingFacePipeline: Forward model_kwargs. (#696)
Since the tokenizer and model are constructed manually, model_kwargs needs
to be passed to their constructors. Additionally, the pipeline has a
specific named parameter to pass these with, which can provide forward
compatibility if they are used for something other than tokenizer or model
construction.
1 year ago
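
A hedged sketch of the forwarding described above, using the `transformers` API directly; the model id and kwargs are illustrative:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline

model_id = "gpt2"
model_kwargs = {"revision": "main"}  # illustrative kwargs

# Forward the kwargs to the manually constructed tokenizer and model...
tokenizer = AutoTokenizer.from_pretrained(model_id, **model_kwargs)
model = AutoModelForCausalLM.from_pretrained(model_id, **model_kwargs)

# ...and also hand them to the pipeline via its dedicated parameter.
pipe = pipeline("text-generation", model=model, tokenizer=tokenizer, model_kwargs=model_kwargs)
```
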
Harrison Chase 3a30e6daa8
Harrison/openai callback (#684) 1 year ago
Harrison Chase aef82f5d59
fix whitespace for conversational agent (#690) 1 year ago
Harrison Chase 86dbdb118b
Harrison/serpapi extra tools (#691)
Co-authored-by: Bruno Bornsztein <bruno.bornsztein@gmail.com>
1 year ago
Harrison Chase cbc146720b
verbose flag (#683) 1 year ago
Harrison Chase 27cef0870d
bump version to 0.0.67 (#689) 1 year ago
Samantha Whitmore 77e3d58922
ConversationEntityMemory: Chain which uses an entity extraction & summarization prompt to maintain a key-value store of memory information (#678)

cc @devennavani

Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
1 year ago
dham e04b063ff4
add faiss local saving/loading (#676)
- This uses the faiss built-in `write_index` and `load_index` to save
and load faiss indexes locally
- Also fixes #674
- The save/load functions also use the faiss library, so I refactored
the dependency into a function
1 year ago
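
A hedged sketch of local persistence built on the faiss primitives (faiss exposes `write_index` and `read_index`); the index and path are illustrative:

```python
import faiss
import numpy as np

index = faiss.IndexFlatL2(8)                      # an 8-dimensional flat index
index.add(np.random.rand(10, 8).astype("float32"))

faiss.write_index(index, "index.faiss")           # save to disk
restored = faiss.read_index("index.faiss")        # load it back
print(restored.ntotal)                            # 10
```
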
Harrison Chase e45f7e40e8
Harrison/few shot yaml (#682)
Co-authored-by: vintro <77507980+vintrocode@users.noreply.github.com>
1 year ago
Harrison Chase a2eeaf3d43
strip whitespace (#680) 1 year ago
Harrison Chase 3d41af0aba
Harrison/load tools kwargs (#681)
Co-authored-by: Bruno Bornsztein <bruno.bornsztein@gmail.com>
1 year ago
Harrison Chase 0b204d8c21
Harrison/quadrant (#665)
Co-authored-by: Kacper Łukawski <kacperlukawski@users.noreply.github.com>
1 year ago
Harrison Chase 983b73f47c
add search kwargs (#664) 1 year ago
vertinski 65f3a341b0
Prompt fix for empty intermediate steps in summarization (#660)
Adding quotation marks around {text} avoids generating empty or
completely random responses from OpenAI davinci-003. Empty or completely
unrelated intermediate responses in summarization mess up the final
result or make it very inaccurate.
The error from OpenAI would be: "The model predicted a completion that
begins with a stop sequence, resulting in no output. Consider adjusting
your prompt or stop sequences."
This fix corrects the prompting for the summarization chain. It works
through the API too; the images are for demonstration purposes.
This approach can be applied to other similar prompts too. 

Examples:

1) Without quotation marks
![Screenshot from 2023-01-20
07-18-19](https://user-images.githubusercontent.com/22897470/213624365-9dfc18f9-5f3f-45d2-abe1-56de67397e22.png)

2) With quotation marks
![Screenshot from 2023-01-20
07-18-35](https://user-images.githubusercontent.com/22897470/213624478-c958e742-a4a7-46fe-a163-eca6326d9dae.png)
1 year ago
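
A hedged sketch of the prompt change described: the `{text}` placeholder is wrapped in quotation marks; the surrounding template wording is illustrative:

```python
from langchain.prompts import PromptTemplate

prompt = PromptTemplate(
    input_variables=["text"],
    template='Write a concise summary of the following:\n\n"{text}"\n\nCONCISE SUMMARY:',
)
print(prompt.format(text="LangChain wraps the intermediate text in quotation marks."))
```
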
iocuydi 69998b5fad
Add ids parameter for pinecone from_texts / add_texts (#659)
Allow optionally specifying a list of ids for pinecone rather than
having them randomly generated.
This also permits editing the embedding/metadata of existing pinecone
entries, by id.
1 year ago
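
A hedged usage sketch of the new `ids` parameter, assuming the `Pinecone.from_texts` signature of this release; the index name and ids are placeholders:

```python
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import Pinecone

texts = ["first doc", "second doc"]
ids = ["doc-1", "doc-2"]  # stable ids, so these entries can be updated later by id

store = Pinecone.from_texts(
    texts,
    OpenAIEmbeddings(),
    index_name="my-index",  # placeholder
    ids=ids,
)
```
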
Harrison Chase 54d7f1c933
fix caching (#658) 1 year ago
Harrison Chase d0fdc6da11
Harrison/bing wrapper (#656)
Co-authored-by: Enrico Shippole <henryshippole@gmail.com>
1 year ago
iocuydi 207e319a70
Add search_kwargs option for VectorDBQAWithSourcesChain (#657)
Allows for passing additional vectorstore params like namespace, etc. to
VectorDBQAWithSourcesChain

Example:
`chain = VectorDBQAWithSourcesChain.from_llm(OpenAI(temperature=0),
vectorstore=store, search_kwargs={"namespace": namespace})`
1 year ago
Harrison Chase 052c361031
pinecone docstring (#654) 1 year ago
Harrison Chase 95720adff5
Add documentation for custom prompts for Agents (#631) (#640)
- Added a comment interpreting regex for `ZeroShotAgent`
- Added a note to the `Custom Agent` notebook

Co-authored-by: Sam Ching <samuel@duolingo.com>
1 year ago
Harrison Chase 6be5f4e4c4
Harrison/sql db chain (#641)
Co-authored-by: Bruno Bornsztein <bruno.bornsztein@gmail.com>
1 year ago
Harrison Chase 4d4cff0530
Harrison/cohere experimental (#638)
Co-authored-by: inyourhead <44607279+xettrisomeman@users.noreply.github.com>
1 year ago
Sasmitha Manathunga 5c97f70bf1
Fix CohereError: embed is not an available endpoint on this model (#637)
Running the Cohere embeddings example from the docs:

```python
from langchain.embeddings import CohereEmbeddings
embeddings = CohereEmbeddings(cohere_api_key= cohere_api_key)

text = "This is a test document."
query_result = embeddings.embed_query(text)
doc_result = embeddings.embed_documents([text])
```

I get the error:

```bash
CohereError(message=res['message'], http_status=response.status_code, headers=response.headers)      
cohere.error.CohereError: embed is not an available endpoint on this model
```

This is because the `model` string is set to `medium` which is not
currently available.

From the Cohere docs:

> Currently available models are small and large (default)
1 year ago
Francisco Ingham b929fd9f59
Exclude reference to 'example' in api prompt (#629)
Co-authored-by: lesscomfortable <pancho_ingham@hotmail.com>
1 year ago
Harrison Chase 3d43906572
Harrison/new api chain (#623)
Co-authored-by: Francisco Ingham <fpingham@gmail.com>
Co-authored-by: lesscomfortable <pancho_ingham@hotmail.com>
1 year ago
Harrison Chase 1c71fadfdc
more complex sql chain (#619)
add a more complex sql chain that first subsets the necessary tables
1 year ago
Harrison Chase 49b3d6c78c
Harrison/wiki update (#622)
Co-authored-by: Rubens Mau <rubensmau@gmail.com>
1 year ago
Harrison Chase 1ac3319e45
simplify parsing of the final answer (#621) 1 year ago
Harrison Chase 2a54e73fec
bump version to 0063 (#616) 1 year ago
Nicolas 91d7fd20ae
feat: add custom prompt for QAEvalChain chain (#610)
I originally had only modified `from_llm` to include the prompt, but
I realized that if the keys used in the custom prompt didn't match the
default prompt, it wouldn't work because of how `apply` works.

So I made some changes to the evaluate method: it checks whether the
prompt is the default and, if not, whether the input keys match the
prompt keys, updating the inputs appropriately.

Let me know if there is a better way to do this.

Also added the custom prompt to the QA eval notebook.
1 year ago
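
A hedged usage sketch of passing a custom prompt through `from_llm`; the input variable names are assumed to match the default eval prompt's keys:

```python
from langchain.evaluation.qa import QAEvalChain
from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate

template = (
    "Question: {query}\n"
    "Real answer: {answer}\n"
    "Student answer: {result}\n"
    "Grade (CORRECT or INCORRECT):"
)
prompt = PromptTemplate(input_variables=["query", "answer", "result"], template=template)

eval_chain = QAEvalChain.from_llm(llm=OpenAI(temperature=0), prompt=prompt)
```
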
Francisco Ingham 1787c473b8
Custom prompt option for llm_bash and api chains (#612)
Co-authored-by: lesscomfortable <pancho_ingham@hotmail.com>
1 year ago
Harrison Chase 67808bad0e
expose more serpapi parameters (#609) 1 year ago
Harrison Chase 9f9afbb6a8
add custom prompt for LLMMathChain and SQLDatabase chain (#605) 1 year ago
Smit Shah a87a2aacaa
[Minor Fix] Fix spacy TextSplitter init (#606) 1 year ago
babbldev b5eb91536a
Added filter argument to pinecone queries, fixes #600 (#601)
Added filter argument to similarity_search() and
similarity_search_with_score()

Co-authored-by: Sam Cartford (MBP) <cartford@hey.com>
1 year ago
Harrison Chase d574bf0a27
add documentation on how to load different chain types (#595) 1 year ago