Commit Graph

575 Commits

Author SHA1 Message Date
jeff
6a4f602156
docs: fix spelling typo (#934) 2023-02-08 11:13:35 -08:00
Ikko Eltociear Ashimine
6023d5be09
Update huggingface_hub.ipynb (#944)
HuggingFace -> Hugging Face
2023-02-08 11:05:28 -08:00
Harrison Chase
a306baacd1
bump version to 0080 (#941) 2023-02-08 07:41:25 -08:00
Harrison Chase
44ecec3896
Harrison/add roam loader (#939) 2023-02-08 00:35:33 -08:00
Ankush Gola
bc7e56e8df
Add asyncio support for LLM (OpenAI), Chain (LLMChain, LLMMathChain), and Agent (#841)
Supporting asyncio in langchain primitives lets users run them
concurrently and enables more seamless integration with
asyncio-supported frameworks (FastAPI, etc.)

Summary of changes:

**LLM**
* Add `agenerate` and `_agenerate`
* Implement in OpenAI by leveraging `client.Completions.acreate`

**Chain**
* Add `arun`, `acall`, `_acall`
* Implement them in `LLMChain` and `LLMMathChain` for now

**Agent**
* Refactor and leverage async chain and llm methods
* Add ability for `Tools` to contain async coroutines
* Implement async SerpAPI `arun`

Create demo notebook.

Open questions:
* Should all the async stuff go in separate classes? I've seen both
patterns (keeping the same class and having async and sync methods vs.
having class separation)
2023-02-07 21:21:57 -08:00
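A minimal usage sketch of the async entry points described above, assuming the `LLMChain.arun` coroutine added in this PR; the prompt, model settings, and topics are illustrative only.

```python
import asyncio

from langchain.chains import LLMChain
from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate

async def generate_all(topics):
    # Build one chain and fan out concurrent calls with asyncio.gather,
    # relying on the async `arun` entry point added in this commit.
    llm = OpenAI(temperature=0.9)
    prompt = PromptTemplate(
        input_variables=["topic"],
        template="Write a one-line joke about {topic}.",
    )
    chain = LLMChain(llm=llm, prompt=prompt)
    return await asyncio.gather(*(chain.arun(topic=t) for t in topics))

# asyncio.run(generate_all(["cats", "compilers", "coffee"]))
```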
Vincent Elster
afc7f1b892
Fix typos (#929)
accomplisehd -> accomplished
2023-02-07 14:39:45 -08:00
Harrison Chase
d43250bfa5
Harrison/ver0079 (#927) 2023-02-07 07:59:35 -08:00
Harrison Chase
bc53c928fc
Harrison/athropic (#921)
Co-authored-by: Mike Lambert <mlambert@gmail.com>
Co-authored-by: mrbean <sam@you.com>
Co-authored-by: mrbean <43734688+sam-h-bean@users.noreply.github.com>
Co-authored-by: Ivan Vendrov <ivendrov@gmail.com>
2023-02-06 22:29:25 -08:00
Harrison Chase
637c0d6508
Harrison/obsidian (#920) 2023-02-06 22:21:16 -08:00
Harrison Chase
1e56879d38
Harrison/save faiss (#916)
Co-authored-by: Shrey Joshi <shreyjoshi2004@gmail.com>
2023-02-06 21:44:50 -08:00
Ankush Gola
6bd1529cb7
add GoogleDriveLoader (#914)
only handles Google Docs files for now
2023-02-06 21:44:35 -08:00
Harrison Chase
2584663e44
remove unused buffer (#919) 2023-02-06 20:31:30 -08:00
Harrison Chase
cc20b9425e
add reqs (#918) 2023-02-06 20:30:03 -08:00
Harrison Chase
cea380174f
fix docs custom prompt template (#917) 2023-02-06 20:29:48 -08:00
Harrison Chase
87fad8fc00
analyze document (#731)
add analyze document chain, which does text splitting and then analysis
2023-02-06 20:02:19 -08:00
Harrison Chase
e2b834e427
Harrison/prompt template prefix (#888)
Co-authored-by: Gabriel Simmons <simmons.gabe@gmail.com>
2023-02-06 19:09:28 -08:00
Harrison Chase
f95cedc443
Harrison/sql rows (#915)
Co-authored-by: Jon Luo <20971593+jzluo@users.noreply.github.com>
2023-02-06 18:56:18 -08:00
Harrison Chase
ba5a2f06b9
Harrison/inference endpoint (#861)
Co-authored-by: Eno Reyes <enoreyes@gmail.com>
2023-02-06 18:14:25 -08:00
Harrison Chase
2ec25ddd4c
add unstructured examples (#913) 2023-02-06 18:13:46 -08:00
Kevin Huo
31b054f69d
Add pinecone integration test (#911)
Basic integration test for Pinecone
2023-02-06 18:13:35 -08:00
Harrison Chase
93a091cfb8
Optionally return shell output on incorrect command (#894) (#899)
This allows the LLM to correct its previous command by looking at the
error message output to the shell.

Additionally, this uses subprocess.run because that is now recommended
over subprocess.check_output:

https://docs.python.org/3/library/subprocess.html#using-the-subprocess-module

Co-authored-by: Amos Ng <me@amos.ng>
2023-02-06 12:46:16 -08:00
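For reference, a small sketch of the `subprocess.run` pattern the message points to, returning stderr on failure so the LLM can see why the command failed; the helper name is illustrative, not the code from the PR.

```python
import subprocess

def run_shell(command: str) -> str:
    """Run a shell command; return stdout, or the error output on failure."""
    result = subprocess.run(command, shell=True, capture_output=True, text=True)
    if result.returncode != 0:
        # Surface stderr so the calling LLM can inspect the error message
        # and propose a corrected command.
        return result.stderr
    return result.stdout

# print(run_shell("ls /no/such/directory"))
```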
James Briggs
3aa53b44dd
added i_end in batch extraction (#907)
Fix for issue #906 

Switches `[i : i + batch_size]` to `[i : i_end]` in the Pinecone
`from_texts` method
2023-02-06 12:45:56 -08:00
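The gist of the fix, as a standalone sketch (the real change lives inside Pinecone's `from_texts` batching loop): compute `i_end` once per batch so the text slice stays aligned with the ids and metadata built from the same bounds.

```python
def batches(items, batch_size=32):
    """Yield successive batches, clamping the last one with i_end."""
    for i in range(0, len(items), batch_size):
        i_end = min(i + batch_size, len(items))
        # Slicing with [i:i_end] keeps this batch consistent with any ids or
        # metadata computed from the same (i, i_end) bounds.
        yield items[i:i_end]
```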
Harrison Chase
82c080c6e6
bump version to 0078 (#908) 2023-02-06 00:32:44 -08:00
Harrison Chase
71e662e88d
update docs (#905) 2023-02-06 00:26:20 -08:00
Harrison Chase
53d56d7650
Harrison/unstructured support (#903) 2023-02-05 23:02:07 -08:00
Harrison Chase
2a68be3e8d
chat vector db chain (#902) 2023-02-05 21:38:47 -08:00
James Briggs
8217a2f26c
Update pinecone init details in docs (#898)
PR to fix outdated environment details in the docs; see issue #897.

I added code comments as pointers to where users can get API keys and
where they can find the relevant environment variable.
2023-02-05 15:21:56 -08:00
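As a rough illustration of the kind of pointers added (the environment-variable names here are my own, not necessarily those used in the docs), the Pinecone client of that era was initialized with an API key and an environment string, both shown in the Pinecone console:

```python
import os
import pinecone

# API key: Pinecone console -> "API Keys"
# Environment: listed next to the key in the console, e.g. "us-west1-gcp"
pinecone.init(
    api_key=os.environ["PINECONE_API_KEY"],
    environment=os.environ["PINECONE_ENVIRONMENT"],
)
```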
Bagatur
7658263bfb
Check type of LLM.generate prompts arg (#886)
I was passing the prompt in directly as a string and getting nonsense
outputs, and had to inspect the source code to realize that the first
argument should be a list. It would be nice to have an explicit error or
warning here, since this seems like an easy mistake to make.
2023-02-04 22:49:17 -08:00
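A sketch of the kind of guard this adds (the exact check and message in `LLM.generate` may differ): fail fast with a clear error when a bare string is passed where a list of prompts is expected.

```python
from typing import List

def check_prompts_arg(prompts: List[str]) -> None:
    """Raise early if a caller passes a single string instead of a list."""
    if not isinstance(prompts, list):
        raise ValueError(
            "Argument 'prompts' is expected to be of type List[str], "
            f"received argument of type {type(prompts)}."
        )

# check_prompts_arg("tell me a joke")    # raises ValueError
# check_prompts_arg(["tell me a joke"])  # ok
```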
Samantha Whitmore
32b11101d3
Get elements of ActionInput on newlines (#889)
The re.DOTALL flag in Python's re (regular expression) module makes the
. (dot) metacharacter match any character, including newline characters.
Without re.DOTALL, the . metacharacter matches any character except a
newline, which is why a multi-line Action Input was previously truncated
at the first newline.
2023-02-04 20:42:25 -08:00
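A quick illustration of the difference, using Python's re module directly; the pattern and text are illustrative, not the parser from the PR.

```python
import re

text = "Action Input:\nline one\nline two"
pattern = r"Action Input:\n(.*)"

# Without re.DOTALL, '.' stops at the first newline.
print(re.search(pattern, text).group(1))              # line one

# With re.DOTALL, '.' also matches newlines, so both lines are captured.
print(re.search(pattern, text, re.DOTALL).group(1))   # line one\nline two
```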
Harrison Chase
1614c5f5fd
fix flaky tests (#892) 2023-02-04 20:41:33 -08:00
Harrison Chase
a2b699dcd2
prompt template from string (#884) 2023-02-04 17:04:58 -08:00
Alex
7cc44b3bdb
Add to gallery (#882) 2023-02-04 09:45:20 -08:00
Harrison Chase
0b9f086d36
Harrison/docs splitter (#879) 2023-02-03 15:09:13 -08:00
Harrison Chase
bcfbc7a818
version 0077 (#878) 2023-02-03 14:49:52 -08:00
Ryan Walker
1dd0733515
Fix small typo in getting started docs (#876)
Just noticed this little typo while reading the docs, thought I'd open a
PR!
2023-02-03 14:22:12 -08:00
Zach Schillaci
4c79100b15
Correct prompt typo + update example for SQLDatabaseChain (#868)
See https://github.com/hwchase17/langchain/issues/821
2023-02-03 08:34:41 -08:00
Harrison Chase
777aaff841
fix routing to tiktoken encoder (#866) 2023-02-02 22:08:14 -08:00
Harrison Chase
e9ef08862d
validate template (#865) 2023-02-02 22:08:01 -08:00
Harrison Chase
364b771743
sql return direct (#864) 2023-02-02 22:07:41 -08:00
Harrison Chase
483441d305
pass kwargs through to loading (#863) 2023-02-02 22:07:26 -08:00
Harrison Chase
8df6b68093
fix length based example selector (#862) 2023-02-02 22:06:56 -08:00
Harrison Chase
3f48eed5bd
Harrison/milvus (#856)
Signed-off-by: Filip Haltmayer <filip.haltmayer@zilliz.com>
Signed-off-by: Frank Liu <frank.liu@zilliz.com>
Co-authored-by: Filip Haltmayer <81822489+filip-halt@users.noreply.github.com>
Co-authored-by: Frank Liu <frank@frankzliu.com>
2023-02-02 22:05:47 -08:00
Ankush Gola
933441cc52
Add retry to OpenAI llm (#849)
Add the ability to retry when certain exceptions are raised by
`openai.Completions.create`.

Test plan: ran all OpenAI integration tests.
2023-02-02 19:56:26 -08:00
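A hedged sketch of the retry idea (the exception set and backoff used in the PR may differ): retry the completion call with exponential backoff when a transient error such as a rate limit or timeout is raised.

```python
import random
import time

def retry_with_backoff(func, *, retries=6, base_delay=1.0, errors=(Exception,)):
    """Call func(); on a listed error, sleep with exponential backoff and retry."""
    for attempt in range(retries):
        try:
            return func()
        except errors:
            if attempt == retries - 1:
                raise
            # Exponential backoff plus jitter to avoid hammering the API.
            time.sleep(base_delay * (2 ** attempt) + random.random())

# Usage (illustrative): wrap the OpenAI completion call and pass the
# transient exception types to retry on, e.g. rate-limit or timeout errors.
# retry_with_backoff(lambda: create_completion(prompt), errors=(RateLimitError,))
```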
kahkeng
4a8f5cdf4b
Add alternative token-based text splitter (#816)
This does not involve a separator, and will naively chunk input text at
the appropriate boundaries in token space.

This is helpful when we have strict token length limits, need to follow
the specified chunk size exactly, and can't use aggressive separators
like spaces to guarantee the absence of long strings.

CharacterTextSplitter will let these strings through without splitting
them, which could cause overflow errors downstream.

Splitting at arbitrary token boundaries is not ideal, but this is
hopefully mitigated by having a decent overlap quantity. It also produces
chunks with exactly the desired number of tokens, instead of sometimes
overcounting when shorter strings are concatenated.

Potentially also helps with #528.
2023-02-02 19:55:13 -08:00
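A minimal sketch of the approach, assuming the tiktoken tokenizer (the PR's actual splitter class, tokenizer, and defaults may differ): encode the text, slice the token list into fixed-size windows with overlap, and decode each window back to a string.

```python
import tiktoken

def split_by_tokens(text: str, chunk_size: int = 256, chunk_overlap: int = 32):
    """Naively chunk text on token boundaries with a fixed overlap."""
    enc = tiktoken.get_encoding("gpt2")
    tokens = enc.encode(text)
    chunks, start = [], 0
    while start < len(tokens):
        end = min(start + chunk_size, len(tokens))
        # Each chunk holds at most `chunk_size` tokens, so strict token
        # limits downstream are respected exactly.
        chunks.append(enc.decode(tokens[start:end]))
        if end == len(tokens):
            break
        start = end - chunk_overlap
    return chunks
```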
Harrison Chase
523ad2e6bd
vercel deployments (#850) 2023-02-02 19:54:09 -08:00
Harrison Chase
fc0cfd7d1f
docs (#848) 2023-02-02 11:35:36 -08:00
Harrison Chase
4d32441b86
bump version to 0076 (#847) 2023-02-02 10:05:39 -08:00
Harrison Chase
23d5f64bda
Harrison/ngram example (#846)
Co-authored-by: Sean Spriggens <ssprigge@syr.edu>
2023-02-02 09:44:42 -08:00
Harrison Chase
0de55048b7
return code for pal (#844) 2023-02-02 08:47:20 -08:00
Harrison Chase
d564308e0f
rfc: instruct embeddings (#811)
Co-authored-by: seanaedmiston <seane999@gmail.com>
2023-02-02 08:44:02 -08:00