Commit Graph

680 Commits (5ba1c7b6018837a2cbf61ae8e7f45c8f1e3e8f72)

Author SHA1 Message Date
Harrison Chase 01fa2d8117
Harrison/youtube fixes (#955)
Co-authored-by: Ji <jizhang.work@gmail.com>
Co-authored-by: Harrison Chase <harrisonchase@Harrisons-MBP.attlocal.net>
1 year ago
zanderchase 8e126bc9bd
adding webpage loading logic (#942) 1 year ago
Harrison Chase c71027e725
add docs for steamship deployment (#949)
Co-authored-by: Harrison Chase <harrisonchase@Harrisons-MBP.attlocal.net>
1 year ago
Usama Navid e85c53ce68
Update readthedocs.py (#943)
Sometimes the docs may be empty. For example, `text =
soup.find_all("main", {"id": "main-content"})` returned an empty list. To
handle these edge cases, the output of the clean function needs to be
checked for emptiness before it is used.
1 year ago
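A minimal sketch of the emptiness check described in the entry above; the `_clean` helper and loader shape are illustrative assumptions, not the actual readthedocs.py code.

```python
# Minimal sketch, assuming a hypothetical _clean helper; not the actual loader.
from bs4 import BeautifulSoup

def _clean(soup: BeautifulSoup) -> str:
    # Some pages have no <main id="main-content"> element, so find_all can return [].
    main = soup.find_all("main", {"id": "main-content"})
    return main[0].get_text() if main else ""

def load_page(html: str) -> list:
    text = _clean(BeautifulSoup(html, "html.parser"))
    # Skip empty results instead of emitting empty documents downstream.
    return [text] if text.strip() else []
```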
Harrison Chase 3e1901e1aa
gutenberg books (#946)
Co-authored-by: zanderchase <zander@unfold.ag>
Co-authored-by: Harrison Chase <harrisonchase@Harrisons-MBP.attlocal.net>
1 year ago
jeff 6a4f602156
docs: fix spelling typo (#934) 1 year ago
Ikko Eltociear Ashimine 6023d5be09
Update huggingface_hub.ipynb (#944)
HuggingFace -> Hugging Face
1 year ago
Harrison Chase a306baacd1
bump version to 0080 (#941) 1 year ago
Harrison Chase 44ecec3896
Harrison/add roam loader (#939) 1 year ago
Ankush Gola bc7e56e8df
Add asyncio support for LLM (OpenAI), Chain (LLMChain, LLMMathChain), and Agent (#841)
Supporting asyncio in langchain primitives allows users to run them
concurrently and enables more seamless integration with asyncio-based
frameworks (FastAPI, etc.)

Summary of changes:

**LLM**
* Add `agenerate` and `_agenerate`
* Implement in OpenAI by leveraging `client.Completions.acreate`

**Chain**
* Add `arun`, `acall`, `_acall`
* Implement them in `LLMChain` and `LLMMathChain` for now

**Agent**
* Refactor and leverage async chain and llm methods
* Add ability for `Tools` to contain async coroutine
* Implement async SerpAPI `arun`

Create demo notebook.

Open questions:
* Should all the async functionality go in separate classes? I've seen both
patterns (keeping sync and async methods on the same class vs. splitting
them into separate classes)
1 year ago
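A usage sketch for the async methods listed above, based on the names in the commit message (`arun`, `agenerate`); exact imports and signatures in that version of langchain may differ.

```python
import asyncio

from langchain.chains import LLMChain
from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate

async def main() -> None:
    llm = OpenAI(temperature=0)
    prompt = PromptTemplate(
        input_variables=["topic"], template="Tell me a joke about {topic}."
    )
    chain = LLMChain(llm=llm, prompt=prompt)
    # arun lets several chain invocations run concurrently instead of serially.
    jokes = await asyncio.gather(chain.arun(topic="cats"), chain.arun(topic="dogs"))
    print(jokes)

asyncio.run(main())
```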
Vincent Elster afc7f1b892
Fix typos (#929)
accomplisehd -> accomplished
1 year ago
Harrison Chase d43250bfa5
Harrison/ver0079 (#927) 1 year ago
Harrison Chase bc53c928fc
Harrison/athropic (#921)
Co-authored-by: Mike Lambert <mlambert@gmail.com>
Co-authored-by: mrbean <sam@you.com>
Co-authored-by: mrbean <43734688+sam-h-bean@users.noreply.github.com>
Co-authored-by: Ivan Vendrov <ivendrov@gmail.com>
1 year ago
Harrison Chase 637c0d6508
Harrison/obsidian (#920) 1 year ago
Harrison Chase 1e56879d38
Harrison/save faiss (#916)
Co-authored-by: Shrey Joshi <shreyjoshi2004@gmail.com>
1 year ago
Ankush Gola 6bd1529cb7
add GoogleDriveLoader (#914)
only handles Google Docs files for now
1 year ago
Harrison Chase 2584663e44
remove unused buffer (#919) 1 year ago
Harrison Chase cc20b9425e
add reqs (#918) 1 year ago
Harrison Chase cea380174f
fix docs custom prompt template (#917) 1 year ago
Harrison Chase 87fad8fc00
analyze document (#731)
add analyze document chain, which does text splitting and then analysis
1 year ago
Harrison Chase e2b834e427
Harrison/prompt template prefix (#888)
Co-authored-by: Gabriel Simmons <simmons.gabe@gmail.com>
1 year ago
Harrison Chase f95cedc443
Harrison/sql rows (#915)
Co-authored-by: Jon Luo <20971593+jzluo@users.noreply.github.com>
1 year ago
Harrison Chase ba5a2f06b9
Harrison/inference endpoint (#861)
Co-authored-by: Eno Reyes <enoreyes@gmail.com>
1 year ago
Harrison Chase 2ec25ddd4c
add unstructured examples (#913) 1 year ago
Kevin Huo 31b054f69d
Add pinecone integration test (#911)
Basic integration test for pinecone
1 year ago
Harrison Chase 93a091cfb8
Optionally return shell output on incorrect command (#894) (#899)
This allows the LLM to correct its previous command by looking at the
error message output to the shell.

Additionally, this uses subprocess.run because that is now recommended
over subprocess.check_output:

https://docs.python.org/3/library/subprocess.html#using-the-subprocess-module

Co-authored-by: Amos Ng <me@amos.ng>
1 year ago
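A sketch of the pattern described above: run the command with `subprocess.run` and hand the error output back to the caller so the LLM can correct itself. The function name and flag are illustrative, not the exact implementation.

```python
import subprocess

def run_command(command: str, return_err_output: bool = True) -> str:
    try:
        completed = subprocess.run(
            command,
            shell=True,
            check=True,
            stdout=subprocess.PIPE,
            stderr=subprocess.STDOUT,
        )
        return completed.stdout.decode()
    except subprocess.CalledProcessError as error:
        if return_err_output:
            # Surface the shell's error text so the LLM can see what went wrong.
            return error.stdout.decode()
        return str(error)
```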
James Briggs 3aa53b44dd
added i_end in batch extraction (#907)
Fix for issue #906 

Switches `[i : i + batch_size]` to `[i : i_end]` in Pinecone
`from_texts` method
1 year ago
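A toy version of the batching loop the fix refers to, with illustrative variable names: `i_end` is clamped to the end of the list, so the same bound is used for every slice, including the final partial batch.

```python
texts = [f"doc {n}" for n in range(10)]
batch_size = 4

for i in range(0, len(texts), batch_size):
    i_end = min(i + batch_size, len(texts))
    batch = texts[i:i_end]  # previously one slice still used texts[i : i + batch_size]
    print(i, i_end, batch)
```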
Harrison Chase 82c080c6e6
bump version to 0078 (#908) 1 year ago
Harrison Chase 71e662e88d
update docs (#905) 1 year ago
Harrison Chase 53d56d7650
Harrison/unstructured support (#903) 1 year ago
Harrison Chase 2a68be3e8d
chat vector db chain (#902) 1 year ago
James Briggs 8217a2f26c
Update pinecone init details in docs (#898)
PR to fix outdated environment details in the docs; see issue #897.

I added code comments pointing to where users can get API keys and where
they can find the relevant environment variable.
1 year ago
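A sketch of the initialization the updated docs point at, assuming the classic `pinecone-client` API; the environment value is a placeholder read from the Pinecone console, not a real project setting.

```python
import os

import pinecone

pinecone.init(
    api_key=os.environ["PINECONE_API_KEY"],  # copied from the "API Keys" page in the console
    environment=os.environ["PINECONE_ENV"],  # e.g. "us-west1-gcp", shown next to the key
)
```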
Bagatur 7658263bfb
Check type of LLM.generate `prompts` arg (#886)
I was passing the prompt in directly as a string and getting nonsense
outputs, and had to inspect the source code to realize that the first
argument should be a list. It would be nice if there were an explicit error
or warning, since this seems like an easy mistake to make.
1 year ago
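A usage sketch for the check described above: `generate` expects a list of prompt strings, not a bare string. The call shape follows the commit message; the surrounding code is illustrative.

```python
from langchain.llms import OpenAI

llm = OpenAI()
result = llm.generate(["Tell me a joke."])  # correct: a list of prompts
# llm.generate("Tell me a joke.")  # now rejected by the type check instead of silently producing nonsense
```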
Samantha Whitmore 32b11101d3
Get elements of ActionInput on newlines (#889)
The re.DOTALL flag in Python's re (regular expression) module makes the
. (dot) metacharacter match newline characters in addition to every other
character. Without re.DOTALL, the . metacharacter matches any character
except a newline.
1 year ago
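A small demonstration of the re.DOTALL behavior described above; the pattern is illustrative, not the actual agent-output regex.

```python
import re

text = "Action Input: line one\nline two"
default = re.search(r"Action Input: (.*)", text)
dotall = re.search(r"Action Input: (.*)", text, re.DOTALL)

print(default.group(1))  # 'line one'            (. stops at the newline)
print(dotall.group(1))   # 'line one\nline two'  (. also matches the newline)
```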
Harrison Chase 1614c5f5fd
fix flaky tests (#892) 1 year ago
Harrison Chase a2b699dcd2
prompt template from string (#884) 1 year ago
Alex 7cc44b3bdb
Add to gallery (#882) 1 year ago
Harrison Chase 0b9f086d36
Harrison/docs splitter (#879) 1 year ago
Harrison Chase bcfbc7a818
version 0077 (#878) 1 year ago
Ryan Walker 1dd0733515
Fix small typo in getting started docs (#876)
Just noticed this little typo while reading the docs, so I thought I'd open
a PR!
1 year ago
Zach Schillaci 4c79100b15
Correct prompt typo + update example for SQLDatabaseChain (#868)
See https://github.com/hwchase17/langchain/issues/821
1 year ago
Harrison Chase 777aaff841
fix routing to tiktoken encoder (#866) 1 year ago
Harrison Chase e9ef08862d
validate template (#865) 1 year ago
Harrison Chase 364b771743
sql return direct (#864) 1 year ago
Harrison Chase 483441d305
pass kwargs through to loading (#863) 1 year ago
Harrison Chase 8df6b68093
fix length based example selector (#862) 1 year ago
Harrison Chase 3f48eed5bd
Harrison/milvus (#856)
Signed-off-by: Filip Haltmayer <filip.haltmayer@zilliz.com>
Signed-off-by: Frank Liu <frank.liu@zilliz.com>
Co-authored-by: Filip Haltmayer <81822489+filip-halt@users.noreply.github.com>
Co-authored-by: Frank Liu <frank@frankzliu.com>
1 year ago
Ankush Gola 933441cc52
Add retry to OpenAI llm (#849)
Add the ability to retry when certain exceptions are raised by
`openai.Completions.create`.

Test plan: ran all OpenAI integration tests.
1 year ago
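A sketch of the retry behavior described above using tenacity, assuming the pre-1.0 `openai` package where these exception classes live under `openai.error`; the wrapper name, attempt count, and exception list are illustrative, not the exact implementation in the PR.

```python
import openai
from tenacity import retry, retry_if_exception_type, stop_after_attempt, wait_exponential

@retry(
    reraise=True,
    stop=stop_after_attempt(6),
    wait=wait_exponential(multiplier=1, min=1, max=60),
    retry=retry_if_exception_type(
        (openai.error.RateLimitError, openai.error.Timeout, openai.error.APIConnectionError)
    ),
)
def completion_with_retry(**kwargs):
    # Only the exception types listed above trigger a retry; everything else propagates.
    return openai.Completion.create(**kwargs)
```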
kahkeng 4a8f5cdf4b
Add alternative token-based text splitter (#816)
This does not involve a separator, and will naively chunk the input text at
the appropriate boundaries in token space.

This is helpful when we have strict token length limits and need to follow
the specified chunk size exactly, and we can't use aggressive separators
like spaces to guarantee the absence of long strings.

CharacterTextSplitter will let these strings through without splitting
them, which could cause overflow errors downstream.

Splitting at arbitrary token boundaries is not ideal, but that is hopefully
mitigated by having a decent overlap quantity. This also results in chunks
with exactly the desired number of tokens, instead of sometimes overcounting
when shorter strings are concatenated.

Potentially also helps with #528.
1 year ago
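A sketch of the token-space splitting idea described above, using tiktoken to count tokens; the function name and defaults are illustrative, not the class added in the PR.

```python
import tiktoken

def split_by_tokens(text: str, chunk_size: int = 100, chunk_overlap: int = 20) -> list:
    enc = tiktoken.get_encoding("gpt2")
    tokens = enc.encode(text)
    chunks = []
    start = 0
    while start < len(tokens):
        end = min(start + chunk_size, len(tokens))
        # Every chunk except possibly the last has exactly chunk_size tokens.
        chunks.append(enc.decode(tokens[start:end]))
        if end == len(tokens):
            break
        start = end - chunk_overlap  # overlap softens cuts made mid-word or mid-sentence
    return chunks
```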
Harrison Chase 523ad2e6bd
vercel deployments (#850) 1 year ago