Commit Graph

263 Commits

Author SHA1 Message Date
Harrison Chase
f8b605293f
Harrison/improve memory (#432)
add AI prefix

add new type of memory

Co-authored-by: Jason <chisanch@usc.edu>
2022-12-27 08:23:51 -05:00
Harrison Chase
150b67de10
Harrison/weaviate improvements (#433)
Co-authored-by: Connor Shorten <connorshorten300@gmail.com>
2022-12-27 08:23:13 -05:00
Harrison Chase
b7566b5ec3
Harrison/return intermediate steps (#428) 2022-12-27 08:22:48 -05:00
Harrison Chase
7fc4b4b3e1
Harrison/ver 0048 (#429) 2022-12-26 11:36:49 -05:00
Harrison Chase
b50a56830d
Harrison/evaluation notebook (#426) 2022-12-26 09:16:37 -05:00
Harrison Chase
97f4000d3a
fix react docstore (#427) 2022-12-26 08:46:38 -05:00
Ikko Ashimine
9ae1d75318
Update integrations.md (#424)
HuggingFace -> Hugging Face
2022-12-25 23:03:05 -05:00
Harrison Chase
f9562d7f1c
version 0047 (#423) 2022-12-25 11:17:41 -05:00
Harrison Chase
ee3b8e89b3
better parsing of agent output (#418) 2022-12-25 09:53:36 -05:00
Harrison Chase
0d7aa1ee99
Harrison/docs to index (#419)
Add method for going directly from documents to VectorStores

Update notebook to showcase this functionality
2022-12-25 09:53:07 -05:00
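For context, the pattern this commit enables looks roughly like the sketch below; the concrete embedding and vector store classes (OpenAIEmbeddings, FAISS) are illustrative choices, not taken from the PR.

```python
# Sketch: build a VectorStore directly from Document objects in one call.
# Assumes an OpenAI API key is configured and FAISS is installed.
from langchain.docstore.document import Document
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import FAISS

docs = [Document(page_content="LangChain composes LLMs with other tools.")]
vectorstore = FAISS.from_documents(docs, OpenAIEmbeddings())
print(vectorstore.similarity_search("What does LangChain do?", k=1))
```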
Harrison Chase
48ae981d69
Harrison/multi input tools (#421)
add documentation on how to use tools that require multiple inputs
2022-12-25 09:52:48 -05:00
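The documentation referenced here covers tools that need more than one value; one common pattern (a sketch under that assumption, not the notebook's exact code) is a single-string tool that parses its own input.

```python
# Sketch: a "multi input" tool that accepts one string and splits it itself.
from langchain.agents import Tool

def multiply(text: str) -> str:
    a, b = (int(part.strip()) for part in text.split(","))
    return str(a * b)

multiplier_tool = Tool(
    name="Multiplier",
    func=multiply,
    description="Multiplies two integers passed as a comma-separated string, e.g. '3,4'.",
)
print(multiplier_tool.func("3,4"))
```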
Andrew Wang
4416dc9d5d
Update prompt_serialization.ipynb (#417)
Fix typo.
Originally "support methods are..."
Now "support methods *that* are.."
2022-12-24 17:53:11 -05:00
Harrison Chase
22dd743eba
Harrison/version 0046 (#416) 2022-12-24 10:46:23 -05:00
Harrison Chase
01d06c1f9f
check memory variables (#411)
can have multiple input keys, if some come from memory
2022-12-24 08:36:06 -05:00
Harrison Chase
20959d8c36
check memory variables (#411)
can have multiple input keys, if some come from memory
2022-12-24 08:35:46 -05:00
altryne
f990395211
Readme typos (#409)
I was honored by the Twitter mention, so I used PyCharm to try and... help
the docs even a little bit.
Mostly typos and spelling corrections.

PyCharm really complains about "really good" being used all the time and
recommended alternative wordings, haha.
2022-12-23 13:13:07 -05:00
Harrison Chase
2ad285aab2
bump version to 0045 (#408) 2022-12-23 11:19:30 -05:00
Shreya Rajpal
f40b3ce347
Updated VectorDBQA docs to updated argument name (#405) 2022-12-23 10:52:20 -05:00
Dheeraj Agrawal
ea3da9a469
Fix documentation error langchain explanation of combine_docs.md (#404)
This PR is regarding the issue here -
https://github.com/hwchase17/langchain/issues/403
2022-12-23 08:54:26 -05:00
Harrison Chase
77e1743341
update example (#402) 2022-12-22 17:09:47 -05:00
Keiji Kanazawa
5528265142
Add macOS .DS_Store to .gitignore (#401)
These are macOS-specific files left around in directories (to save the
user's display settings).
2022-12-22 13:05:57 -05:00
Samantha Whitmore
6bc8ae63ef
Add Redis cache implementation (#397)
I'm using a hash function for the key just to make sure its length
doesn't get out of hand; otherwise the implementation is quite similar.
2022-12-22 12:31:27 -05:00
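Caches plug in through the module-level `langchain.llm_cache`; a minimal sketch of wiring up the Redis backend, assuming a local Redis server (the exact constructor argument is an assumption, not taken from the PR).

```python
# Sketch: route LLM calls through the Redis-backed cache.
# Assumes a Redis server on localhost; the constructor argument is an assumption.
import langchain
import redis
from langchain.cache import RedisCache

langchain.llm_cache = RedisCache(redis.Redis(host="localhost", port=6379))
```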
Harrison Chase
ff03242fa0
Harrison/ver 044 (#400) 2022-12-22 11:20:18 -05:00
mrbean
136f759492
Mrbean/support timeout (#398)
Add support for passing in a request timeout to the API
2022-12-21 23:39:07 -05:00
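A short sketch of what this looks like from the caller's side, assuming the option is exposed as `request_timeout` on the OpenAI wrapper.

```python
# Sketch: cap how long a single OpenAI API request may take (seconds).
from langchain.llms import OpenAI

llm = OpenAI(request_timeout=10)  # parameter name assumed from the PR description
print(llm("Say hello in one word."))
```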
Harrison Chase
6b60c509ac
(WIP) add HyDE (#393)
Co-authored-by: cameronccohen <cameron.c.cohen@gmail.com>
Co-authored-by: Cameron Cohen <cameron.cohen@quantco.com>
2022-12-21 20:46:41 -05:00
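HyDE (Hypothetical Document Embeddings) asks an LLM for a hypothetical answer and embeds that text instead of the raw query; a hedged sketch of the embedder as it later appeared in the library.

```python
# Sketch: embed a generated hypothetical answer rather than the query itself.
from langchain.chains import HypotheticalDocumentEmbedder
from langchain.embeddings import OpenAIEmbeddings
from langchain.llms import OpenAI

embedder = HypotheticalDocumentEmbedder.from_llm(
    OpenAI(), OpenAIEmbeddings(), "web_search"  # "web_search" selects a built-in prompt
)
vector = embedder.embed_query("What is the tallest mountain on Earth?")
```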
Keiji Kanazawa
543db9c2df
Add Azure OpenAI LLM (#395)
Hi! This PR adds support for the Azure OpenAI service to LangChain.

I've tried to follow the contributing guidelines.

Co-authored-by: Keiji Kanazawa <{ID}+{username}@users.noreply.github.com>
2022-12-21 20:45:37 -05:00
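A sketch of pointing LangChain at an Azure deployment, assuming the Azure OpenAI environment variables (API base, key, version) are already set; `my-deployment` is a hypothetical name.

```python
# Sketch: use an Azure OpenAI deployment instead of api.openai.com.
# Assumes OPENAI_API_KEY, OPENAI_API_BASE, and OPENAI_API_VERSION are set for Azure.
from langchain.llms import AzureOpenAI

llm = AzureOpenAI(deployment_name="my-deployment")  # hypothetical deployment name
print(llm("Tell me a joke"))
```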
Harrison Chase
bb76440bfa
bump version to 0.0.43 (#394) 2022-12-20 22:28:29 -05:00
Harrison Chase
c104d507bf
Harrison/improve data augmented generation docs (#390)
Co-authored-by: cameronccohen <cameron.c.cohen@gmail.com>
Co-authored-by: Cameron Cohen <cameron.cohen@quantco.com>
2022-12-20 22:24:08 -05:00
Harrison Chase
ad4414b59f
update docs (#389) 2022-12-20 09:32:10 -05:00
Harrison Chase
c8b4b54479
bump version to 0.0.42 (#388) 2022-12-19 20:59:34 -05:00
Harrison Chase
47ba34c83a
split up and improve agent docs (#387) 2022-12-19 20:32:45 -05:00
Abi Raja
467aa0cee0
Fix typo in docs (#386) 2022-12-19 17:39:44 -05:00
Harrison Chase
6be5747466
RFC: add cache override to LLM class (#379) 2022-12-19 17:36:14 -05:00
Harrison Chase
46c428234f
MMR example selector (#377)
implement max marginal relevance example selector
2022-12-19 17:09:27 -05:00
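Max marginal relevance picks examples that are similar to the input but diverse from one another; a hedged sketch of the selector in use (FAISS and OpenAIEmbeddings are illustrative choices).

```python
# Sketch: choose few-shot examples by max marginal relevance (relevance + diversity).
from langchain.embeddings import OpenAIEmbeddings
from langchain.prompts.example_selector import MaxMarginalRelevanceExampleSelector
from langchain.vectorstores import FAISS

examples = [
    {"input": "happy", "output": "sad"},
    {"input": "tall", "output": "short"},
    {"input": "sunny", "output": "gloomy"},
]
selector = MaxMarginalRelevanceExampleSelector.from_examples(
    examples, OpenAIEmbeddings(), FAISS, k=2
)
print(selector.select_examples({"input": "cheerful"}))
```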
Harrison Chase
ffed5e0056
Harrison/jinja formatter (#385)
Co-authored-by: Benjamin <BenderV@users.noreply.github.com>
2022-12-19 16:40:39 -05:00
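A small sketch of the Jinja2 option on prompt templates, assuming `jinja2` is installed.

```python
# Sketch: use Jinja2 syntax instead of f-string formatting in a prompt template.
from langchain.prompts import PromptTemplate

prompt = PromptTemplate(
    input_variables=["adjective", "content"],
    template="Tell me a {{ adjective }} joke about {{ content }}.",
    template_format="jinja2",
)
print(prompt.format(adjective="funny", content="chickens"))
```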
mrbean
fc66a32c6f
fix docstring (#383)
![Screenshot 2022-12-19 at 11 06 48 AM](https://user-images.githubusercontent.com/43734688/208468970-5cb9bafb-f535-486e-b41f-312a2f9ffffb.png)
2022-12-19 11:10:17 -05:00
Harrison Chase
a01d3e6955
fix agent memory docs (#382) 2022-12-19 09:15:32 -05:00
Harrison Chase
766b84a9d9
upgrade version to 0041 (#378) 2022-12-18 22:33:03 -05:00
Harrison Chase
cf98f219f9
Harrison/tools exp (#372) 2022-12-18 21:51:23 -05:00
Harrison Chase
e7b625fe03
fix text splitter (#375) 2022-12-18 20:21:43 -05:00
Harrison Chase
3474f39e21
Harrison/improve cache (#368)
make it so everything goes through generate, which removes the need for
two types of caches
2022-12-18 16:22:42 -05:00
Ankush Gola
8d0869c6d3
change run to use args and kwargs (#367)
Before, `run` could not be called with multiple arguments. This
expands that functionality.
2022-12-18 15:54:56 -05:00
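In practice a chain can now be invoked with keyword arguments when it has several inputs; a minimal sketch assuming an LLMChain with two input variables.

```python
# Sketch: call `run` with keyword arguments on a chain that takes multiple inputs.
from langchain.chains import LLMChain
from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate

prompt = PromptTemplate(
    input_variables=["product", "audience"],
    template="Write a tagline for {product} aimed at {audience}.",
)
chain = LLMChain(llm=OpenAI(), prompt=prompt)
print(chain.run(product="solar panels", audience="homeowners"))
```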
Harrison Chase
a7084ad6e4
Harrison/version 0040 (#366) 2022-12-17 07:53:22 -08:00
mrbean
50257fce59
Support Streaming Tokens from OpenAI (#364)
https://github.com/hwchase17/langchain/issues/363

@hwchase17 how much does this make you want to cry?
2022-12-17 07:02:58 -08:00
mrbean
fe6695b9e7
Add HuggingFacePipeline LLM (#353)
https://github.com/hwchase17/langchain/issues/354

Add support for running your own HF pipeline locally. This lets you be a
lot more dynamic about which HF features and models you support, since you
aren't beholden to what is hosted in the HF Hub. You could also use HF
Optimum to quantize your models and get pretty fast inference even running
on a laptop.
2022-12-17 07:00:04 -08:00
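A hedged sketch of the local pipeline wrapper, assuming `transformers` (and a small model such as gpt2) is available on the machine.

```python
# Sketch: run a local transformers pipeline as a LangChain LLM, no hosted inference.
from langchain.llms import HuggingFacePipeline

llm = HuggingFacePipeline.from_model_id(model_id="gpt2", task="text-generation")
print(llm("Once upon a time"))
```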
Harrison Chase
2eef76ed3f
fix documentation (#365) 2022-12-16 16:48:54 -08:00
Benjamin
85c1bd2cd0
add sqlalchemy generic cache (#361)
Created a generic SQLAlchemyCache class to plug in any database supported
by SQLAlchemy (I am using Postgres).
I also based the SQLiteCache class on this SQLAlchemyCache class.

As a side note, I'm questioning the need for two distinct classes,
LLMCache and FullLLMCache. Shouldn't we merge them?
2022-12-16 16:47:23 -08:00
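A sketch of the generic cache with a SQLAlchemy engine; SQLite is used here for brevity, but a Postgres URL works the same way.

```python
# Sketch: generic SQLAlchemy-backed LLM cache, usable with any supported database.
import langchain
from sqlalchemy import create_engine
from langchain.cache import SQLAlchemyCache

engine = create_engine("sqlite:///llm_cache.db")  # swap in a Postgres URL as needed
langchain.llm_cache = SQLAlchemyCache(engine)
```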
Harrison Chase
809a9f485f
Harrison/new version (#362) 2022-12-16 07:42:31 -08:00
Harrison Chase
750edfb440
add optional collapse prompt (#358) 2022-12-16 06:25:29 -08:00
Harrison Chase
2dd895d98c
add openai tokenizer (#355) 2022-12-15 22:35:42 -08:00