Harrison Chase
2ad285aab2
bump version to 0045 ( #408 )
2022-12-23 11:19:30 -05:00
Shreya Rajpal
f40b3ce347
Updated VectorDBQA docs to use the updated argument name ( #405 )
2022-12-23 10:52:20 -05:00
Dheeraj Agrawal
ea3da9a469
Fix documentation error in langchain explanation of combine_docs.md ( #404 )
...
This PR addresses the issue here:
https://github.com/hwchase17/langchain/issues/403
2022-12-23 08:54:26 -05:00
Harrison Chase
77e1743341
update example ( #402 )
2022-12-22 17:09:47 -05:00
Keiji Kanazawa
5528265142
Add macOS .DS_Store to .gitignore ( #401 )
...
These are macOS-specific files left around in directories (they save the
user's display settings)
2022-12-22 13:05:57 -05:00
Samantha Whitmore
6bc8ae63ef
Add Redis cache implementation ( #397 )
...
I'm using a hash function for the key just to make sure its length
doesn't get out of hand; otherwise the implementation is quite similar.
2022-12-22 12:31:27 -05:00
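The hashed-key idea in the Redis cache entry above can be sketched in plain Python. This is illustrative only: the function name `cache_key`, the separator, and the choice of SHA-256 are assumptions, not the PR's actual code.

```python
import hashlib

def cache_key(prompt: str, llm_string: str) -> str:
    # Hash the (prompt, LLM settings) pair so the cache key's length stays
    # bounded no matter how long the prompt is.
    raw = prompt + "\x00" + llm_string
    return hashlib.sha256(raw.encode("utf-8")).hexdigest()
```

A fixed-length digest keeps lookups cheap and sidesteps any key-size limits, at the cost of not being able to recover the original prompt from the key.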
Harrison Chase
ff03242fa0
Harrison/ver 044 ( #400 )
2022-12-22 11:20:18 -05:00
mrbean
136f759492
Mrbean/support timeout ( #398 )
...
Add support for passing in a request timeout to the API
2022-12-21 23:39:07 -05:00
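Threading a configurable request timeout through to every outgoing call, as the entry above describes, can be sketched like this (the names `APIClient` and `request_timeout` are illustrative, not the PR's actual interface):

```python
class APIClient:
    """Toy client: a timeout set once at construction time is attached
    to the keyword arguments of every outgoing request."""

    def __init__(self, request_timeout: float = 60.0):
        self.request_timeout = request_timeout

    def _call_kwargs(self, **extra) -> dict:
        # Each call picks up the configured timeout unless overridden.
        kwargs = {"timeout": self.request_timeout}
        kwargs.update(extra)
        return kwargs
```

The point is that a hung upstream API raises after `request_timeout` seconds instead of blocking the caller forever.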
Harrison Chase
6b60c509ac
(WIP) add HyDE ( #393 )
...
Co-authored-by: cameronccohen <cameron.c.cohen@gmail.com>
Co-authored-by: Cameron Cohen <cameron.cohen@quantco.com>
2022-12-21 20:46:41 -05:00
Keiji Kanazawa
543db9c2df
Add Azure OpenAI LLM ( #395 )
...
Hi! This PR adds support for the Azure OpenAI service to LangChain.
I've tried to follow the contributing guidelines.
Co-authored-by: Keiji Kanazawa <{ID}+{username}@users.noreply.github.com>
2022-12-21 20:45:37 -05:00
Harrison Chase
bb76440bfa
bump version to 0.0.43 ( #394 )
2022-12-20 22:28:29 -05:00
Harrison Chase
c104d507bf
Harrison/improve data augmented generation docs ( #390 )
...
Co-authored-by: cameronccohen <cameron.c.cohen@gmail.com>
Co-authored-by: Cameron Cohen <cameron.cohen@quantco.com>
2022-12-20 22:24:08 -05:00
Harrison Chase
ad4414b59f
update docs ( #389 )
2022-12-20 09:32:10 -05:00
Harrison Chase
c8b4b54479
bump version to 0.0.42 ( #388 )
2022-12-19 20:59:34 -05:00
Harrison Chase
47ba34c83a
split up and improve agent docs ( #387 )
2022-12-19 20:32:45 -05:00
Abi Raja
467aa0cee0
Fix typo in docs ( #386 )
2022-12-19 17:39:44 -05:00
Harrison Chase
6be5747466
RFC: add cache override to LLM class ( #379 )
2022-12-19 17:36:14 -05:00
Harrison Chase
46c428234f
MMR example selector ( #377 )
...
implement max marginal relevance example selector
2022-12-19 17:09:27 -05:00
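Max marginal relevance, mentioned in the entry above, scores each candidate by similarity to the query minus redundancy with what is already selected. A minimal sketch over dot-product similarities (illustrative; not the PR's implementation):

```python
def mmr_select(query_vec, candidate_vecs, k=2, lambda_mult=0.5):
    """Greedily pick k indices balancing relevance to the query against
    redundancy with the examples already chosen."""
    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))

    selected = []
    remaining = list(range(len(candidate_vecs)))
    while remaining and len(selected) < k:
        best, best_score = None, float("-inf")
        for i in remaining:
            relevance = dot(query_vec, candidate_vecs[i])
            # Redundancy: closest similarity to anything already picked.
            redundancy = max(
                (dot(candidate_vecs[i], candidate_vecs[j]) for j in selected),
                default=0.0,
            )
            score = lambda_mult * relevance - (1 - lambda_mult) * redundancy
            if score > best_score:
                best, best_score = i, score
        selected.append(best)
        remaining.remove(best)
    return selected
```

With `lambda_mult` near 1 this degenerates to plain nearest-neighbor selection; lower values trade relevance for diversity.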
Harrison Chase
ffed5e0056
Harrison/jinja formatter ( #385 )
...
Co-authored-by: Benjamin <BenderV@users.noreply.github.com>
2022-12-19 16:40:39 -05:00
mrbean
fc66a32c6f
fix docstring ( #383 )
...
![Screenshot 2022-12-19 at 11 06 48 AM](https://user-images.githubusercontent.com/43734688/208468970-5cb9bafb-f535-486e-b41f-312a2f9ffffb.png)
2022-12-19 11:10:17 -05:00
Harrison Chase
a01d3e6955
fix agent memory docs ( #382 )
2022-12-19 09:15:32 -05:00
Harrison Chase
766b84a9d9
upgrade version to 0041 ( #378 )
2022-12-18 22:33:03 -05:00
Harrison Chase
cf98f219f9
Harrison/tools exp ( #372 )
2022-12-18 21:51:23 -05:00
Harrison Chase
e7b625fe03
fix text splitter ( #375 )
2022-12-18 20:21:43 -05:00
Harrison Chase
3474f39e21
Harrison/improve cache ( #368 )
...
make it so everything goes through generate, which removes the need for
two types of caches
2022-12-18 16:22:42 -05:00
Ankush Gola
8d0869c6d3
change run to use args and kwargs ( #367 )
...
Before, `run` could not be called with multiple arguments. This PR
expands that functionality.
2022-12-18 15:54:56 -05:00
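The `run(*args, **kwargs)` dispatch described above can be sketched as mapping positional arguments onto a chain's declared input keys (a toy `Chain`, not LangChain's actual class):

```python
class Chain:
    """Toy chain: `run` accepts either positional args (zipped onto
    input_keys in order) or keyword args, but not a mix."""

    input_keys = ["question", "context"]

    def _call(self, inputs: dict) -> str:
        return f"{inputs['question']} | {inputs['context']}"

    def run(self, *args, **kwargs) -> str:
        if args and not kwargs:
            if len(args) != len(self.input_keys):
                raise ValueError("wrong number of positional inputs")
            return self._call(dict(zip(self.input_keys, args)))
        if kwargs and not args:
            return self._call(kwargs)
        raise ValueError("pass either positional or keyword inputs, not both")
```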
Harrison Chase
a7084ad6e4
Harrison/version 0040 ( #366 )
2022-12-17 07:53:22 -08:00
mrbean
50257fce59
Support Streaming Tokens from OpenAI ( #364 )
...
https://github.com/hwchase17/langchain/issues/363
@hwchase17 how much does this make you want to cry?
2022-12-17 07:02:58 -08:00
mrbean
fe6695b9e7
Add HuggingFacePipeline LLM ( #353 )
...
https://github.com/hwchase17/langchain/issues/354
Add support for running your own HF pipeline locally. This allows much
more flexibility in which HF features and models you support, since you
aren't beholden to what is hosted in the HF hub. You could also use HF
Optimum to quantize your models for fast inference, even when running on
a laptop.
2022-12-17 07:00:04 -08:00
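The shape of wrapping a locally-running pipeline behind an LLM-style interface can be sketched as below. `LocalPipelineLLM` is a hypothetical name; the real class is `HuggingFacePipeline`, and here the pipeline is any callable returning the `[{"generated_text": ...}]` shape that HF text-generation pipelines produce, so the sketch stays runnable without `transformers` installed.

```python
class LocalPipelineLLM:
    """Wrap any locally-running text-generation callable so it can be
    driven through a simple prompt-in, text-out interface."""

    def __init__(self, pipeline):
        # `pipeline` is a callable: prompt -> [{"generated_text": ...}],
        # e.g. transformers.pipeline("text-generation", model=...).
        self.pipeline = pipeline

    def __call__(self, prompt: str) -> str:
        return self.pipeline(prompt)[0]["generated_text"]
```

Because only a callable is required, the same wrapper works whether the underlying model is full-precision, quantized via Optimum, or a stub in tests.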
Harrison Chase
2eef76ed3f
fix documentation ( #365 )
2022-12-16 16:48:54 -08:00
Benjamin
85c1bd2cd0
add sqlalchemy generic cache ( #361 )
...
Created a generic SQLAlchemyCache class to plug in any database supported
by SQLAlchemy. (I am using Postgres.)
I also based the SQLiteCache class on this SQLAlchemyCache class.
As a side note, I'm questioning the need for two distinct classes,
LLMCache and FullLLMCache. Shouldn't we merge both?
2022-12-16 16:47:23 -08:00
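The core of an engine-agnostic LLM cache, as in the entry above, is a table keyed by (prompt, LLM settings). A sketch on stdlib `sqlite3` for illustration; the PR itself uses SQLAlchemy so any supported engine (e.g. Postgres) can back the same logic, and the table/column names here are assumptions.

```python
import sqlite3

class SQLCache:
    """Database-backed LLM cache keyed by (prompt, llm_string)."""

    def __init__(self, conn):
        self.conn = conn
        conn.execute(
            "CREATE TABLE IF NOT EXISTS llm_cache "
            "(prompt TEXT, llm TEXT, response TEXT, "
            "PRIMARY KEY (prompt, llm))"
        )

    def lookup(self, prompt, llm_string):
        # Cache hit returns the stored response; miss returns None.
        row = self.conn.execute(
            "SELECT response FROM llm_cache WHERE prompt=? AND llm=?",
            (prompt, llm_string),
        ).fetchone()
        return row[0] if row else None

    def update(self, prompt, llm_string, response):
        self.conn.execute(
            "INSERT OR REPLACE INTO llm_cache VALUES (?, ?, ?)",
            (prompt, llm_string, response),
        )
```

Keying on the LLM settings string as well as the prompt matters: the same prompt sent to differently-configured models must not share a cache entry.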
Harrison Chase
809a9f485f
Harrison/new version ( #362 )
2022-12-16 07:42:31 -08:00
Harrison Chase
750edfb440
add optional collapse prompt ( #358 )
2022-12-16 06:25:29 -08:00
Harrison Chase
2dd895d98c
add openai tokenizer ( #355 )
2022-12-15 22:35:42 -08:00
Harrison Chase
c1b50b7b13
Harrison/map reduce merge ( #344 )
...
Co-authored-by: John Nay <JohnNay@users.noreply.github.com>
2022-12-15 17:49:14 -08:00
Harrison Chase
ed143b598f
improve openai embeddings ( #351 )
...
add more formal support for explicitly specifying each model, but in a
backwards-compatible way
2022-12-15 17:01:39 -08:00
Harrison Chase
428508bd75
bump version to 0.0.38 ( #349 )
2022-12-15 08:27:20 -08:00
Harrison Chase
78b31e5966
Harrison/cache ( #343 )
2022-12-15 07:53:32 -08:00
Harrison Chase
8cf62ce06e
Harrison/single input ( #347 )
...
allow passing a single input into a chain
Co-authored-by: thepok <richterthepok@yahoo.de>
2022-12-15 07:52:51 -08:00
Harrison Chase
5161ae7e08
add new example ( #345 )
2022-12-14 22:31:34 -08:00
Harrison Chase
8c167627ed
bump version ( #340 )
2022-12-14 10:38:31 -08:00
Harrison Chase
e26b6f9c89
fix batching ( #339 )
2022-12-14 08:25:37 -08:00
Harrison Chase
3c6796b72e
bump version to 0036 ( #333 )
2022-12-13 08:17:41 -08:00
Harrison Chase
996b5a3dfb
Harrison/llm final stuff ( #332 )
2022-12-13 07:50:46 -08:00
Harrison Chase
9bb7195085
Harrison/llm saving ( #331 )
...
Co-authored-by: Akash Samant <70665700+asamant21@users.noreply.github.com>
2022-12-13 06:46:01 -08:00
Harrison Chase
595cc1ae1a
RFC: more complete return ( #313 )
...
Co-authored-by: Andrew Williamson <awilliamson10@indstate.edu>
Co-authored-by: awilliamson10 <aw.williamson10@gmail.com>
2022-12-13 05:50:03 -08:00
Hunter Gerlach
482611f426
unit test / code coverage improvements ( #322 )
...
This PR has two contributions:
1. Add test for when stop token is found in middle of text
2. Add code coverage tooling and instructions
- Add pytest-cov via poetry
- Add necessary config files
- Add new make instruction for `coverage`
- Update README with coverage guidance
- Update minor README formatting/spelling
Co-authored-by: Hunter Gerlach <hunter@huntergerlach.com>
2022-12-13 05:48:53 -08:00
Harrison Chase
8861770bd0
expose get_num_tokens method ( #327 )
2022-12-13 05:22:42 -08:00
Ankush Gola
8fdcdf4c2f
add .idea files to gitignore, add zsh note to installation docs ( #329 )
2022-12-13 05:20:22 -08:00
thepok
137356dbec
-1 max token description for openai ( #330 )
2022-12-13 05:15:51 -08:00
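The `-1` max-tokens convention in the entry above means "use whatever room is left in the model's context window after the prompt". A minimal sketch (the helper name and parameters are illustrative):

```python
def resolve_max_tokens(max_tokens: int, context_size: int, prompt_tokens: int) -> int:
    # -1 is a sentinel: fill the remaining context window instead of
    # using a fixed completion budget.
    if max_tokens == -1:
        return context_size - prompt_tokens
    return max_tokens
```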