Commit Graph

1733 Commits

Author SHA1 Message Date
Harrison Chase
47ba34c83a
split up and improve agent docs (#387) 2022-12-19 20:32:45 -05:00
Abi Raja
467aa0cee0
Fix typo in docs (#386) 2022-12-19 17:39:44 -05:00
Harrison Chase
6be5747466
RFC: add cache override to LLM class (#379) 2022-12-19 17:36:14 -05:00
Harrison Chase
46c428234f
MMR example selector (#377)
implement max marginal relevance example selector
2022-12-19 17:09:27 -05:00
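A minimal sketch of the max marginal relevance idea behind this selector, in plain numpy rather than the library's own classes (the `lambda_mult` trade-off knob is an illustrative assumption): examples are chosen to balance similarity to the query against similarity to examples already picked.

```python
# Illustrative MMR selection, not the repository implementation.
import numpy as np

def mmr_select(query_vec, example_vecs, k=3, lambda_mult=0.5):
    """Return indices of k examples balancing relevance and diversity."""
    def cos(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    candidates = list(range(len(example_vecs)))
    selected = []
    while candidates and len(selected) < k:
        best, best_score = None, float("-inf")
        for i in candidates:
            relevance = cos(query_vec, example_vecs[i])
            redundancy = max(
                (cos(example_vecs[i], example_vecs[j]) for j in selected),
                default=0.0,
            )
            score = lambda_mult * relevance - (1 - lambda_mult) * redundancy
            if score > best_score:
                best, best_score = i, score
        selected.append(best)
        candidates.remove(best)
    return selected
```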
Harrison Chase
ffed5e0056
Harrison/jinja formatter (#385)
Co-authored-by: Benjamin <BenderV@users.noreply.github.com>
2022-12-19 16:40:39 -05:00
mrbean
fc66a32c6f
fix docstring (#383)
![Screenshot 2022-12-19 at 11 06 48 AM](https://user-images.githubusercontent.com/43734688/208468970-5cb9bafb-f535-486e-b41f-312a2f9ffffb.png)
2022-12-19 11:10:17 -05:00
Harrison Chase
a01d3e6955
fix agent memory docs (#382) 2022-12-19 09:15:32 -05:00
Harrison Chase
766b84a9d9
upgrade version to 0041 (#378) 2022-12-18 22:33:03 -05:00
Harrison Chase
cf98f219f9
Harrison/tools exp (#372) 2022-12-18 21:51:23 -05:00
Harrison Chase
e7b625fe03
fix text splitter (#375) 2022-12-18 20:21:43 -05:00
Harrison Chase
3474f39e21
Harrison/improve cache (#368)
make it so everything goes through generate, which removes the need for
two types of caches
2022-12-18 16:22:42 -05:00
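Roughly, once every call funnels through `generate`, a single cache of full generations keyed by prompt and LLM configuration is enough. A sketch of that shape (names are illustrative, not the PR's code):

```python
# Sketch: one cache of full generations keyed on (prompt, llm_config).
# Names are illustrative; this is not the repository's cache implementation.
from typing import Dict, List, Optional, Tuple

class SingleGenerationCache:
    def __init__(self) -> None:
        self._store: Dict[Tuple[str, str], List[str]] = {}

    def lookup(self, prompt: str, llm_config: str) -> Optional[List[str]]:
        return self._store.get((prompt, llm_config))

    def update(self, prompt: str, llm_config: str, generations: List[str]) -> None:
        self._store[(prompt, llm_config)] = generations
```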
Ankush Gola
8d0869c6d3
change run to use args and kwargs (#367)
Before, `run` could not be called with multiple arguments. This change
expands that functionality.
2022-12-18 15:54:56 -05:00
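A toy sketch of what the expanded `run` signature enables (not the actual Chain code; the class and key names are made up): one positional argument for single-input chains, keyword arguments for multi-input chains.

```python
# Toy chain showing run(*args, **kwargs) dispatch; illustrative only.
class SketchChain:
    input_keys = ["question"]
    output_key = "answer"

    def __call__(self, inputs: dict) -> dict:
        # Stand-in for the real chain logic.
        return {self.output_key: f"echo: {inputs}"}

    def run(self, *args, **kwargs) -> str:
        if args and not kwargs:
            if len(args) != 1:
                raise ValueError("run() accepts at most one positional argument")
            return self({self.input_keys[0]: args[0]})[self.output_key]
        if kwargs and not args:
            return self(kwargs)[self.output_key]
        raise ValueError("pass one positional argument or keyword arguments, not both")

print(SketchChain().run("What changed in #367?"))
print(SketchChain().run(question="What changed in #367?"))
```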
Harrison Chase
a7084ad6e4
Harrison/version 0040 (#366) 2022-12-17 07:53:22 -08:00
mrbean
50257fce59
Support Streaming Tokens from OpenAI (#364)
https://github.com/hwchase17/langchain/issues/363

@hwchase17 how much does this make you want to cry?
2022-12-17 07:02:58 -08:00
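For background, the underlying OpenAI API streams partial completions when `stream=True` is set. A minimal sketch against the 2022-era `openai` client, showing the raw mechanism rather than how the PR wires it into the library:

```python
# Minimal streaming sketch with the (pre-1.0) openai client; requires
# OPENAI_API_KEY in the environment. Model and prompt are illustrative.
import openai

response = openai.Completion.create(
    model="text-davinci-003",
    prompt="Write a haiku about streaming tokens.",
    max_tokens=64,
    stream=True,  # yield chunks as they are generated
)
for chunk in response:
    print(chunk["choices"][0]["text"], end="", flush=True)
print()
```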
mrbean
fe6695b9e7
Add HuggingFacePipeline LLM (#353)
https://github.com/hwchase17/langchain/issues/354

Add support for running your own HF pipeline locally. This gives you much
more flexibility over which HF features and models you support, since you
aren't beholden to what is hosted in the HF Hub. You could also use HF
Optimum to quantize your models and get fairly fast inference even when
running on a laptop.
2022-12-17 07:00:04 -08:00
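A minimal local-pipeline sketch, assuming the new wrapper accepts a pre-built `transformers` pipeline (model choice and generation settings are illustrative):

```python
# Run a local Hugging Face model and wrap it as an LLM; illustrative setup.
from transformers import pipeline
from langchain.llms import HuggingFacePipeline

local_pipe = pipeline(
    "text-generation",
    model="gpt2",          # any locally available causal LM
    max_new_tokens=64,
)
llm = HuggingFacePipeline(pipeline=local_pipe)
print(llm("Explain what a tokenizer does in one sentence."))
```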
Harrison Chase
2eef76ed3f
fix documentation (#365) 2022-12-16 16:48:54 -08:00
Benjamin
85c1bd2cd0
add sqlalchemy generic cache (#361)
Created a generic SQLAlchemyCache class to plug in any database supported
by SQLAlchemy. (I am using Postgres.)
I also based the SQLiteCache class on this SQLAlchemyCache class.

As a side note, I'm questioning the need for two distinct classes,
LLMCache and FullLLMCache. Shouldn't we merge both?
2022-12-16 16:47:23 -08:00
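A plausible usage sketch (import path, constructor, and connection string are assumptions based on the existing cache module): point the module-level LLM cache at any SQLAlchemy engine, e.g. Postgres.

```python
# Sketch: back the LLM cache with any SQLAlchemy-supported database.
# Import path, constructor, and connection string are assumptions.
import langchain
from sqlalchemy import create_engine
from langchain.cache import SQLAlchemyCache

engine = create_engine("postgresql://user:password@localhost:5432/langchain")
langchain.llm_cache = SQLAlchemyCache(engine)
```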
Harrison Chase
809a9f485f
Harrison/new version (#362) 2022-12-16 07:42:31 -08:00
Harrison Chase
750edfb440
add optional collapse prompt (#358) 2022-12-16 06:25:29 -08:00
Harrison Chase
2dd895d98c
add openai tokenizer (#355) 2022-12-15 22:35:42 -08:00
Harrison Chase
c1b50b7b13
Harrison/map reduce merge (#344)
Co-authored-by: John Nay <JohnNay@users.noreply.github.com>
2022-12-15 17:49:14 -08:00
Harrison Chase
ed143b598f
improve openai embeddings (#351)
add more formal support for explicitly specifying each model, but in a
backwards compatible way
2022-12-15 17:01:39 -08:00
Harrison Chase
428508bd75
bump version to 0.0.38 (#349) 2022-12-15 08:27:20 -08:00
Harrison Chase
78b31e5966
Harrison/cache (#343) 2022-12-15 07:53:32 -08:00
Harrison Chase
8cf62ce06e
Harrison/single input (#347)
allow passing a single input into a chain

Co-authored-by: thepok <richterthepok@yahoo.de>
2022-12-15 07:52:51 -08:00
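A sketch of the idea, not the actual Chain code: when a chain has exactly one input key, a bare value can be promoted to the usual inputs dict.

```python
# Illustrative helper: promote a bare value to a dict for single-input chains.
def prep_inputs(inputs, input_keys):
    if not isinstance(inputs, dict):
        if len(input_keys) != 1:
            raise ValueError("A bare (non-dict) input only works for single-input chains.")
        inputs = {input_keys[0]: inputs}
    return inputs

print(prep_inputs("What is a prompt template?", ["question"]))
# -> {'question': 'What is a prompt template?'}
```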
Harrison Chase
5161ae7e08
add new example (#345) 2022-12-14 22:31:34 -08:00
Harrison Chase
8c167627ed
bump version (#340) 2022-12-14 10:38:31 -08:00
Harrison Chase
e26b6f9c89
fix batching (#339) 2022-12-14 08:25:37 -08:00
Harrison Chase
3c6796b72e
bump version to 0036 (#333) 2022-12-13 08:17:41 -08:00
Harrison Chase
996b5a3dfb
Harrison/llm final stuff (#332) 2022-12-13 07:50:46 -08:00
Harrison Chase
9bb7195085
Harrison/llm saving (#331)
Co-authored-by: Akash Samant <70665700+asamant21@users.noreply.github.com>
2022-12-13 06:46:01 -08:00
Harrison Chase
595cc1ae1a
RFC: more complete return (#313)
Co-authored-by: Andrew Williamson <awilliamson10@indstate.edu>
Co-authored-by: awilliamson10 <aw.williamson10@gmail.com>
2022-12-13 05:50:03 -08:00
Hunter Gerlach
482611f426
unit test / code coverage improvements (#322)
This PR has two contributions:

1. Add test for when stop token is found in middle of text

2. Add code coverage tooling and instructions
- Add pytest-cov via poetry
- Add necessary config files
- Add new make instruction for `coverage`
- Update README with coverage guidance
- Update minor README formatting/spelling

Co-authored-by: Hunter Gerlach <hunter@huntergerlach.com>
2022-12-13 05:48:53 -08:00
Harrison Chase
8861770bd0
expose get_num_tokens method (#327) 2022-12-13 05:22:42 -08:00
Ankush Gola
8fdcdf4c2f
add .idea files to gitignore, add zsh note to installation docs (#329) 2022-12-13 05:20:22 -08:00
thepok
137356dbec
-1 max token description for openai (#330) 2022-12-13 05:15:51 -08:00
Christian Clauss
2fbb152386
Add Python 3.11 to the testing (#324) 2022-12-12 07:19:52 -08:00
Christian Clauss
d946be2f3d
Add Python 3.11 to the testing (#323) 2022-12-12 06:09:08 -08:00
Harrison Chase
292f1cfa96
Harrison/add contributing docs (#315) 2022-12-12 06:07:40 -08:00
Harrison Chase
948e999eff
bump version to 0035 (#312) 2022-12-11 11:07:30 -08:00
Harrison Chase
a7c8e37e77
Harrison/token counts (#311)
Co-authored-by: thepok <richterthepok@yahoo.de>
2022-12-11 07:43:40 -08:00
Shobith Alva
19a9fa16a9
Add clear() method for Memory (#305)
a simple helper to clear the buffer in `Conversation*Memory` classes
2022-12-11 07:09:06 -08:00
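A toy sketch of what such a helper looks like on a buffer-style memory (not the library's classes):

```python
# Toy buffer memory with a clear() helper; illustrative only.
class BufferMemorySketch:
    def __init__(self) -> None:
        self.buffer: str = ""

    def save_context(self, human: str, ai: str) -> None:
        self.buffer += f"\nHuman: {human}\nAI: {ai}"

    def clear(self) -> None:
        """Reset the conversation buffer."""
        self.buffer = ""
```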
Harrison Chase
e02d6b2288
beta: logger (#307) 2022-12-10 23:17:19 -08:00
Harrison Chase
36b4c58acf
expose more stuff (#306) 2022-12-10 23:16:32 -08:00
Harrison Chase
7827f0a844
fix typing (int -> float) (#308) 2022-12-10 20:31:55 -08:00
Hunter Gerlach
9ee6115deb
Minor grammar fixes for memory docs to improve readability (#303)
Nothing of substance was changed. I simply corrected a few minor errors
that could slow down the reader.

Co-authored-by: Hunter Gerlach <hunter@huntergerlach.com>
2022-12-10 16:18:01 -08:00
Harrison Chase
9d08384d5f
Harrison/bump version (#300) 2022-12-10 09:37:42 -08:00
Harrison Chase
853894dd47
add moderation chain (#299) 2022-12-10 09:19:16 -08:00
andersenchen
5267ebce2d
Add LLMCheckerChain (#281)
Implementation of https://github.com/jagilley/fact-checker. Works pretty
well.

<img width="993" alt="Screenshot 2022-12-07 at 4 41 47 PM" src="https://user-images.githubusercontent.com/101075607/206302751-356a19ff-d000-4798-9aee-9c38b7f532b9.png">

Verifying this manually:
1. "Only two kinds of egg-laying mammals are left on the planet
today—the duck-billed platypus and the echidna, or spiny anteater."
https://www.scientificamerican.com/article/extreme-monotremes/
2. "An [Echidna] egg weighs 1.5 to 2 grams (0.05 to 0.07
oz)[[19]](https://en.wikipedia.org/wiki/Echidna#cite_note-19) and is
about 1.4 centimetres (0.55 in) long."
https://en.wikipedia.org/wiki/Echidna#:~:text=sleep%20is%20suppressed.-,Reproduction,a%20reptile%2Dlike%20egg%20tooth.
3. "A [platypus] lays one to three (usually two) small, leathery eggs
(similar to those of reptiles), about 11 mm (7⁄16 in) in diameter and
slightly rounder than bird eggs."
https://en.wikipedia.org/wiki/Platypus#:~:text=It%20lays%20one%20to%20three,slightly%20rounder%20than%20bird%20eggs.
4. Therefore, an Echidna is the mammal that lays the biggest eggs.


cc @hwchase17
2022-12-09 12:49:05 -08:00
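A usage sketch for the question walked through above, assuming the chain is constructed directly from an LLM like other chains in the library (import path and constructor are assumptions, and an OpenAI API key is required):

```python
# Sketch: self-check the egg-laying-mammal question with the new chain.
from langchain.llms import OpenAI
from langchain.chains import LLMCheckerChain

llm = OpenAI(temperature=0.7)
checker = LLMCheckerChain(llm=llm, verbose=True)
print(checker.run("What type of mammal lays the biggest eggs?"))
```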
Harrison Chase
43c9bd869f
add memprompt docs (#294) 2022-12-09 12:40:24 -08:00