Commit Graph

2320 Commits

Author SHA1 Message Date
mrbean
50257fce59
Support Streaming Tokens from OpenAI (#364)
https://github.com/hwchase17/langchain/issues/363

@hwchase17 how much does this make you want to cry?
2022-12-17 07:02:58 -08:00
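For context, the streaming in #364 builds on the OpenAI completion API's `stream=True` mode. A minimal sketch of that underlying mechanism, using the pre-1.0 `openai` client directly (the exact surface exposed on the LangChain side by #364 is an assumption and may differ):

```python
import openai

# Request a completion with stream=True so tokens arrive incrementally.
response = openai.Completion.create(
    model="text-davinci-003",
    prompt="Write a haiku about streaming tokens.",
    max_tokens=64,
    stream=True,
)

for chunk in response:
    # Each chunk carries the newly generated text fragment.
    print(chunk["choices"][0]["text"], end="", flush=True)
```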
mrbean
fe6695b9e7
Add HuggingFacePipeline LLM (#353)
https://github.com/hwchase17/langchain/issues/354

Add support for running your own HF pipeline locally. This gives you a lot
more flexibility in which HF features and models you support, since you
wouldn't be beholden to what is hosted in the HF hub. You could also use HF
Optimum to quantize your models and get pretty fast inference even running
on a laptop.
2022-12-17 07:00:04 -08:00
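A rough sketch of what running a local pipeline through this wrapper looks like (the constructor below follows later LangChain releases and is an assumption about the exact surface #353 shipped):

```python
from transformers import pipeline
from langchain.llms import HuggingFacePipeline

# Build any local transformers pipeline...
hf_pipe = pipeline("text-generation", model="gpt2")

# ...and wrap it so it can be used anywhere an LLM is expected.
llm = HuggingFacePipeline(pipeline=hf_pipe)
print(llm("Explain what a language model is in one sentence:"))
```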
Harrison Chase
2eef76ed3f
fix documentation (#365) 2022-12-16 16:48:54 -08:00
Benjamin
85c1bd2cd0
add sqlalchemy generic cache (#361)
Created a generic SQLAlchemyCache class to plug in any database supported
by SQLAlchemy. (I am using Postgres.)
I also based the SQLiteCache class on this new SQLAlchemyCache class.

As a side note, I'm questioning the need for two distinct classes,
LLMCache and FullLLMCache. Shouldn't we merge both?
2022-12-16 16:47:23 -08:00
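A sketch of how the two cache classes might be plugged in (module paths and constructor arguments follow later LangChain releases; treat them as assumptions about #361):

```python
from sqlalchemy import create_engine

import langchain
from langchain.cache import SQLAlchemyCache, SQLiteCache

# Generic SQLAlchemy-backed cache, here pointed at Postgres...
engine = create_engine("postgresql://user:password@localhost:5432/langchain")
langchain.llm_cache = SQLAlchemyCache(engine)

# ...or the SQLite specialization built on top of it.
langchain.llm_cache = SQLiteCache(database_path=".langchain.db")
```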
Harrison Chase
809a9f485f
Harrison/new version (#362) 2022-12-16 07:42:31 -08:00
Harrison Chase
750edfb440
add optional collapse prompt (#358) 2022-12-16 06:25:29 -08:00
Harrison Chase
2dd895d98c
add openai tokenizer (#355) 2022-12-15 22:35:42 -08:00
Harrison Chase
c1b50b7b13
Harrison/map reduce merge (#344)
Co-authored-by: John Nay <JohnNay@users.noreply.github.com>
2022-12-15 17:49:14 -08:00
Harrison Chase
ed143b598f
improve openai embeddings (#351)
add more formal support for explicitly specifying each model, but in a
backwards-compatible way
2022-12-15 17:01:39 -08:00
Harrison Chase
428508bd75
bump version to 0.0.38 (#349) 2022-12-15 08:27:20 -08:00
Harrison Chase
78b31e5966
Harrison/cache (#343) 2022-12-15 07:53:32 -08:00
Harrison Chase
8cf62ce06e
Harrison/single input (#347)
allow passing of a single input into a chain

Co-authored-by: thepok <richterthepok@yahoo.de>
2022-12-15 07:52:51 -08:00
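A sketch of the convenience this adds for chains with exactly one input key (import paths follow later LangChain releases and are an assumption):

```python
from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain

prompt = PromptTemplate(
    input_variables=["product"],
    template="What is a good name for a company that makes {product}?",
)
chain = LLMChain(llm=OpenAI(temperature=0.9), prompt=prompt)

chain({"product": "colorful socks"})  # dict form, always supported
chain("colorful socks")               # single-input shorthand added in #347
```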
Harrison Chase
5161ae7e08
add new example (#345) 2022-12-14 22:31:34 -08:00
Harrison Chase
8c167627ed
bump version (#340) 2022-12-14 10:38:31 -08:00
Harrison Chase
e26b6f9c89
fix batching (#339) 2022-12-14 08:25:37 -08:00
Harrison Chase
3c6796b72e
bump version to 0036 (#333) 2022-12-13 08:17:41 -08:00
Harrison Chase
996b5a3dfb
Harrison/llm final stuff (#332) 2022-12-13 07:50:46 -08:00
Harrison Chase
9bb7195085
Harrison/llm saving (#331)
Co-authored-by: Akash Samant <70665700+asamant21@users.noreply.github.com>
2022-12-13 06:46:01 -08:00
Harrison Chase
595cc1ae1a
RFC: more complete return (#313)
Co-authored-by: Andrew Williamson <awilliamson10@indstate.edu>
Co-authored-by: awilliamson10 <aw.williamson10@gmail.com>
2022-12-13 05:50:03 -08:00
Hunter Gerlach
482611f426
unit test / code coverage improvements (#322)
This PR has two contributions:

1. Add test for when stop token is found in middle of text

2. Add code coverage tooling and instructions
- Add pytest-cov via poetry
- Add necessary config files
- Add new make instruction for `coverage`
- Update README with coverage guidance
- Update minor README formatting/spelling

Co-authored-by: Hunter Gerlach <hunter@huntergerlach.com>
2022-12-13 05:48:53 -08:00
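A hedged sketch of the kind of stop-token test described in item 1 (the helper's location, `langchain.llms.utils.enforce_stop_tokens`, follows later releases and is an assumption); the coverage tooling itself is then run via the new `make coverage` target:

```python
from langchain.llms.utils import enforce_stop_tokens


def test_stop_token_in_middle_of_text() -> None:
    # Text generated past a stop token should be truncated at that token.
    text = "foo bar baz\nSTOP\nthis part should be dropped"
    assert enforce_stop_tokens(text, stop=["STOP"]) == "foo bar baz\n"
```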
Harrison Chase
8861770bd0
expose get_num_tokens method (#327) 2022-12-13 05:22:42 -08:00
Ankush Gola
8fdcdf4c2f
add .idea files to gitignore, add zsh note to installation docs (#329) 2022-12-13 05:20:22 -08:00
thepok
137356dbec
-1 max token description for openai (#330) 2022-12-13 05:15:51 -08:00
Christian Clauss
2fbb152386
Add Python 3.11 to the testing (#324) 2022-12-12 07:19:52 -08:00
Christian Clauss
d946be2f3d
Add Python 3.11 to the testing (#323) 2022-12-12 06:09:08 -08:00
Harrison Chase
292f1cfa96
Harrison/add contributing docs (#315) 2022-12-12 06:07:40 -08:00
Harrison Chase
948e999eff
bump version to 0035 (#312) 2022-12-11 11:07:30 -08:00
Harrison Chase
a7c8e37e77
Harrison/token counts (#311)
Co-authored-by: thepok <richterthepok@yahoo.de>
2022-12-11 07:43:40 -08:00
Shobith Alva
19a9fa16a9
Add clear() method for Memory (#305)
a simple helper to clear the buffer in `Conversation*Memory` classes
2022-12-11 07:09:06 -08:00
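A sketch of the new helper in use (the module path follows later LangChain releases and is an assumption):

```python
from langchain.chains.conversation.memory import ConversationBufferMemory

memory = ConversationBufferMemory()
memory.save_context({"input": "hi"}, {"output": "hello!"})

memory.clear()  # wipes the stored conversation buffer in place
```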
Harrison Chase
e02d6b2288
beta: logger (#307) 2022-12-10 23:17:19 -08:00
Harrison Chase
36b4c58acf
expose more stuff (#306) 2022-12-10 23:16:32 -08:00
Harrison Chase
7827f0a844
fix typing (int -> float) (#308) 2022-12-10 20:31:55 -08:00
Hunter Gerlach
9ee6115deb
Minor grammar fixes for memory docs to improve readability (#303)
Nothing of substance was changed. I simply corrected a few minor errors
that could slow down the reader.

Co-authored-by: Hunter Gerlach <hunter@huntergerlach.com>
2022-12-10 16:18:01 -08:00
Harrison Chase
9d08384d5f
Harrison/bump version (#300) 2022-12-10 09:37:42 -08:00
Harrison Chase
853894dd47
add moderation chain (#299) 2022-12-10 09:19:16 -08:00
andersenchen
5267ebce2d
Add LLMCheckerChain (#281)
Implementation of https://github.com/jagilley/fact-checker. Works pretty
well.

[Screenshot 2022-12-07 at 4:41 PM: https://user-images.githubusercontent.com/101075607/206302751-356a19ff-d000-4798-9aee-9c38b7f532b9.png]

Verifying this manually:
1. "Only two kinds of egg-laying mammals are left on the planet
today—the duck-billed platypus and the echidna, or spiny anteater."
https://www.scientificamerican.com/article/extreme-monotremes/
2. "An [Echidna] egg weighs 1.5 to 2 grams (0.05 to 0.07
oz)[[19]](https://en.wikipedia.org/wiki/Echidna#cite_note-19) and is
about 1.4 centimetres (0.55 in) long."
https://en.wikipedia.org/wiki/Echidna#:~:text=sleep%20is%20suppressed.-,Reproduction,a%20reptile%2Dlike%20egg%20tooth.
3. "A [platypus] lays one to three (usually two) small, leathery eggs
(similar to those of reptiles), about 11 mm (7⁄16 in) in diameter and
slightly rounder than bird eggs."
https://en.wikipedia.org/wiki/Platypus#:~:text=It%20lays%20one%20to%20three,slightly%20rounder%20than%20bird%20eggs.
4. Therefore, an Echidna is the mammal that lays the biggest eggs.


cc @hwchase17
2022-12-09 12:49:05 -08:00
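A hedged sketch of wiring up the new chain (constructor arguments follow later LangChain releases and are an assumption here):

```python
from langchain.llms import OpenAI
from langchain.chains import LLMCheckerChain

checker = LLMCheckerChain(llm=OpenAI(temperature=0.7), verbose=True)

# Drafts an answer, lists and checks its assumptions, then revises the answer.
checker.run("What type of mammal lays the biggest eggs?")
```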
Harrison Chase
43c9bd869f
add memprompt docs (#294) 2022-12-09 12:40:24 -08:00
Ben
0f399350f1
Fix typo in Getting Started / LLM Chains docs (#291)
I noticed this typo when reading the getting started guide; hope this
fix makes sense.
2022-12-09 06:48:02 -08:00
Harrison Chase
85c66dc6a4
bump version to 0033 (#290) 2022-12-09 06:47:49 -08:00
Samantha Whitmore
b10be842f6
ChatGPT Clone: adding ConversationBufferWindowMemory to replicate virtual env example (#288)
2022-12-08 23:01:08 -08:00
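A sketch of the ChatGPT-clone setup this enables, keeping only the last k exchanges in the prompt (class and module names follow later LangChain releases; treat them as assumptions):

```python
from langchain.llms import OpenAI
from langchain.chains import ConversationChain
from langchain.chains.conversation.memory import ConversationBufferWindowMemory

conversation = ConversationChain(
    llm=OpenAI(temperature=0),
    memory=ConversationBufferWindowMemory(k=2),  # keep only the last 2 turns
)
conversation.predict(input="I want you to act as a Linux terminal.")
```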
Harrison Chase
e2e501aa06
Harrison/version 0032 (#283) 2022-12-08 07:59:58 -08:00
Harrison Chase
e9b1c8cdfa
Harrison/base combine doc chain (#264) 2022-12-07 22:56:26 -08:00
Harrison Chase
c27a6fa8a4
update docs (#278) 2022-12-07 08:40:08 -08:00
Harrison Chase
1690292b09
bump version to 0031 (#276) 2022-12-07 07:29:08 -08:00
Harrison Chase
834b391792
update notebooks (#275) 2022-12-06 22:55:27 -08:00
Harrison Chase
3c1c7ba672
update branch name in gha (#274) 2022-12-06 22:28:50 -08:00
Akash Samant
48b093823e
Add a Transformation Chain (#257)
Arbitrary transformation chains that can be used to add dictionary
extractions from LLMs/other chains
2022-12-06 21:58:16 -08:00
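A hedged sketch of such a transformation step (class name and fields follow later LangChain releases; an assumption about what #257 introduced):

```python
from langchain.chains import TransformChain


def extract_first_paragraph(inputs: dict) -> dict:
    # Plain-Python transformation: no LLM call involved.
    return {"first_paragraph": inputs["text"].split("\n\n")[0]}


transform = TransformChain(
    input_variables=["text"],
    output_variables=["first_paragraph"],
    transform=extract_first_paragraph,
)
transform({"text": "First paragraph.\n\nSecond paragraph."})
```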
coyotespike
b7bef36ee1
BashChain (#260)
Love the project, a ton of fun!

I think the PR is pretty self-explanatory; happy to make any changes! I
am working on using it in an `LLMBashChain` and may update as that
progresses.

Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
2022-12-06 21:57:50 -08:00
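A hedged sketch of the bash executor this introduces (module path and method name follow later LangChain releases; treat them as assumptions):

```python
from langchain.utilities import BashProcess

bash = BashProcess()
# Runs the commands in a subprocess and returns their combined output.
print(bash.run(["echo 'hello from BashChain'", "ls -1 | head -n 3"]))
```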
Harrison Chase
28be37f470
LLMRequestsChain (#267) 2022-12-06 21:55:02 -08:00
John McDonnell
68666d6a22
Gracefully degrade when model asks for nonexistent tool (#268)
Not yet tested, but it's a very simple change; the assumption is that we're
cool with just producing a generic output when the tool is not found
2022-12-06 21:52:48 -08:00
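An illustrative-only sketch of the fallback behaviour described here (not the repo's actual code; the helper and its names are hypothetical):

```python
from typing import Callable, Dict


def run_tool(tool_name: str, tool_input: str,
             tools: Dict[str, Callable[[str], str]]) -> str:
    """Hypothetical helper: degrade gracefully when the model picks a missing tool."""
    tool = tools.get(tool_name)
    if tool is None:
        # Generic observation instead of raising, so the agent loop can continue.
        return f"{tool_name} is not a valid tool, try another one."
    return tool(tool_input)
```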