Zander Chase
416f3bdf11
Vwp/alpaca streaming ( #3468 )
...
Co-authored-by: Luke Stanley <306671+lukestanley@users.noreply.github.com>
2023-04-24 16:27:51 -07:00
Mike Lambert
392f1b3218
Add Anthropic ChatModel to langchain ( #2293 )
...
* Adds an Anthropic ChatModel
* Factors out common code in our LLMModel and ChatModel
* Supports streaming LLM tokens to the callbacks on a delta basis (until
a future V2 API does that for us)
* Some fixes
2023-04-14 15:09:07 -07:00
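A minimal usage sketch for the new chat model, assuming the `ChatAnthropic` class and message schema of early-2023 langchain (import paths moved around between versions):

    from langchain.chat_models import ChatAnthropic
    from langchain.schema import HumanMessage

    # Assumes ANTHROPIC_API_KEY is set in the environment.
    chat = ChatAnthropic()
    # Chat models take lists of messages rather than raw prompt strings.
    result = chat([HumanMessage(content="Tell me a joke")])
    print(result.content)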
Tim Asp
be4fb24b32
OpenAI LLM: update modelname_to_contextsize
with new models ( #2843 )
...
Token counts pulled from https://openai.com/pricing
2023-04-13 11:13:34 -07:00
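A quick sketch of how the mapping is consumed via the OpenAI wrapper; the context size shown is illustrative:

    from langchain.llms import OpenAI

    llm = OpenAI(model_name="gpt-4")
    # Look up the context window for a model name, e.g. 8192 for gpt-4.
    max_ctx = llm.modelname_to_contextsize("gpt-4")
    # Budget completion tokens around the prompt's token count.
    remaining = max_ctx - llm.get_num_tokens("some prompt")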
Alex Rad
bd780a8223
Add support for rwkv ( #2422 )
...
This adds support for running RWKV with PyTorch.
https://github.com/hwchase17/langchain/issues/2398
This does not yet support rwkv.cpp.
2023-04-06 14:41:06 -07:00
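A rough usage sketch, assuming the wrapper's `model`, `strategy`, and `tokens_path` parameters; the file paths are hypothetical:

    from langchain.llms import RWKV

    llm = RWKV(
        model="./models/rwkv-4-raven-7b.pth",        # hypothetical local weights
        strategy="cpu fp32",                         # PyTorch CPU strategy string
        tokens_path="./models/20B_tokenizer.json",   # hypothetical tokenizer file
    )
    print(llm("Q: What is RWKV?\nA:"))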
Harrison Chase
0a9f04bad9
Harrison/gpt4all ( #2366 )
...
Co-authored-by: William FH <13333726+hinthornw@users.noreply.github.com>
Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2023-04-04 06:49:17 -07:00
Harrison Chase
d85f57ef9c
Harrison/llama ( #2314 )
...
Co-authored-by: RJ Adriaansen <adriaansen@eshcc.eur.nl>
2023-04-02 14:57:45 -07:00
Ankush Gola
ccee1aedd2
add async support for anthropic ( #2114 )
...
Should not be merged before
https://github.com/anthropics/anthropic-sdk-python/pull/11 is released
2023-03-28 22:49:14 -04:00
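A sketch of what the async path enables, assuming the `Anthropic` LLM wrapper's `agenerate` method:

    import asyncio

    from langchain.llms import Anthropic

    llm = Anthropic()

    async def main():
        # agenerate takes a batch of prompts and awaits the responses
        # without blocking the event loop.
        result = await llm.agenerate(["Tell me a joke", "Tell me a fact"])
        for gen in result.generations:
            print(gen[0].text)

    asyncio.run(main())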
Charlie Holtz
f16c1fb6df
Add replicate take 2 ( #2077 )
...
This PR adds a Replicate integration to langchain.
It's an updated version of
https://github.com/hwchase17/langchain/pull/1993, revised to match the
latest replicate-python code:
https://github.com/replicate/replicate-python.
---------
Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
Co-authored-by: Zeke Sikelianos <zeke@sikelianos.com>
2023-03-28 11:56:57 -07:00
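A usage sketch, assuming the wrapper takes a Replicate `owner/model:version` string; the model and version id below are placeholders:

    from langchain.llms import Replicate

    # Assumes REPLICATE_API_TOKEN is set; the model/version id is hypothetical.
    llm = Replicate(model="stability-ai/stablelm-tuned-alpha-7b:<version-id>")
    print(llm("What is a duck?"))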
Ankush Gola
b7ebb8fe30
enable streaming in anthropic llm wrapper ( #2065 )
2023-03-27 20:25:00 -04:00
Mario Kostelac
aff44d0a98
(OpenAI) Add model_name to LLMResult.llm_output ( #1713 )
...
Given that different models have very different latencies and pricing,
it's beneficial to pass along information about the model that generated
the response. Such information allows implementing custom callback
managers and tracking usage and price per model.
Addresses https://github.com/hwchase17/langchain/issues/1557.
2023-03-16 21:55:55 -07:00
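A sketch of reading the new field from the OpenAI wrapper's `generate` result:

    from langchain.llms import OpenAI

    llm = OpenAI(model_name="text-davinci-003")
    result = llm.generate(["Hello"])
    # llm_output now records which model produced the response, so a
    # custom callback manager can price each call per model.
    print(result.llm_output["model_name"])       # "text-davinci-003"
    print(result.llm_output.get("token_usage"))  # prompt/completion token counts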
Harrison Chase
3ee32a01ea
Harrison/prompt layer ( #1547 )
...
Co-authored-by: Jonathan Pedoeem <jonathanped@gmail.com>
Co-authored-by: AbuBakar <abubakarsohail123@gmail.com>
2023-03-08 21:24:27 -08:00
Ankush Gola
27104d4921
fix ChatOpenAI.agenerate
( #1504 )
2023-03-07 15:22:05 -08:00
Nuno Campos
499e76b199
Allow the regular openai class to be used for ChatGPT models ( #1393 )
...
Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
2023-03-02 09:04:18 -08:00
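In practice this means the familiar constructor accepts chat model names; a sketch (model name current as of early 2023):

    from langchain.llms import OpenAI

    # Passing a chat model name now routes through the chat-completions
    # endpoint under the hood instead of raising.
    llm = OpenAI(model_name="gpt-3.5-turbo")
    print(llm("Say hello"))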
Ankush Gola
fe30be6fba
add async and streaming support to OpenAIChat
( #1378 )
...
title says it all
2023-03-01 21:55:43 -08:00
Enrico Shippole
9becdeaadf
Add Writer, Banana, Modal, StochasticAI ( #1270 )
...
Add LLM wrappers and examples for Banana, Writer, Modal, and StochasticAI.
Added a rigid JSON format for Banana and Modal.
2023-02-24 06:58:58 -08:00
Dennis Antela Martinez
53c67e04d4
add aleph alpha llm ( #1207 )
...
Integrate Aleph Alpha's client into langchain to provide access to the
Luminous models - more info on the latest benchmarks here:
https://www.aleph-alpha.com/luminous-performance-benchmarks
2023-02-22 10:37:36 -08:00
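A usage sketch, assuming the wrapper exposes `model` and `maximum_tokens` parameters mirroring the Aleph Alpha client:

    from langchain.llms import AlephAlpha

    # Assumes ALEPH_ALPHA_API_KEY is set in the environment.
    llm = AlephAlpha(model="luminous-base", maximum_tokens=64)
    print(llm("Q: What is AI?\nA:"))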
Harrison Chase
9d6d8f85da
Harrison/self hosted runhouse ( #1154 )
...
Co-authored-by: Donny Greenberg <dongreenberg2@gmail.com>
Co-authored-by: John Dagdelen <jdagdelen@users.noreply.github.com>
Co-authored-by: Harrison Chase <harrisonchase@Harrisons-MBP.attlocal.net>
Co-authored-by: Andrew White <white.d.andrew@gmail.com>
Co-authored-by: Peng Qu <82029664+pengqu123@users.noreply.github.com>
Co-authored-by: Matt Robinson <mthw.wm.robinson@gmail.com>
Co-authored-by: jeff <tangj1122@gmail.com>
Co-authored-by: Harrison Chase <harrisonchase@Harrisons-MacBook-Pro.local>
Co-authored-by: zanderchase <zander@unfold.ag>
Co-authored-by: Charles Frye <cfrye59@gmail.com>
Co-authored-by: zanderchase <zanderchase@gmail.com>
Co-authored-by: Shahriar Tajbakhsh <sh.tajbakhsh@gmail.com>
Co-authored-by: Stefan Keselj <skeselj@princeton.edu>
Co-authored-by: Francisco Ingham <fpingham@gmail.com>
Co-authored-by: Dhruv Anand <105786647+dhruv-anand-aintech@users.noreply.github.com>
Co-authored-by: cragwolfe <cragcw@gmail.com>
Co-authored-by: Anton Troynikov <atroyn@users.noreply.github.com>
Co-authored-by: William FH <13333726+hinthornw@users.noreply.github.com>
Co-authored-by: Oliver Klingefjord <oliver@klingefjord.com>
Co-authored-by: blob42 <contact@blob42.xyz>
Co-authored-by: blob42 <spike@w530>
Co-authored-by: Enrico Shippole <henryshippole@gmail.com>
Co-authored-by: Ibis Prevedello <ibiscp@gmail.com>
Co-authored-by: jped <jonathanped@gmail.com>
Co-authored-by: Justin Torre <justintorre75@gmail.com>
Co-authored-by: Ivan Vendrov <ivan@anthropic.com>
Co-authored-by: Sasmitha Manathunga <70096033+mmz-001@users.noreply.github.com>
Co-authored-by: Ankush Gola <9536492+agola11@users.noreply.github.com>
Co-authored-by: Matt Robinson <mrobinson@unstructuredai.io>
Co-authored-by: Jeff Huber <jeffchuber@gmail.com>
Co-authored-by: Akshay <64036106+akshayvkt@users.noreply.github.com>
Co-authored-by: Andrew Huang <jhuang16888@gmail.com>
Co-authored-by: rogerserper <124558887+rogerserper@users.noreply.github.com>
Co-authored-by: seanaedmiston <seane999@gmail.com>
Co-authored-by: Hasegawa Yuya <52068175+Hase-U@users.noreply.github.com>
Co-authored-by: Ivan Vendrov <ivendrov@gmail.com>
Co-authored-by: Chen Wu (吴尘) <henrychenwu@cmu.edu>
Co-authored-by: Dennis Antela Martinez <dennis.antela@gmail.com>
Co-authored-by: Maxime Vidal <max.vidal@hotmail.fr>
Co-authored-by: Rishabh Raizada <110235735+rishabh-ti@users.noreply.github.com>
2023-02-19 09:53:45 -08:00
Ankush Gola
caa8e4742e
Enable streaming for OpenAI LLM ( #986 )
...
* Support a callback `on_llm_new_token` that users can implement when
`OpenAI.streaming` is set to `True`
2023-02-14 15:06:14 -08:00
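A minimal sketch of implementing that callback; note that the handler base class and the way handlers are attached (`callbacks=[...]` vs. a CallbackManager) varied across early versions:

    from langchain.callbacks.base import BaseCallbackHandler
    from langchain.llms import OpenAI

    class TokenPrinter(BaseCallbackHandler):
        def on_llm_new_token(self, token: str, **kwargs) -> None:
            # Called once per streamed token when streaming=True.
            print(token, end="", flush=True)

    llm = OpenAI(streaming=True, callbacks=[TokenPrinter()])
    llm("Write me a haiku.")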
Harrison Chase
88bebb4caa
Harrison/llm integrations ( #1039 )
...
Co-authored-by: jped <jonathanped@gmail.com>
Co-authored-by: Justin Torre <justintorre75@gmail.com>
Co-authored-by: Ivan Vendrov <ivan@anthropic.com>
2023-02-13 22:06:25 -08:00
Enrico Shippole
f30dcc6359
Add GooseAI, CerebriumAI, Petals, ForefrontAI ( #981 )
2023-02-13 21:20:19 -08:00
Ankush Gola
bc7e56e8df
Add asyncio support for LLM (OpenAI), Chain (LLMChain, LLMMathChain), and Agent ( #841 )
...
Supporting asyncio in langchain primitives allows users to run them
concurrently and creates more seamless integration with
asyncio-supported frameworks (FastAPI, etc.).
Summary of changes:
**LLM**
* Add `agenerate` and `_agenerate`
* Implement in OpenAI by leveraging `client.Completions.acreate`
**Chain**
* Add `arun`, `acall`, `_acall`
* Implement them in `LLMChain` and `LLMMathChain` for now
**Agent**
* Refactor and leverage async chain and llm methods
* Add ability for `Tools` to contain an async coroutine
* Implement async SerpAPI `arun`
Create a demo notebook.
Open questions:
* Should all the async stuff go in separate classes? I've seen both
patterns (keeping the same class and having async and sync methods vs.
having class separation)
2023-02-07 21:21:57 -08:00
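A concurrency sketch built on the new methods; the prompt and model choices are illustrative:

    import asyncio

    from langchain.chains import LLMChain
    from langchain.llms import OpenAI
    from langchain.prompts import PromptTemplate

    prompt = PromptTemplate(
        input_variables=["country"],
        template="Name one city in {country}.",
    )
    chain = LLMChain(llm=OpenAI(temperature=0), prompt=prompt)

    async def main():
        # arun lets several chain invocations run concurrently
        # on a single event loop.
        results = await asyncio.gather(
            chain.arun(country="France"),
            chain.arun(country="Japan"),
        )
        print(results)

    asyncio.run(main())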
Harrison Chase
bc53c928fc
Harrison/athropic ( #921 )
...
Co-authored-by: Mike Lambert <mlambert@gmail.com>
Co-authored-by: mrbean <sam@you.com>
Co-authored-by: mrbean <43734688+sam-h-bean@users.noreply.github.com>
Co-authored-by: Ivan Vendrov <ivendrov@gmail.com>
2023-02-06 22:29:25 -08:00
Harrison Chase
ba5a2f06b9
Harrison/inference endpoint ( #861 )
...
Co-authored-by: Eno Reyes <enoreyes@gmail.com>
2023-02-06 18:14:25 -08:00
Harrison Chase
4d4cff0530
Harrison/cohere experimental ( #638 )
...
Co-authored-by: inyourhead <44607279+xettrisomeman@users.noreply.github.com>
2023-01-17 22:28:55 -08:00
Harrison Chase
3474f39e21
Harrison/improve cache ( #368 )
...
Make it so everything goes through `generate`, which removes the need
for two types of caches.
2022-12-18 16:22:42 -05:00
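With everything funneled through `generate`, one process-wide cache suffices; enabling it looked roughly like this in versions of this era:

    import langchain
    from langchain.cache import InMemoryCache
    from langchain.llms import OpenAI

    # One global cache now covers both direct calls and generate().
    langchain.llm_cache = InMemoryCache()

    llm = OpenAI()
    llm("Tell me a joke")  # first call hits the API
    llm("Tell me a joke")  # repeat call is served from the cache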
Harrison Chase
a7084ad6e4
Harrison/version 0040 ( #366 )
2022-12-17 07:53:22 -08:00
mrbean
50257fce59
Support Streaming Tokens from OpenAI ( #364 )
...
https://github.com/hwchase17/langchain/issues/363
@hwchase17 how much does this make you want to cry?
2022-12-17 07:02:58 -08:00
mrbean
fe6695b9e7
Add HuggingFacePipeline LLM ( #353 )
...
https://github.com/hwchase17/langchain/issues/354
Add support for running your own HF pipeline locally. This lets you be
much more dynamic in which HF features and models you support, since you
aren't beholden to what is hosted in the HF Hub. You can also use HF
Optimum to quantize your models and get fairly fast inference even when
running on a laptop.
2022-12-17 07:00:04 -08:00
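A local-pipeline sketch, assuming the `from_model_id` convenience constructor; the model id and kwargs are illustrative:

    from langchain.llms import HuggingFacePipeline

    # Downloads and runs the model locally; no Hub inference API involved.
    llm = HuggingFacePipeline.from_model_id(
        model_id="gpt2",
        task="text-generation",
        model_kwargs={"max_length": 64},
    )
    print(llm("Once upon a time"))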
Harrison Chase
9bb7195085
Harrison/llm saving ( #331 )
...
Co-authored-by: Akash Samant <70665700+asamant21@users.noreply.github.com>
2022-12-13 06:46:01 -08:00
Harrison Chase
3ca2c8d6c5
allow passing of stop params into openai ( #232 )
2022-11-30 22:20:13 -08:00
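A short sketch of the new parameter:

    from langchain.llms import OpenAI

    llm = OpenAI()
    # Generation halts as soon as any stop sequence is emitted.
    text = llm("Q: Name three fruits.\nA:", stop=["\nQ:"])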
Harrison Chase
ae9c6257fe
Harrison/arbitrary params ( #186 )
2022-11-24 20:01:20 -08:00
Harrison Chase
9f223e6ccc
Harrison/fix lint ( #138 )
2022-11-14 08:55:59 -08:00
Delip Rao
76cecf8165
A fix for Jupyter environment variable issue ( #135 )
...
- fixes the Jupyter environment variable issue mentioned in issue #134
- fixes format/lint issues in some unrelated files (from make
format/lint)
![image](https://user-images.githubusercontent.com/347398/201599322-090af858-362d-4d69-bf59-208aea65419a.png)
2022-11-14 08:34:01 -08:00
Harrison Chase
e43534d41c
add integration with manifest ( #62 )
2022-11-10 11:24:11 -08:00
tomeras91
d8734ce5ad
Add AI21 LLMs ( #99 )
...
Integrate the AI21 /complete API into langchain to allow access to
Jurassic models.
2022-11-10 08:12:28 -08:00
Harrison Chase
b9f61390e9
add text2text generation ( #93 )
...
fixes issue #90
2022-11-08 18:08:46 -08:00
Samantha Whitmore
efbc03bda8
NLPCloud client integration ( #81 )
...
Lots of kwargs! Generation docs here:
https://docs.nlpcloud.com/#generation
This somewhat breaks the paradigm introduced in the LLM base class, as
the stop sequence isn't a list and should rightfully be introduced at
initialization of the class, along with the other kwargs that depend on
its presence (e.g. remove_end_sequence, etc.). Curious if you'd want to
refactor the LLM base class to take out stop as a specific named kwarg?
2022-11-08 06:24:23 -08:00
Harrison Chase
020c42dcae
Harrison/add huggingface hub ( #23 )
...
Add support for the Hugging Face Hub.
I could not find a good way to enforce stop tokens over the Hugging Face
Hub API - that will hopefully be cleaned up in the future.
2022-10-25 22:00:33 -07:00
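A usage sketch, assuming the `HuggingFaceHub` wrapper's `repo_id` and `model_kwargs` parameters; the repo id is illustrative:

    from langchain.llms import HuggingFaceHub

    # Assumes HUGGINGFACEHUB_API_TOKEN is set in the environment.
    llm = HuggingFaceHub(
        repo_id="google/flan-t5-small",
        model_kwargs={"temperature": 0.5},
    )
    # Note: stop tokens can't be enforced server-side (see above).
    print(llm("Translate to French: I love programming."))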
Harrison Chase
d2fdcba29d
fix test name ( #22 )
2022-10-25 20:22:16 -07:00
Harrison Chase
18aeb72012
initial commit
2022-10-24 14:51:15 -07:00