Commit Graph

43 Commits

Author SHA1 Message Date
Xiangrui Meng
aec642febb
LLM wrapper for Databricks (#5142)
This PR adds an LLM wrapper for Databricks. It supports two endpoint types:
* serving endpoint
* cluster driver proxy app

An integration notebook is included to show how it works.
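
A minimal usage sketch (the endpoint name `dolly` is a placeholder, and running inside a Databricks notebook is assumed; otherwise a host and token must be supplied):

```python
from langchain.llms import Databricks

# Assumes a serving endpoint named "dolly" already exists in the workspace.
llm = Databricks(endpoint_name="dolly")
print(llm("How are you?"))
```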


Co-authored-by: Davis Chase <130488702+dev2049@users.noreply.github.com>
Co-authored-by: Gengliang Wang <gengliang@apache.org>
Co-authored-by: Dev 2049 <dev.dev2049@gmail.com>
2023-05-25 19:19:37 -07:00
Ravindra Marella
b3988621c5
Add C Transformers for GGML Models (#5218)
# Add C Transformers for GGML Models
I created Python bindings for the GGML models:
https://github.com/marella/ctransformers

Currently it supports GPT-2, GPT-J, GPT-NeoX, LLaMA, MPT, etc. See
[Supported
Models](https://github.com/marella/ctransformers#supported-models).


It provides a unified interface for all models:

```python
from langchain.llms import CTransformers

llm = CTransformers(model='/path/to/ggml-gpt-2.bin', model_type='gpt2')

print(llm('AI is going to'))
```

It can be used with models hosted on the Hugging Face Hub:

```py
llm = CTransformers(model='marella/gpt-2-ggml')
```

It supports streaming:

```py
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler

llm = CTransformers(model='marella/gpt-2-ggml', callbacks=[StreamingStdOutCallbackHandler()])
```

Please see [README](https://github.com/marella/ctransformers#readme) for
more details.
---------

Co-authored-by: Dev 2049 <dev.dev2049@gmail.com>
2023-05-25 13:42:44 -07:00
Yves Maurer
88ed8e1cd6
Added the option of specifying a proxy for the OpenAI API (#5246)
# Added the option of specifying a proxy for the OpenAI API

Fixes #5243
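
A minimal sketch of the new option (the proxy address is a placeholder, and the `openai_proxy` field name is as added by this PR):

```python
from langchain.llms import OpenAI

# Hypothetical proxy address; requests to the OpenAI API are routed through it.
llm = OpenAI(temperature=0, openai_proxy="http://proxy.example.com:8080")
```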

Co-authored-by: Yves Maurer <>
2023-05-25 09:50:25 -07:00
Harrison Chase
a775aa6389
Harrison/vertex (#5049)
Co-authored-by: Leonid Kuligin <kuligin@google.com>
Co-authored-by: Leonid Kuligin <lkuligin@yandex.ru>
Co-authored-by: sasha-gitg <44654632+sasha-gitg@users.noreply.github.com>
Co-authored-by: Justin Flick <Justinjayflick@gmail.com>
Co-authored-by: Justin Flick <jflick@homesite.com>
2023-05-24 15:51:12 -07:00
Ikko Eltociear Ashimine
fff21a0b35
Update rellm_experimental.ipynb (#5189)
HuggingFace -> Hugging Face
2023-05-24 11:41:00 +00:00
Nolan Tremelling
faa26650c9
Beam (#4996)
# Beam

Adds a wrapper around the Beam API that deploys an instance of the gpt2
LLM in a cloud deployment and makes subsequent calls to it. Requires
installation of the Beam library and registration of a Beam Client ID and
Client Secret. Additional calls can then be made through the instance of
the large language model in your code or by calling the Beam API.
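
A rough usage sketch, assuming the wrapper's `model_name`/`name`/`max_length` parameters and its `_deploy`/`_call` helpers:

```python
from langchain.llms.beam import Beam

# Assumes Beam credentials (Client ID / Client Secret) are already registered
# with the Beam CLI; parameter values are illustrative.
llm = Beam(model_name="gpt2", name="langchain-gpt2", max_length="50")
llm._deploy()                                  # deploy the app to Beam's cloud
print(llm._call("Running machine learning on a remote GPU"))
```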

---------

Co-authored-by: Dev 2049 <dev.dev2049@gmail.com>
2023-05-24 01:25:18 -07:00
Daniel King
de6e6c764e
Add MosaicML inference endpoints (#4607)
# Add MosaicML inference endpoints
This PR adds support in langchain for MosaicML inference endpoints. We
both serve a select few open source models and allow customers to deploy
their own models using our inference service. Docs are here
(https://docs.mosaicml.com/en/latest/inference.html), and the sign-up form
is here (https://forms.mosaicml.com/demo?utm_source=langchain). I'm not
intimately familiar with the details of langchain or the contribution
process, so please let me know if there is anything that needs fixing or
if this is the wrong way to submit a new integration. Thanks!

I'm also not sure what the procedure is for integration tests. I have
tested locally with my api key.
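
A minimal sketch (assuming a `MOSAICML_API_TOKEN` environment variable; the model kwargs are illustrative):

```python
from langchain.llms import MosaicML

# Assumes MOSAICML_API_TOKEN is set in the environment.
llm = MosaicML(model_kwargs={"do_sample": False})
print(llm("What is one good reason to visit Boston?"))
```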

## Who can review?
@hwchase17

---------

Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
2023-05-23 15:59:08 -07:00
Matt Rickard
de6a401a22
Add OpenLM LLM multi-provider (#4993)
OpenLM is a zero-dependency OpenAI-compatible LLM provider that can call
different inference endpoints directly via HTTP. It implements the
OpenAI Completion class so that it can be used as a drop-in replacement
for the OpenAI API. This changeset utilizes BaseOpenAI for minimal added
code.
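
A minimal sketch of the drop-in usage (model names are illustrative):

```python
from langchain.llms import OpenLM

# OpenLM resolves provider-prefixed model strings while exposing the same
# interface as the OpenAI Completion class.
llm = OpenLM(model="text-davinci-003")
print(llm("The first man on the moon was"))
```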

---------

Co-authored-by: Dev 2049 <dev.dev2049@gmail.com>
2023-05-22 18:09:53 -07:00
Alexey Nominas
c9e2a01875
Update GPT4ALL integration (#4567)
# Update GPT4ALL integration

GPT4All has completely changed its bindings. The new implementation is a
bit odd and doesn't fit well into base.py, and it will probably change
again, so this is a temporary solution.

Fixes #3839, #4628
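
A minimal sketch under the new bindings (the local model path is a placeholder):

```python
from langchain.llms import GPT4All

# Assumes a ggml model file has already been downloaded to this path.
llm = GPT4All(model="./models/ggml-gpt4all-l13b-snoozy.bin")
print(llm("Once upon a time, "))
```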
2023-05-18 09:38:54 -07:00
Leonid Ganeline
b96ab4b763
docs retriever improvements (#4430)
# Docs: improvements in the `retrievers/examples/` notebooks

Its primary purpose is to make the Jupyter notebook examples
**consistent** and more suitable for first-time viewers.
- added links to the integration source (if applicable) with a short
description of the source;
- removed the `_retriever` suffix from the file names (where it existed)
for consistency;
- removed ` retriever` from the notebook titles (where it existed) for
consistency;
- added code to install the necessary Python package(s);
- added code to set up the necessary API key;
- very small fixes in notebooks from other folders (for consistency):
  - docs/modules/indexes/vectorstores/examples/elasticsearch.ipynb
  - docs/modules/indexes/vectorstores/examples/pinecone.ipynb
  - docs/modules/models/llms/integrations/cohere.ipynb
- fixed a misspelling in a langchain/retrievers/time_weighted_retriever.py
comment (sorry about this change in a .py file)

## Who can review
@dev2049
2023-05-17 15:29:22 -07:00
Sean Morgan
5372a06a8c
DOC: Fix SageMaker example (#4598)
# Fix SageMaker example typing

Since https://github.com/hwchase17/langchain/pull/3249, a new type
`LLMContentHandler` is enforced for SageMaker Endpoints.
Fixes #4168
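
A minimal sketch of the enforced type (the JSON request/response keys and the endpoint name are model-specific assumptions):

```python
import json
from typing import Dict

from langchain.llms.sagemaker_endpoint import LLMContentHandler, SagemakerEndpoint

class ContentHandler(LLMContentHandler):
    content_type = "application/json"
    accepts = "application/json"

    def transform_input(self, prompt: str, model_kwargs: Dict) -> bytes:
        # Serialize the prompt into the JSON body the model expects.
        return json.dumps({"text_inputs": prompt, **model_kwargs}).encode("utf-8")

    def transform_output(self, output: bytes) -> str:
        # Extract the generated text from the endpoint's JSON response.
        return json.loads(output.read().decode("utf-8"))["generated_texts"][0]

llm = SagemakerEndpoint(
    endpoint_name="my-endpoint",      # hypothetical endpoint name
    region_name="us-east-1",
    content_handler=ContentHandler(),
)
```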
2023-05-16 17:28:16 -07:00
Zander Chase
d85b04be7f
Add RELLM and JSONFormer experimental LLM decoding (#4185)
[RELLM](https://github.com/r2d4/rellm) is a library that wraps local
HuggingFace pipeline models for structured decoding.

RELLM works by generating tokens one at a time. At each step, it masks
tokens that don't conform to the provided partial regular expression.

[JSONFormer](https://github.com/1rgs/jsonformer) is a bit different: it sequentially adds the keys and then decodes each value directly.
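
A rough sketch of RELLM usage; the experimental import path, parameter names, and the regex pattern are all assumptions based on this PR:

```python
import regex  # RELLM consumes patterns from the `regex` package
from transformers import pipeline

from langchain.experimental.llms import RELLM

# Wrap a local HuggingFace text-generation pipeline for structured decoding.
hf_pipeline = pipeline("text-generation", model="gpt2", max_new_tokens=200)
pattern = regex.compile(r'\{"name": "[a-z]+"\}')  # hypothetical JSON-shaped pattern
llm = RELLM(pipeline=hf_pipeline, regex=pattern, max_new_tokens=200)
print(llm("Produce a small JSON object:"))
```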
2023-05-14 22:40:03 +00:00
Sai Vinay G
cf4c1394a2
feat: Added class to support huggingface text generation inference server (#4447)
[Text Generation
Inference](https://github.com/huggingface/text-generation-inference) is
a Rust, Python and gRPC server for generating text using LLMs.

This pull request adds support for self-hosted Text Generation Inference
servers.
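
A minimal sketch (assumes a text-generation-inference server already running at the given URL; the sampling parameters are illustrative):

```python
from langchain.llms import HuggingFaceTextGenInference

llm = HuggingFaceTextGenInference(
    inference_server_url="http://localhost:8010/",
    max_new_tokens=512,
    temperature=0.01,
)
print(llm("What is deep learning?"))
```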

feature: #4280

---------

Co-authored-by: Your Name <you@example.com>
Co-authored-by: Dev 2049 <dev.dev2049@gmail.com>
2023-05-12 07:32:37 -07:00
kYLe
446b60d803
Fix a typo in langchain/docs/modules/models/llms/integrations/anyscale.ipynb (#4526) 2023-05-11 09:03:04 -07:00
kYLe
0d51a1f12b
Add LLMs support for Anyscale Service (#4350)
Add Anyscale service integration under LLM
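
A minimal sketch (the service URL, route, and token are placeholders for a deployed Anyscale Service):

```python
from langchain.llms import Anyscale

llm = Anyscale(
    anyscale_service_url="https://my-service.anyscale.com",
    anyscale_service_route="/query",
    anyscale_service_token="st_abc123",
)
print(llm("When was George Washington president?"))
```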

Co-authored-by: Dev 2049 <dev.dev2049@gmail.com>
2023-05-11 00:39:59 -07:00
PawelFaron
04b74d0446
Adjusted GPT4All llm to streaming API and added support for GPT4All_J (#4131)
Fix for these issues:
https://github.com/hwchase17/langchain/issues/4126

https://github.com/hwchase17/langchain/issues/3839#issuecomment-1534258559
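
A sketch under the adjusted streaming API; the model path is a placeholder, and selecting GPT4All-J via a `backend` switch is an assumption about this change:

```python
from langchain.llms import GPT4All
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler

llm = GPT4All(
    model="./models/ggml-gpt4all-j-v1.3-groovy.bin",
    backend="gptj",                                   # assumed GPT4All-J switch
    callbacks=[StreamingStdOutCallbackHandler()],     # stream tokens to stdout
    verbose=True,
)
llm("Once upon a time, ")
```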

---------

Co-authored-by: Pawel Faron <ext-pawel.faron@vaisala.com>
2023-05-06 15:14:09 -07:00
Harrison Chase
64940e9d0f
docs for azure (#4238) 2023-05-06 10:16:00 -07:00
Davis Chase
df3bc707fc
Dev2049/callback example fix (#4010)
Closes #3997

---------

Co-authored-by: Akshaj Jain <akshaj.jain@gmail.com>
2023-05-02 16:20:16 -07:00
Zander Chase
c4cb55a0c5
[Breaking] Migrate GPT4All to use PyGPT4All (#3934)
It seems the pyllamacpp package is no longer the supported bindings for
gpt4all. Tested that this works locally.

Given that the older models weren't very performant, I think it's better
to migrate now without trying to include a lot of try/except blocks.

---------

Co-authored-by: Nissan Pow <npow@users.noreply.github.com>
Co-authored-by: Nissan Pow <pownissa@amazon.com>
2023-05-01 20:42:45 -07:00
Ankush Gola
d3ec00b566
Callbacks Refactor [base] (#3256)
Co-authored-by: Nuno Campos <nuno@boringbits.io>
Co-authored-by: Davis Chase <130488702+dev2049@users.noreply.github.com>
Co-authored-by: Zander Chase <130414180+vowelparrot@users.noreply.github.com>
Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
2023-04-30 11:14:09 -07:00
plutopulp
6d6fd1b9e1
Add PipelineAI LLM integration (#3644)
Add PipelineAI LLM integration
2023-04-27 08:22:26 -07:00
Chirag Bhatia
08478deec5
Fixed typo for HuggingFaceHub (#3612)
The current text has a typo. This PR contains the corrected spelling for
HuggingFaceHub.
2023-04-26 14:33:31 -07:00
Charlie Holtz
246710def9
Fix Replicate llm response to handle iterator / multiple outputs (#3614)
One of our users noticed a bug when calling streaming models. This is
because those models return an iterator. So, I've updated the Replicate
`_call` code to join together the output. The other advantage of this
fix is that if you requested multiple outputs you would get them all –
previously I was just returning output[0].
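
A hypothetical simplification of the joining logic (`join_model_output` is an illustrative helper, not the actual method):

```python
from typing import Iterator

# Streaming models yield chunks from a generator, so join them instead of
# indexing into them (generators aren't subscriptable).
def join_model_output(chunks: Iterator[str]) -> str:
    return "".join(str(chunk) for chunk in chunks)

# e.g. join_model_output(c for c in ["Hel", "lo"]) == "Hello"
```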

I also adjusted the demo docs to use dolly, because we're featuring that
model right now and it's always hot, so people won't have to wait for
the model to boot up.

The error that this fixes:
```
> llm = Replicate(model="replicate/flan-t5-xl:eec2f71c986dfa3b7a5d842d22e1130550f015720966bec48beaae059b19ef4c")
> llm("hello")
> Traceback (most recent call last):
  File "/Users/charlieholtz/workspace/dev/python/main.py", line 15, in <module>
    print(llm(prompt))
  File "/opt/homebrew/lib/python3.10/site-packages/langchain/llms/base.py", line 246, in __call__
    return self.generate([prompt], stop=stop).generations[0][0].text
  File "/opt/homebrew/lib/python3.10/site-packages/langchain/llms/base.py", line 140, in generate
    raise e
  File "/opt/homebrew/lib/python3.10/site-packages/langchain/llms/base.py", line 137, in generate
    output = self._generate(prompts, stop=stop)
  File "/opt/homebrew/lib/python3.10/site-packages/langchain/llms/base.py", line 324, in _generate
    text = self._call(prompt, stop=stop)
  File "/opt/homebrew/lib/python3.10/site-packages/langchain/llms/replicate.py", line 108, in _call
    return outputs[0]
TypeError: 'generator' object is not subscriptable
```
2023-04-26 14:26:33 -07:00
Chirag Bhatia
f174aa7712
Fix broken Cerebrium link in documentation (#3554)
The current hyperlink has a typo. This PR contains the corrected
hyperlink to the Cerebrium docs.
2023-04-26 08:11:58 -07:00
Harrison Chase
52d95ec47d
anthropic docs: deprecated LLM, add chat model (#3549) 2023-04-25 16:11:14 -07:00
Harrison Chase
707741de58
Harrison/prediction guard (#3490)
Co-authored-by: Daniel Whitenack <whitenack.daniel@gmail.com>
2023-04-24 22:27:22 -07:00
Zander Chase
416f3bdf11
Vwp/alpaca streaming (#3468)
Co-authored-by: Luke Stanley <306671+lukestanley@users.noreply.github.com>
2023-04-24 16:27:51 -07:00
Zander Chase
c757c3cde4
Add HuggingFace Examples (#3187)
Add a Pipeline example and add other models in the hub notebook.

To close issue
[#3077](https://github.com/hwchase17/langchain/issues/3099)
2023-04-19 17:08:10 -07:00
Jakub Kukul
599e17cea8
Working example for Anthropic (#3151)
would be great if the provided example worked out of the box 😄
2023-04-19 08:52:33 -07:00
leo-gan
c33883a40e
fixed the Cohere example title (#3053)
- fixed the Cohere example title (bug introduced in #3041, sorry about that)
- fixed the runhouse.ipynb file name inconsistency
2023-04-17 21:02:52 -07:00
leo-gan
5420a0e404
updated langchain/docs/modules/models/llms/integrations/ notebooks (#3041)
- Updated `langchain/docs/modules/models/llms/integrations/` notebooks:
added links to the original sites, the install information, etc.
- Added the `nlpcloud` notebook.
- Removed "Example" from Titles of some notebooks, so all notebook
titles are consistent.
2023-04-17 20:25:32 -07:00
Azam Iftikhar
2a89dc8c1c
Fixing factually incorrect example (#2810)
### https://github.com/hwchase17/langchain/issues/2802
It appears that Google's Flan model may not perform as well as other
models, so I used a simple example to get a factually correct answer.
2023-04-13 08:42:39 -07:00
William FH
10ff1fda8e
Add Streaming for GPT4All (#2642)
- Adds support for callback handlers in GPT4All models
- Updates notebook and docs
2023-04-09 17:54:26 -07:00
Jimmy Comfort
1dfb6a2a44
Update gpt4all example with model param (#2499)
I am pretty sure that the documentation here should point to `model`
instead of `model_path`, based on the source here:

https://github.com/hwchase17/langchain/blob/master/langchain/llms/gpt4all.py#L26
2023-04-06 12:38:26 -07:00
Harrison Chase
1f88b11c99
replicate cleanup (#2394) 2023-04-04 12:15:03 -07:00
Harrison Chase
de7afc52a9 cr 2023-04-04 07:23:53 -07:00
Harrison Chase
c7b083ab56
bump version to 131 (#2391) 2023-04-04 07:21:50 -07:00
Harrison Chase
0a9f04bad9
Harrison/gpt4all (#2366)
Co-authored-by: William FH <13333726+hinthornw@users.noreply.github.com>
Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2023-04-04 06:49:17 -07:00
Harrison Chase
d85f57ef9c
Harrison/llama (#2314)
Co-authored-by: RJ Adriaansen <adriaansen@eshcc.eur.nl>
2023-04-02 14:57:45 -07:00
LaloLalo1999
632c2b49da
Fixed the link to promptlayer dashboard (#2246)
Fixed a simple error where in the PromptLayer LLM documentation, the
"PromptLayer dashboard" hyperlink linked to "https://ww.promptlayer.com"
instead of "https://www.promptlayer.com". Solved issue #2245
2023-03-31 16:16:23 -07:00
Charlie Holtz
f16c1fb6df
Add replicate take 2 (#2077)
This PR adds a replicate integration to langchain. 

It's an updated version of
https://github.com/hwchase17/langchain/pull/1993, revised to match the
latest replicate-python code:
https://github.com/replicate/replicate-python.

---------

Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
Co-authored-by: Zeke Sikelianos <zeke@sikelianos.com>
2023-03-28 11:56:57 -07:00
Michael Gokhman
b5020c7d9c
docs: fix promptlayer link typo (#2005)
tiny typo, just stumbled upon it when reading the docs

Co-authored-by: Michael Gokhman <michaelg@ai21.com>
2023-03-27 23:35:54 -07:00
Harrison Chase
705431aecc
big docs refactor (#1978)
Co-authored-by: Ankush Gola <ankush.gola@gmail.com>
2023-03-26 19:49:46 -07:00