Commit Graph

23 Commits

Author SHA1 Message Date
plutopulp
6d6fd1b9e1
Add PipelineAI LLM integration (#3644)
Add PipelineAI LLM integration
2023-04-27 08:22:26 -07:00
Chirag Bhatia
08478deec5
Fixed typo for HuggingFaceHub (#3612)
The current text has a typo. This PR contains the corrected spelling for
HuggingFaceHub
2023-04-26 14:33:31 -07:00
Charlie Holtz
246710def9
Fix Replicate llm response to handle iterator / multiple outputs (#3614)
One of our users noticed a bug when calling streaming models. This is
because those models return an iterator. So, I've updated the Replicate
`_call` code to join together the output. The other advantage of this
fix is that if you requested multiple outputs you would get them all –
previously I was just returning output[0].

I also adjusted the demo docs to use dolly, because we're featuring that
model right now and it's always hot, so people won't have to wait for
the model to boot up.

The error that this fixes:
```
> llm = Replicate(model="replicate/flan-t5-xl:eec2f71c986dfa3b7a5d842d22e1130550f015720966bec48beaae059b19ef4c")
> llm("hello")
> Traceback (most recent call last):
  File "/Users/charlieholtz/workspace/dev/python/main.py", line 15, in <module>
    print(llm(prompt))
  File "/opt/homebrew/lib/python3.10/site-packages/langchain/llms/base.py", line 246, in __call__
    return self.generate([prompt], stop=stop).generations[0][0].text
  File "/opt/homebrew/lib/python3.10/site-packages/langchain/llms/base.py", line 140, in generate
    raise e
  File "/opt/homebrew/lib/python3.10/site-packages/langchain/llms/base.py", line 137, in generate
    output = self._generate(prompts, stop=stop)
  File "/opt/homebrew/lib/python3.10/site-packages/langchain/llms/base.py", line 324, in _generate
    text = self._call(prompt, stop=stop)
  File "/opt/homebrew/lib/python3.10/site-packages/langchain/llms/replicate.py", line 108, in _call
    return outputs[0]
TypeError: 'generator' object is not subscriptable
```
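The core of the fix can be sketched as follows; this is an illustration of the joining idea with a hypothetical helper name, not the exact `_call` implementation:

```python
# Hypothetical helper illustrating the fix: streaming Replicate models return
# a generator of output chunks, so the chunks are joined into one string
# instead of indexing output[0] (which raises the TypeError shown above).
def join_replicate_output(prediction_output):
    """Join iterator (or list) output from replicate into a single string."""
    return "".join(str(chunk) for chunk in prediction_output)
```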
2023-04-26 14:26:33 -07:00
Chirag Bhatia
f174aa7712
Fix broken Cerebrium link in documentation (#3554)
The current hyperlink has a typo. This PR contains the corrected
hyperlink to the Cerebrium docs.
2023-04-26 08:11:58 -07:00
Harrison Chase
52d95ec47d
anthropic docs: deprecated LLM, add chat model (#3549) 2023-04-25 16:11:14 -07:00
Harrison Chase
707741de58
Harrison/prediction guard (#3490)
Co-authored-by: Daniel Whitenack <whitenack.daniel@gmail.com>
2023-04-24 22:27:22 -07:00
Zander Chase
416f3bdf11
Vwp/alpaca streaming (#3468)
Co-authored-by: Luke Stanley <306671+lukestanley@users.noreply.github.com>
2023-04-24 16:27:51 -07:00
Zander Chase
c757c3cde4
Add HuggingFace Examples (#3187)
Add a Pipeline example and add other models in the hub notebook

To close issue
[#3077](https://github.com/hwchase17/langchain/issues/3099)
2023-04-19 17:08:10 -07:00
Jakub Kukul
599e17cea8
Working example for Anthropic (#3151)
would be great if the provided example worked out of the box 😄
2023-04-19 08:52:33 -07:00
leo-gan
c33883a40e
fixed the Cohere example title (#3053)
- fixed the Cohere example title (bug in #3041, sorry for it)
- fixed the runhouse.ipynb file name inconsistency
2023-04-17 21:02:52 -07:00
leo-gan
5420a0e404
updated langchain/docs/modules/models/llms/integrations/ notebooks (#3041)
- Updated `langchain/docs/modules/models/llms/integrations/` notebooks:
added links to the original sites, the install information, etc.
- Added the `nlpcloud` notebook.
- Removed "Example" from Titles of some notebooks, so all notebook
titles are consistent.
2023-04-17 20:25:32 -07:00
Azam Iftikhar
2a89dc8c1c
Fixing factually incorrect example (#2810)
### https://github.com/hwchase17/langchain/issues/2802
It appears that Google's Flan model may not perform as well as other
models, so I used a simple example that gets a factually correct answer.
2023-04-13 08:42:39 -07:00
William FH
10ff1fda8e
Add Streaming for GPT4All (#2642)
- Adds support for callback handlers in GPT4All models (see the sketch below)
- Updates notebook and docs
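A hedged sketch of the streaming usage the updated notebook demonstrates, assuming the `callback_manager` API of langchain at the time; the model path is a placeholder:

```python
from langchain.llms import GPT4All
from langchain.callbacks.base import CallbackManager
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler

# Stream generated tokens to stdout as they are produced.
callback_manager = CallbackManager([StreamingStdOutCallbackHandler()])
llm = GPT4All(
    model="./models/gpt4all-lora-quantized.bin",  # placeholder path to local weights
    callback_manager=callback_manager,
    verbose=True,
)
llm("Explain what a callback handler does, in one sentence.")
```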
2023-04-09 17:54:26 -07:00
Jimmy Comfort
1dfb6a2a44
Update gpt4all example with model param (#2499)
I am pretty sure that the documentation here should point to `model`
instead of `model_path`, based on the parameter defined here:

https://github.com/hwchase17/langchain/blob/master/langchain/llms/gpt4all.py#L26
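A minimal sketch of the corrected usage; the weights path is a placeholder:

```python
from langchain.llms import GPT4All

# The constructor argument is `model` (a path to local weights), not `model_path`.
llm = GPT4All(model="./models/gpt4all-lora-quantized.bin")
```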
2023-04-06 12:38:26 -07:00
Harrison Chase
1f88b11c99
replicate cleanup (#2394) 2023-04-04 12:15:03 -07:00
Harrison Chase
de7afc52a9 cr 2023-04-04 07:23:53 -07:00
Harrison Chase
c7b083ab56
bump version to 131 (#2391) 2023-04-04 07:21:50 -07:00
Harrison Chase
0a9f04bad9
Harrison/gpt4all (#2366)
Co-authored-by: William FH <13333726+hinthornw@users.noreply.github.com>
Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2023-04-04 06:49:17 -07:00
Harrison Chase
d85f57ef9c
Harrison/llama (#2314)
Co-authored-by: RJ Adriaansen <adriaansen@eshcc.eur.nl>
2023-04-02 14:57:45 -07:00
LaloLalo1999
632c2b49da
Fixed the link to promptlayer dashboard (#2246)
Fixed a simple error in the PromptLayer LLM documentation where the
"PromptLayer dashboard" hyperlink pointed to "https://ww.promptlayer.com"
instead of "https://www.promptlayer.com". Solved issue #2245.
2023-03-31 16:16:23 -07:00
Charlie Holtz
f16c1fb6df
Add replicate take 2 (#2077)
This PR adds a Replicate integration to langchain.

It's an updated version of
https://github.com/hwchase17/langchain/pull/1993, revised to match the
latest replicate-python code:
https://github.com/replicate/replicate-python.
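A hedged usage sketch of the wrapper this PR adds, reusing the model string from the Replicate fix earlier in this log; it assumes a valid REPLICATE_API_TOKEN in the environment:

```python
import os

from langchain.llms import Replicate

os.environ["REPLICATE_API_TOKEN"] = "..."  # placeholder API token
llm = Replicate(
    model="replicate/flan-t5-xl:eec2f71c986dfa3b7a5d842d22e1130550f015720966bec48beaae059b19ef4c"
)
print(llm("Which planet is closest to the sun?"))
```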

---------

Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
Co-authored-by: Zeke Sikelianos <zeke@sikelianos.com>
2023-03-28 11:56:57 -07:00
Michael Gokhman
b5020c7d9c
docs: fix promptlayer link typo (#2005)
tiny typo, just stumbled upon it when reading the docs

Co-authored-by: Michael Gokhman <michaelg@ai21.com>
2023-03-27 23:35:54 -07:00
Harrison Chase
705431aecc
big docs refactor (#1978)
Co-authored-by: Ankush Gola <ankush.gola@gmail.com>
2023-03-26 19:49:46 -07:00