Commit Graph

65 Commits

Author SHA1 Message Date
Harrison Chase
5442d2b1fa
Harrison/stop importing from init (#10690) 2023-09-16 17:22:48 -07:00
Ikko Eltociear Ashimine
f6e5632c84
Fix typo in google_vertex_ai_palm.ipynb (#10631)
seperate -> separate
2023-09-15 12:54:06 -07:00
Bagatur
2ae568dcf5
Separate platforms integrations docs (#10609) 2023-09-15 12:18:57 -07:00
Jeffrey Morgan
6d3670c7d8
Use OllamaEmbeddings in ollama examples (#10616)
This changes the Ollama examples to use `OllamaEmbeddings` for generating
embeddings.
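A minimal sketch of the updated usage (the model name and a locally running Ollama server are assumptions):

```python
from langchain.embeddings import OllamaEmbeddings

# Assumes an Ollama server is running locally with the llama2 model pulled.
embeddings = OllamaEmbeddings(model="llama2")

query_vector = embeddings.embed_query("What is LangChain?")
doc_vectors = embeddings.embed_documents(["First document", "Second document"])
```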
2023-09-15 10:05:27 -07:00
Ackermann Yuriy
5e50b89164
Added embeddings support for ollama (#10124)
- Description: Added support for Ollama embeddings
  - Dependencies: N/A
  - Twitter handle: @herrjemand

cc  https://github.com/jmorganca/ollama/issues/436
2023-09-14 17:42:39 -07:00
Bagatur
48a4efc51a
Bagatur/update replicate nb (#10605) 2023-09-14 15:21:42 -07:00
Bagatur
77a165e0d9
fix replicate output type (#10598) 2023-09-14 14:02:01 -07:00
stonekim
adabdfdfc7
Add Baidu Qianfan endpoint for LLM (#10496)
- Description:
* Baidu AI Cloud's [Qianfan
Platform](https://cloud.baidu.com/doc/WENXINWORKSHOP/index.html) is an
all-in-one platform for large model development and service deployment,
catering to enterprise developers in China. Qianfan Platform offers a
wide range of resources, including the Wenxin Yiyan model (ERNIE-Bot)
and various third-party open-source models.
- Issue: none
- Dependencies: 
    * qianfan
- Tag maintainer: @baskaryan
- Twitter handle:
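A minimal usage sketch, assuming the integration is exposed as `QianfanLLMEndpoint` and reads the Qianfan access/secret keys from the environment (credential values are placeholders):

```python
import os

from langchain.llms import QianfanLLMEndpoint

# Credential values are placeholders.
os.environ["QIANFAN_AK"] = "your-access-key"
os.environ["QIANFAN_SK"] = "your-secret-key"

llm = QianfanLLMEndpoint(model="ERNIE-Bot")
print(llm("Tell me about the Qianfan Platform."))
```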

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
2023-09-13 16:23:49 -07:00
Rajesh Kumar
737b75d278
Latest version of HazyResearch/manifest doesn't support accessing "client" directly (#10389)
**Description:** 
The latest version of HazyResearch/manifest doesn't support accessing
the "client" directly. The latest version supports connection pools and
a client has to be requested from the client pool.
**Issue:**
No matching issue was found
**Dependencies:** 
The manifest.ipynb file in docs/extras/integrations/llms needs to be
updated
**Twitter handle:** 
@hrk_cbe
2023-09-11 14:22:53 -07:00
eryk-dsai
675d57df50
New LLM integration: Ctranslate2 (#10400)
## Description:

I've integrated CTranslate2 with LangChain. CTranslate2 is an increasingly
popular library for efficient inference with Transformer models that
compares favorably to alternatives such as HF Text Generation Inference
and vLLM in
[benchmarks](https://hamel.dev/notes/llm/inference/03_inference.html).
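A minimal sketch of the new integration; paths and the tokenizer name are placeholders:

```python
from langchain.llms import CTranslate2

# model_path points at a directory produced by the CTranslate2 converter;
# tokenizer_name is the matching Hugging Face tokenizer.
llm = CTranslate2(
    model_path="./llama-2-7b-ct2",
    tokenizer_name="meta-llama/Llama-2-7b-hf",
)
print(llm("He presented me with plausible evidence for the existence of unicorns:"))
```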
2023-09-09 13:19:00 -07:00
Nik
49341483da
Update Banana.dev docs to latest correct usage (#10183)
- Description: this PR updates all Banana.dev-related docs to match the
latest client usage. The code in the docs before this PR was out of
date and would never run.
- Issue: [#6404](https://github.com/langchain-ai/langchain/issues/6404)
- Dependencies: -
- Tag maintainer:  
- Twitter handle: [BananaDev_ ](https://twitter.com/BananaDev_ )
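A hedged sketch of the updated usage; the `model_url_slug` field follows the newer client flow described here, and all values are placeholders:

```python
import os

from langchain.llms import Banana

os.environ["BANANA_API_KEY"] = "YOUR_API_KEY"  # placeholder

# model_key and model_url_slug identify the deployed model;
# both values here are placeholders.
llm = Banana(model_key="YOUR_MODEL_KEY", model_url_slug="your-model-slug")
print(llm("What is a banana?"))
```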
2023-09-06 07:46:17 -07:00
Bagatur
a94dc6ee44
model garden nit (#10194) 2023-09-04 11:42:35 -07:00
Leonid Ganeline
a52fe9528e
docs: fixed title in Bittensor example (#9893)
Fixed title in the `Bittensor` example. The old title brakes the sorted
order of items in the navbar.
Added some formatting.
2023-09-03 15:10:42 -07:00
Blake (Yung Cher Ho)
f4bed8a04c
Takeoff baseurl support (#10091)
## Description
This PR introduces a minor change to the TitanTakeoff integration.
Instead of specifying a port on localhost, users can now specify a base
URL, which lets them use the integration when Titan Takeoff is deployed
externally (not on localhost). This removes the hardcoded reference to
"http://localhost:{port}".
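A minimal sketch of the new option (the URL is a placeholder):

```python
from langchain.llms import TitanTakeoff

# base_url replaces the previously hardcoded http://localhost:{port}.
llm = TitanTakeoff(base_url="http://my-takeoff-host:8000")
print(llm("What is the weather in London in August?"))
```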

### Info about Titan Takeoff
Titan Takeoff is an inference server created by
[TitanML](https://www.titanml.co/) that allows you to deploy large
language models locally on your hardware in a single command. Most
generative model architectures are included, such as Falcon, Llama 2,
GPT2, T5 and many more.

Read more about Titan Takeoff here:
-
[Blog](https://medium.com/@TitanML/introducing-titan-takeoff-6c30e55a8e1e)
- [Docs](https://docs.titanml.co/docs/titan-takeoff/getting-started)

### Dependencies
No new dependencies are introduced. However, users will need to install
the titan-iris package in their local environment and start the Titan
Takeoff inferencing server in order to use the Titan Takeoff
integration.

Thanks for your help and please let me know if you have any questions.
cc: @hwchase17 @baskaryan

---------

Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
2023-09-03 14:45:59 -07:00
Leonid Kuligin
30239b3025
added support for inference from Model Garden (#9367)
#8850
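A minimal usage sketch, assuming the integration is exposed as `VertexAIModelGarden`; the project and endpoint values are placeholders:

```python
from langchain.llms import VertexAIModelGarden

llm = VertexAIModelGarden(
    project="my-gcp-project",   # placeholder GCP project
    endpoint_id="1234567890",   # placeholder deployed-endpoint ID
)
print(llm("What is Model Garden?"))
```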

---------

Co-authored-by: Leonid Kuligin <kuligin@google.com>
2023-09-01 15:58:21 -07:00
Leonid Ganeline
54a8df87b9
📖 docs: fixed integration/llms navbar (#9277)
Fixed navbar:
- renamed several files, so ToC is sorted correctly
- made ToC items consistent: formatted several Titles
- added several links
- reformatted several docs to a consistent format
- renamed several files (removed `_example` suffix)
- added renamed files to the `docs/docs_skeleton/vercel.json`
2023-09-01 15:30:37 -07:00
Bagatur
b485c3048b
rm base64 images from docs (#10110)
Base64 images were causing problems when indexing docs, and notebook images don't render after markdown conversion anyway
2023-09-01 15:15:12 -07:00
KyrianC
491089754d
EdenAI LLM update. Add models name option (#8963)
This PR follows the **Eden AI (LLM + embeddings) integration**. #8633 

We added an optional parameter to choose different AI models for
providers (like 'text-bison' for provider 'google', 'text-davinci-003'
for provider 'openai', etc.).

Usage:

```python
llm = EdenAI(
    feature="text",
    provider="google",
    params={
        "model": "text-bison",  # new
        "temperature": 0.2,
        "max_tokens": 250,
    },
)

```

You can also change the provider + model after initialization
```python
llm = EdenAI(
    feature="text",
    provider="google",
    params={
        "temperature": 0.2,
        "max_tokens": 250,
    },
)

prompt = """
hi 
"""

llm(prompt, providers='openai', model='text-davinci-003')  # change provider & model
```

The Jupyter notebook has been updated with an example as well.


Ping: @hwchase17, @baskaryan

---------

Co-authored-by: RedhaWassim <rwasssim@gmail.com>
Co-authored-by: sam <melaine.samy@gmail.com>
2023-09-01 12:11:33 -07:00
Zizhong Zhang
641b71e2cd
refactor: rename to OpaquePrompts (#10013)
Renamed to OpaquePrompts

cc @baskaryan Thanks in advance!
2023-08-31 12:21:24 -07:00
Bagatur
8f199239b8
docs: llms/google vertex AI example update (#9960)
Updated title, description, added sections.
2023-08-29 15:07:18 -07:00
leo-gan
210de0c66b Updated title, description, added sections 2023-08-29 14:31:33 -07:00
Cameron Hutchison
bcc3463ff4
docs: Azure AD Authentication for Azure OpenAI (#9951)
# Description
This PR adds additional documentation on how to use Azure Active
Directory to authenticate to an OpenAI service within Azure. This method
of authentication allows organizations with more complex security
requirements to use Azure OpenAI.
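A sketch of the approach, assuming the `azure-identity` package is used to fetch the AD token; resource, deployment, and API version are placeholders:

```python
from azure.identity import DefaultAzureCredential
from langchain.llms import AzureOpenAI

# Fetch an Azure AD token for the Cognitive Services scope.
credential = DefaultAzureCredential()
token = credential.get_token("https://cognitiveservices.azure.com/.default")

llm = AzureOpenAI(
    deployment_name="my-deployment",                       # placeholder
    openai_api_type="azure_ad",
    openai_api_key=token.token,
    openai_api_base="https://my-resource.openai.azure.com",  # placeholder
    openai_api_version="2023-05-15",
)
```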

# Issue
N/A

# Dependencies
N/A

# Twitter
https://twitter.com/CamAHutchison
2023-08-29 14:29:27 -07:00
Leonid Ganeline
b1bffea9c7
docs: fix for title of llm_caching nb (#9891)
Fixed title for the `extras/integrations/llms/llm_caching.ipynb`.
Existing title breaks the sorted order of items in the navbar.
Updated some formatting.
2023-08-28 18:34:04 -07:00
Lance Martin
8393ba9dab
Add instructions for GGUF (#9874)
llama.cpp migrated to GGUF model format, and new releases (e.g.,
[here](https://huggingface.co/TheBloke)) now use GGUF.
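A minimal sketch of loading a GGUF model (the file name is a placeholder):

```python
from langchain.llms import LlamaCpp

# model_path must point at a GGUF-format file after the migration.
llm = LlamaCpp(model_path="./llama-2-7b.Q4_K_M.gguf", n_ctx=2048)
print(llm("Name the planets in the solar system."))
```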
2023-08-28 12:56:46 -07:00
eryk-dsai
7f5713b80a
feat: grammar-based sampling in llama-cpp (#9712)
## Description 

The following PR enables the [grammar-based
sampling](https://github.com/ggerganov/llama.cpp/tree/master/grammars)
in llama-cpp LLM.

In short, loading a file with a formal grammar definition will constrain
model outputs. For instance, one can force the model to generate valid
JSON or only Python lists.
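A sketch of the feature, assuming it is surfaced through a `grammar_path` argument; both paths are placeholders:

```python
from langchain.llms import LlamaCpp

# json.gbnf is one of the grammars shipped with llama.cpp; loading it
# constrains the model to emit valid JSON.
llm = LlamaCpp(
    model_path="./llama-2-7b.Q4_K_M.gguf",
    grammar_path="./grammars/json.gbnf",
)
print(llm("Describe a person in JSON with name and age:"))
```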

In the follow-up PR we will add:
* docs with some description why it is cool and how it works
* maybe some code sample for some task such as in llama repo

---------

Co-authored-by: Lance Martin <lance@langchain.dev>
Co-authored-by: Bagatur <baskaryan@gmail.com>
2023-08-28 09:52:55 -07:00
Margaret Qian
30151c99c7
Update Mosaic endpoint input/output api (#7391)
As noted in prior PRs (https://github.com/hwchase17/langchain/pull/6060,
https://github.com/hwchase17/langchain/pull/7348), the input/output
format has changed a few times as we've stabilized our inference API.
This PR updates the API to the latest stable version as indicated in our
docs: https://docs.mosaicml.com/en/latest/inference.html

The input format looks like this:

`{"inputs": [<prompt>]}`

The output format looks like this:

`{"outputs": [<output_text>]}`
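An illustrative raw-HTTP sketch of the two formats; the endpoint URL and auth header are placeholders:

```python
import requests

resp = requests.post(
    "https://models.hosted-on.mosaicml.hosting/mpt-7b-instruct/v1/predict",  # placeholder
    headers={"Authorization": "MY_API_KEY"},  # placeholder
    json={"inputs": ["What is one good reason to visit Boston?"]},
)
print(resp.json()["outputs"][0])
```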
---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
2023-08-24 22:13:17 -07:00
Lakshay Kansal
a8c916955f
Updates to Nomic Atlas and GPT4All documentation (#9414)
Description: Updates for Nomic AI Atlas and GPT4All integrations
documentation.

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
2023-08-23 17:49:44 -07:00
Zizhong Zhang
8a03836160
docs: fix PromptGuard docs (#9659)
Fix PromptGuard docs. Noticed several trivial issues in the docs when
integrating the new class.
cc @baskaryan
2023-08-23 10:04:53 -07:00
Zizhong Zhang
00eff8c4a7
feat: Add PromptGuard integration (#9481)
Add PromptGuard integration
-------
There are two approaches to integrate PromptGuard with a LangChain
application.

1. PromptGuardLLMWrapper (sketched below)
2. functions that can be used in LangChain expressions.
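A minimal sketch of the wrapper approach, assuming the wrapper is exposed as `PromptGuard` in `langchain.llms` (it was renamed `OpaquePrompts` in a later PR) and wraps a base LLM:

```python
from langchain.llms import OpenAI, PromptGuard

# PromptGuard sanitizes the prompt before passing it to the base LLM,
# then de-sanitizes the response. Requires the promptguard package
# and an OpenAI API key in the environment.
llm = PromptGuard(base_llm=OpenAI())
print(llm("Summarize this conversation: ..."))
```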

-----
- Dependencies
`promptguard` python package, which is a runtime requirement if you
try out the demo.

- @baskaryan @hwchase17 Thanks for the ideas and suggestions along the
development process.

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
2023-08-21 14:59:36 -07:00
Utku Ege Tuluk
bb4f7936f9
feat(llms): add streaming support to textgen (#9295)
- Description: Added streaming support to the textgen component in the
llms module.
  - Dependencies: websocket-client = "^1.6.1"
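A minimal streaming sketch; the server URL is a placeholder for a local text-generation-webui instance:

```python
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler
from langchain.llms import TextGen

# streaming=True uses the websocket API (hence the websocket-client dependency).
llm = TextGen(
    model_url="ws://localhost:5005",
    streaming=True,
    callbacks=[StreamingStdOutCallbackHandler()],
)
llm("Write a haiku about rivers.")
```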
2023-08-21 07:39:14 -07:00
Leonid Ganeline
fdbeb52756
Qwen model example (#9516)
Added an example for the `Qwen-7B` model on `HuggingFaceHub` 🤗
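A minimal sketch of the example, assuming a `HUGGINGFACEHUB_API_TOKEN` in the environment; the generation kwargs are illustrative:

```python
from langchain.llms import HuggingFaceHub

llm = HuggingFaceHub(
    repo_id="Qwen/Qwen-7B",
    model_kwargs={"temperature": 0.5, "max_length": 128},  # illustrative
)
print(llm("Explain the transformer architecture briefly."))
```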
2023-08-20 17:21:45 -07:00
Asif Ahmad
08feed3332
Changed the NIBittensorLLM API URL to the correct one (#9419)
Changed https://api.neuralinterent.ai/ to https://api.neuralinternet.ai/,
which is the valid URL for the NIBittensorLLM API.
2023-08-20 16:25:19 -07:00
bsenst
a956b69720
fix typo in huggingface_hub.ipynb (#9499) 2023-08-19 14:50:05 -07:00
Leonid Ganeline
99e5eaa9b1
InternLM example (#9465)
Added an `InternLM` model example to the HuggingFace Hub notebook
2023-08-18 11:17:17 -07:00
William FH
d4f790fd40
Fix imports in notebook (#9458) 2023-08-18 10:08:47 -07:00
Angel Luis
2e8733cf54
Fix typo in huggingface_textgen_inference.ipynb (#9313)
Replaced the incorrect `stream` parameter with `streaming` in the
Integrations docs.
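The corrected parameter in context (the server URL is a placeholder):

```python
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler
from langchain.llms import HuggingFaceTextGenInference

# Note the parameter is `streaming`, not `stream`.
llm = HuggingFaceTextGenInference(
    inference_server_url="http://localhost:8010/",
    streaming=True,
    callbacks=[StreamingStdOutCallbackHandler()],
)
llm("What did foo say about bar?")
```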
2023-08-16 16:22:21 -07:00
Kunj-2206
1b3942ba74
Added BittensorLLM (#9250)
Description: Adding NIBittensorLLM via Validator Endpoint to langchain
llms
Tag maintainer: @Kunj-2206

Maintainer responsibilities:
    Models / Prompts: @hwchase17, @baskaryan
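A minimal usage sketch, assuming the class is exposed as `NIBittensorLLM` with an optional system prompt:

```python
from langchain.llms import NIBittensorLLM

# system_prompt is optional; the value here is illustrative.
llm = NIBittensorLLM(system_prompt="You are a helpful assistant.")
print(llm("What is Bittensor?"))
```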

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
2023-08-15 15:40:52 -07:00
Toshish Jawale
852722ea45
Improvements in Nebula LLM (#9226)
- Description: Added improvements to the Nebula LLM: auto-retry and
support for more generation parameters. A conversation is no longer
required to be passed in the LLM object. Examples are updated.
  - Issue: N/A
  - Dependencies: N/A
  - Tag maintainer: @baskaryan 
  - Twitter handle: symbldotai

---------

Co-authored-by: toshishjawale <toshish@symbl.ai>
2023-08-15 15:33:07 -07:00
fanyou-wbd
5e43768f61
docs: update LlamaCpp max_tokens args (#9238)
This PR updates documentation only: `max_length` should be `max_tokens`,
according to the latest LlamaCpp API doc:
https://api.python.langchain.com/en/latest/llms/langchain.llms.llamacpp.LlamaCpp.html
2023-08-15 00:50:20 -07:00
Lance Martin
17ae2998e7
Update Ollama docs (#9220)
Based on discussion w/ team.
2023-08-14 13:56:16 -07:00
Emmanuel Gautier
f11e5442d6
docs: update LlamaCpp input args (#9173)
This PR only updates the LlamaCpp args documentation. The input arg has
been flattened.
2023-08-14 07:42:03 -07:00
Massimiliano Pronesti
d95eeaedbe
feat(llms): support vLLM's OpenAI-compatible server (#9179)
This PR aims at supporting [vLLM's OpenAI-compatible server
feature](https://vllm.readthedocs.io/en/latest/getting_started/quickstart.html#openai-compatible-server),
i.e. allowing vLLM's LLMs to be called as if they were OpenAI's.

I've also updated the related notebook, providing an example usage. At
the moment, vLLM only supports the `Completion` API.
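A minimal sketch of calling the server through the new class; the URL and model name are placeholders:

```python
from langchain.llms import VLLMOpenAI

# Points the OpenAI-style client at a local vLLM server.
llm = VLLMOpenAI(
    openai_api_key="EMPTY",
    openai_api_base="http://localhost:8000/v1",
    model_name="tiiuae/falcon-7b",
)
print(llm("Rome is"))
```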
2023-08-13 23:03:05 -07:00
Michael Goin
621da3c164
Adds DeepSparse as an LLM (#9184)
Adds [DeepSparse](https://github.com/neuralmagic/deepsparse) as an LLM
backend. DeepSparse supports running various open-source sparsified
models hosted on [SparseZoo](https://sparsezoo.neuralmagic.com/) for
performance gains on CPUs.
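A minimal usage sketch (the SparseZoo model stub is illustrative):

```python
from langchain.llms import DeepSparse

# The model stub refers to a sparsified model hosted on SparseZoo.
llm = DeepSparse(
    model="zoo:nlg/text_generation/codegen_mono-350m/pytorch/"
    "huggingface/bigpython_bigquery_thepile/base-none"
)
print(llm("def fib(n):"))
```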

Twitter handles: @mgoin_ @neuralmagic


---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
2023-08-13 22:35:58 -07:00
Chenyu Zhao
c0acbdca1b
Update Fireworks model names (#9085) 2023-08-10 19:23:42 -07:00
Piyush Jain
8eea46ed0e
Bedrock embeddings async methods (#9024)
## Description
This PR adds the `aembed_query` and `aembed_documents` async methods to
improve embeddings generation for large documents. The implementation
uses asyncio tasks and gather to achieve concurrency, as boto3 has no
async Bedrock API.
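A minimal sketch of the new async methods (assumes AWS credentials and region are configured):

```python
import asyncio

from langchain.embeddings import BedrockEmbeddings

embeddings = BedrockEmbeddings()

async def main() -> None:
    # The async counterparts added in this PR.
    query_vec = await embeddings.aembed_query("What is Amazon Bedrock?")
    doc_vecs = await embeddings.aembed_documents(["first doc", "second doc"])
    print(len(query_vec), len(doc_vecs))

asyncio.run(main())
```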

### Maintainers
@agola11 
@aarora79  

### Open questions
To avoid throttling from the Bedrock API, should there be an option to
limit the concurrency of the calls?
2023-08-10 14:21:03 -07:00
Blake (Yung Cher Ho)
8d351bfc20
Takeoff integration (#9045)
## Description:
This PR adds the Titan Takeoff Server to the available LLMs in
LangChain.

Titan Takeoff is an inference server created by
[TitanML](https://www.titanml.co/) that allows you to deploy large
language models locally on your hardware in a single command. Most
generative model architectures are included, such as Falcon, Llama 2,
GPT2, T5 and many more.

Read more about Titan Takeoff here:
-
[Blog](https://medium.com/@TitanML/introducing-titan-takeoff-6c30e55a8e1e)
- [Docs](https://docs.titanml.co/docs/titan-takeoff/getting-started)

#### Testing
As Titan Takeoff runs locally on port 8000 by default, no network access
is needed. Responses are mocked for testing.

- [x] Make Lint
- [x] Make Format
- [x] Make Test

#### Dependencies
No new dependencies are introduced. However, users will need to install
the titan-iris package in their local environment and start the Titan
Takeoff inferencing server in order to use the Titan Takeoff
integration.

Thanks for your help and please let me know if you have any questions.

cc: @hwchase17 @baskaryan
2023-08-10 10:56:06 -07:00
David vonThenen
bf4a112aa6
Fixes to the Nebula LLM Integration (#8918)
This addresses some issues with introducing the Nebula LLM to LangChain
in this PR:
https://github.com/langchain-ai/langchain/pull/8876

This fixes the following:
- Removes `SYMBLAI` from variable names
- Fixes bug with `Bearer` for the API KEY


Thanks again in advance for your help!
cc: @hwchase17, @baskaryan

---------

Co-authored-by: dvonthenen <david.vonthenen@gmail.com>
2023-08-08 10:04:43 -07:00
Jacob Lee
fa30a57034
Adds Ollama as an LLM (#8829)
Adds Ollama as an LLM. Ollama can run various open-source models locally,
e.g. Llama 2 and Vicuna, automatically configuring and GPU-optimizing
them.
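A minimal usage sketch (assumes a local Ollama server with the model pulled):

```python
from langchain.llms import Ollama

llm = Ollama(model="llama2")
print(llm("Why is the sky blue?"))
```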

@rlancemartin @hwchase17

---------

Co-authored-by: Lance Martin <lance@langchain.dev>
2023-08-07 21:19:22 -07:00
David vonThenen
40079d4936
Introduce Nebula LLM to LangChain (#8876)
## Description

This PR adds Nebula to the available LLMs in LangChain.

Nebula is an LLM focused on conversation understanding and enables users
to extract conversation insights from video, audio, text, and chat-based
conversations. These conversations can occur between any mix of human or
AI participants.
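A minimal usage sketch; the key parameter name is an assumption based on later fixes to this integration, and the value is a placeholder:

```python
from langchain.llms import Nebula

# API key value is a placeholder.
llm = Nebula(nebula_api_key="<NEBULA_API_KEY>")
```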

Examples of some questions you could ask Nebula from a given
conversation are:
- What could be the customer’s pain points based on the conversation?
- What sales opportunities can be identified from this conversation?
- What best practices can be derived from this conversation for future
customer interactions?

You can read more about Nebula here:

https://symbl.ai/blog/extract-insights-symbl-ai-generative-ai-recall-ai-meetings/

#### Integration Test 

An integration test is added, but it requires network access: since
Nebula is fully managed like OpenAI, network access is required to
exercise it.

#### Linting

- [x] make lint
- [x] make test (TODO: there seems to be a failure in another
non-related test??? Need to check on this.)
- [x] make format

### Dependencies

No new dependencies were introduced.

### Twitter handle

[@symbldotai](https://twitter.com/symbldotai)
[@dvonthenen](https://twitter.com/dvonthenen)


If you have any questions, please let me know.

cc: @hwchase17, @baskaryan

---------

Co-authored-by: dvonthenen <david.vonthenen@gmail.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
2023-08-07 13:15:26 -07:00
Massimiliano Pronesti
a616e19975
feat(llms): add support for vLLM (#8806)
Hello langchain maintainers, 
this PR aims at integrating
[vllm](https://vllm.readthedocs.io/en/latest/#) into langchain. This PR
closes #8729.
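A minimal usage sketch (the model name is illustrative; `vllm` must be installed separately):

```python
from langchain.llms import VLLM

# trust_remote_code is needed for models like MPT that ship custom code.
llm = VLLM(model="mosaicml/mpt-7b", trust_remote_code=True, max_new_tokens=128)
print(llm("What is the capital of France?"))
```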

This feature clearly depends on `vllm`, but I've seen other models
supported here depend on packages that are not included in
pyproject.toml (e.g. `gpt4all`, `text-generation`), so I assumed the
same applies here.

@hwchase17, @baskaryan

---------

Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
2023-08-07 07:32:02 -07:00