Commit Graph

29 Commits

Author SHA1 Message Date
Kunj-2206
1b3942ba74
Added BittensorLLM (#9250)
Description: Adding NIBittensorLLM via Validator Endpoint to langchain
llms
Tag maintainer: @Kunj-2206

Maintainer responsibilities:
    Models / Prompts: @hwchase17, @baskaryan
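
A minimal usage sketch (not from this PR), assuming the new LLM is exported as `NIBittensorLLM` from `langchain.llms` and accepts a `system_prompt` parameter:

```python
from langchain.llms import NIBittensorLLM

# Assumptions: the import path and the `system_prompt` parameter; routing to a
# validator endpoint is assumed to be handled by the integration itself.
llm = NIBittensorLLM(system_prompt="You are a helpful assistant.")
print(llm("What is Bittensor?"))
```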

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
2023-08-15 15:40:52 -07:00
Toshish Jawale
852722ea45
Improvements in Nebula LLM (#9226)
- Description: Added improvements to the Nebula LLM: auto-retry and support
for more generation parameters. A conversation no longer needs to be passed
in the LLM object. Examples are updated.
  - Issue: N/A
  - Dependencies: N/A
  - Tag maintainer: @baskaryan 
  - Twitter handle: symbldotai

---------

Co-authored-by: toshishjawale <toshish@symbl.ai>
2023-08-15 15:33:07 -07:00
fanyou-wbd
5e43768f61
docs: update LlamaCpp max_tokens args (#9238)
This PR updates documentation only: `max_length` should be `max_tokens`
according to the latest LlamaCpp API doc:
https://api.python.langchain.com/en/latest/llms/langchain.llms.llamacpp.LlamaCpp.html
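
A short hedged example of the corrected argument (the model path below is a placeholder):

```python
from langchain.llms import LlamaCpp

# Use `max_tokens` (not `max_length`) to bound the completion length.
llm = LlamaCpp(model_path="/path/to/ggml-model-q4_0.bin", max_tokens=256)
print(llm("Q: Name the planets in the solar system. A:"))
```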
2023-08-15 00:50:20 -07:00
Lance Martin
17ae2998e7
Update Ollama docs (#9220)
Based on discussion w/ team.
2023-08-14 13:56:16 -07:00
Emmanuel Gautier
f11e5442d6
docs: update LlamaCpp input args (#9173)
This PR only updates the LlamaCpp args documentation. The input arg has
been flattened.
2023-08-14 07:42:03 -07:00
Massimiliano Pronesti
d95eeaedbe
feat(llms): support vLLM's OpenAI-compatible server (#9179)
This PR aims at supporting [vLLM's OpenAI-compatible server
feature](https://vllm.readthedocs.io/en/latest/getting_started/quickstart.html#openai-compatible-server),
i.e. allowing vLLM's LLMs to be called as if they were OpenAI's.

I've also updated the related notebook with an example usage. At the
moment, vLLM only supports the `Completion` API.
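
A minimal sketch, assuming a vLLM OpenAI-compatible server is already running locally on port 8000 (the model name is illustrative):

```python
from langchain.llms import VLLMOpenAI

# Point the OpenAI-style client at the local vLLM server instead of OpenAI.
llm = VLLMOpenAI(
    openai_api_key="EMPTY",
    openai_api_base="http://localhost:8000/v1",
    model_name="tiiuae/falcon-7b",
)
print(llm("Rome is the capital of"))
```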
2023-08-13 23:03:05 -07:00
Michael Goin
621da3c164
Adds DeepSparse as an LLM (#9184)
Adds [DeepSparse](https://github.com/neuralmagic/deepsparse) as an LLM
backend. DeepSparse supports running various open-source sparsified
models hosted on [SparseZoo](https://sparsezoo.neuralmagic.com/) for
performance gains on CPUs.
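
A minimal usage sketch, assuming the integration is exported as `DeepSparse` with a `model` argument (the SparseZoo stub below is illustrative):

```python
from langchain.llms import DeepSparse

# Requires the `deepsparse` package; the model stub is fetched from SparseZoo.
llm = DeepSparse(
    model="zoo:nlg/text_generation/codegen_mono-350m/pytorch/huggingface/bigpython_bigquery_thepile/base-none"
)
print(llm("def fibonacci(n):"))
```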

Twitter handles: @mgoin_ @neuralmagic


---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
2023-08-13 22:35:58 -07:00
Chenyu Zhao
c0acbdca1b
Update Fireworks model names (#9085) 2023-08-10 19:23:42 -07:00
Piyush Jain
8eea46ed0e
Bedrock embeddings async methods (#9024)
## Description
This PR adds the `aembed_query` and `aembed_documents` async methods to
improve embeddings generation for large documents. The implementation uses
asyncio tasks and gather to achieve concurrency, as boto3 has no async
Bedrock API.
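
A minimal sketch of the new async methods, assuming AWS credentials for Bedrock are already configured:

```python
import asyncio

from langchain.embeddings import BedrockEmbeddings

embeddings = BedrockEmbeddings()

async def main() -> None:
    # Run a query embedding and a batch of document embeddings concurrently;
    # per the PR, these methods use asyncio tasks and gather internally since
    # boto3 has no async Bedrock API.
    query_vec, doc_vecs = await asyncio.gather(
        embeddings.aembed_query("What is Amazon Bedrock?"),
        embeddings.aembed_documents(["First document.", "Second document."]),
    )
    print(len(query_vec), len(doc_vecs))

asyncio.run(main())
```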

### Maintainers
@agola11 
@aarora79  

### Open questions
To avoid throttling from the Bedrock API, should there be an option to
limit the concurrency of the calls?
2023-08-10 14:21:03 -07:00
Blake (Yung Cher Ho)
8d351bfc20
Takeoff integration (#9045)
## Description:
This PR adds the Titan Takeoff Server to the available LLMs in
LangChain.

Titan Takeoff is an inference server created by
[TitanML](https://www.titanml.co/) that allows you to deploy large
language models locally on your hardware in a single command. Most
generative model architectures are included, such as Falcon, Llama 2,
GPT2, T5 and many more.

Read more about Titan Takeoff here:
-
[Blog](https://medium.com/@TitanML/introducing-titan-takeoff-6c30e55a8e1e)
- [Docs](https://docs.titanml.co/docs/titan-takeoff/getting-started)
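
A minimal usage sketch (not from this PR), assuming a Takeoff server is already running on the default port 8000; the parameter names below are assumptions:

```python
from langchain.llms import TitanTakeoff

# Assumed parameters: `base_url` for the local server and `generate_max_length`
# to cap the completion length.
llm = TitanTakeoff(base_url="http://localhost:8000", generate_max_length=128)
print(llm("What is the meaning of life?"))
```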

#### Testing
As Titan Takeoff runs locally on port 8000 by default, no network access
is needed. Responses are mocked for testing.

- [x] Make Lint
- [x] Make Format
- [x] Make Test

#### Dependencies
No new dependencies are introduced. However, users will need to install
the titan-iris package in their local environment and start the Titan
Takeoff inference server in order to use the Titan Takeoff
integration.

Thanks for your help and please let me know if you have any questions.

cc: @hwchase17 @baskaryan
2023-08-10 10:56:06 -07:00
David vonThenen
bf4a112aa6
Fixes to the Nebula LLM Integration (#8918)
This addresses some issues with introducing the Nebula LLM to LangChain
in this PR:
https://github.com/langchain-ai/langchain/pull/8876

This fixes the following:
- Removes `SYMBLAI` from variable names
- Fixes a bug with `Bearer` for the API key


Thanks again in advance for your help!
cc: @hwchase17, @baskaryan

---------

Co-authored-by: dvonthenen <david.vonthenen@gmail.com>
2023-08-08 10:04:43 -07:00
Jacob Lee
fa30a57034
Adds Ollama as an LLM (#8829)
Adds Ollama as an LLM. Ollama can run various open source models locally
e.g. Llama 2 and Vicuna, automatically configuring and GPU-optimizing
them.
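
A minimal usage sketch, assuming the Ollama server is running locally and the llama2 model has been pulled:

```python
from langchain.llms import Ollama

# Ollama serves the model locally; `model` selects which pulled model to use.
llm = Ollama(model="llama2")
print(llm("Why is the sky blue?"))
```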

@rlancemartin @hwchase17

---------

Co-authored-by: Lance Martin <lance@langchain.dev>
2023-08-07 21:19:22 -07:00
David vonThenen
40079d4936
Introduce Nebula LLM to LangChain (#8876)
## Description

This PR adds Nebula to the available LLMs in LangChain.

Nebula is an LLM focused on conversation understanding and enables users
to extract conversation insights from video, audio, text, and chat-based
conversations. These conversations can occur between any mix of human or
AI participants.

Examples of some questions you could ask Nebula from a given
conversation are:
- What could be the customer’s pain points based on the conversation?
- What sales opportunities can be identified from this conversation?
- What best practices can be derived from this conversation for future
customer interactions?

You can read more about Nebula here:

https://symbl.ai/blog/extract-insights-symbl-ai-generative-ai-recall-ai-meetings/
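
A minimal usage sketch (not from this PR); the constructor arguments and prompt format below are assumptions:

```python
from langchain.llms import Nebula

# Assumption: the API key is passed directly; the conversation is included in
# the prompt for illustration only.
llm = Nebula(nebula_api_key="<NEBULA_API_KEY>")
transcript = (
    "Customer: The export keeps failing on large files.\n"
    "Agent: Thanks for flagging that, let me take a look."
)
print(llm(f"{transcript}\n\nWhat could be the customer's pain points?"))
```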

#### Integration Test 

An integration test is added, but it requires network access: since Nebula
is fully managed like OpenAI, network access is needed to exercise it.

#### Linting

- [x] make lint
- [x] make test (TODO: there seems to be a failure in another, unrelated
test? Need to check on this.)
- [x] make format

### Dependencies

No new dependencies were introduced.

### Twitter handle

[@symbldotai](https://twitter.com/symbldotai)
[@dvonthenen](https://twitter.com/dvonthenen)


If you have any questions, please let me know.

cc: @hwchase17, @baskaryan

---------

Co-authored-by: dvonthenen <david.vonthenen@gmail.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
2023-08-07 13:15:26 -07:00
Massimiliano Pronesti
a616e19975
feat(llms): add support for vLLM (#8806)
Hello langchain maintainers, 
this PR aims at integrating
[vllm](https://vllm.readthedocs.io/en/latest/#) into langchain. This PR
closes #8729.

This feature depends on `vllm`, but other models supported here already
depend on packages not included in the pyproject.toml (e.g. `gpt4all`,
`text-generation`), so I assumed the same approach applied here.
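
A minimal usage sketch, assuming a GPU environment with the `vllm` package installed (the model name is illustrative):

```python
from langchain.llms import VLLM

llm = VLLM(
    model="mosaicml/mpt-7b",
    trust_remote_code=True,  # needed for some Hugging Face models
    max_new_tokens=128,
    temperature=0.8,
)
print(llm("What is the capital of France?"))
```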

@hwchase17, @baskaryan

---------

Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
2023-08-07 07:32:02 -07:00
Dayou Liu
91a0817e39
docs: llamacpp minor fixes (#8738)
- Description: minor updates on llama cpp doc
2023-08-04 14:19:43 -07:00
rjanardhan3
affaaea87b
Updates fireworks (#8765)
- Description: Updates to Fireworks documentation
- Issue: N/A
- Dependencies: N/A
- Tag maintainer: @rlancemartin

---------

Co-authored-by: Raj Janardhan <rajjanardhan@Rajs-Laptop.attlocal.net>
2023-08-04 10:32:22 -07:00
Bagatur
b2b71b0d35
Bagatur/eden llm (#8670)
Co-authored-by: RedhaWassim <rwasssim@gmail.com>
Co-authored-by: KyrianC <ckyrian@protonmail.com>
Co-authored-by: sam <melaine.samy@gmail.com>
2023-08-03 10:24:51 -07:00
rjanardhan3
68113348cc
Fireworks integration (#8322)
Description - Integrates Fireworks into LangChain LLMs so users can use
Fireworks models with LangChain, mainly for summarization.
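
A minimal usage sketch (not from this PR); the parameter name, model identifier, and API-key setup are assumptions:

```python
from langchain.llms import Fireworks

# Assumes a Fireworks API key is configured in the environment; `model_id`
# and the model name are illustrative guesses, not taken from this commit.
llm = Fireworks(model_id="fireworks-llama-v2-13b-chat")
print(llm("Summarize: LangChain is a framework for building LLM applications."))
```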

Issue - Not applicable
Dependencies - None
Tag maintainer - @rlancemartin

---------

Co-authored-by: Raj Janardhan <rajjanardhan@Rajs-Laptop.attlocal.net>
2023-08-01 21:17:26 -07:00
Leonid Kuligin
b4a126ae71
Updated docs on Vertex AI going GA (#8531)
#8074

Co-authored-by: Leonid Kuligin <kuligin@google.com>
2023-07-31 17:15:04 -07:00
Matthew DeGuzman
844eca98d5
Add LLaMa Formatter and AzureML Chat Endpoint (#8382)
## Description

Microsoft and Meta recently [announced their
collaboration](https://blogs.microsoft.com/blog/2023/07/18/microsoft-and-meta-expand-their-ai-partnership-with-llama-2-on-azure-and-windows/)
on LLaMa2. This PR extends the current LLM wrapper and introduces a new
Chat Model wrapper for AzureML to support LLaMa2.
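
A minimal sketch (not from this PR); the module path, class names, and endpoint details below are assumptions:

```python
from langchain.chat_models.azureml_endpoint import (
    AzureMLChatOnlineEndpoint,
    LlamaContentFormatter,
)
from langchain.schema import HumanMessage

# Placeholders for a deployed LLaMa2 endpoint on AzureML.
chat = AzureMLChatOnlineEndpoint(
    endpoint_url="https://<your-endpoint>.inference.ml.azure.com/score",
    endpoint_api_key="<your-api-key>",
    content_formatter=LlamaContentFormatter(),
)
print(chat([HumanMessage(content="Will the Collatz conjecture ever be solved?")]))
```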

## Dependencies

No dependencies added :)

## Twitter Handles

[@matthew_d13](https://twitter.com/matthew_d13)
[@prakhar_in](https://twitter.com/prakhar_in)

maintainers - @hwchase17, @baskaryan
2023-07-31 16:26:25 -07:00
William FH
b7c0eb9ecb
Wfh/ref links (#8454) 2023-07-29 08:44:32 -07:00
HeTaoPKU
d5884017a9
Add Minimax llm model to langchain (#7645)
- Description: Minimax is an AI startup from China; they recently released
their latest model and chat API, which is widely used in China. This PR adds
the Minimax LLM model to LangChain (a minimal usage sketch follows below).
- Tag maintainer: @hwchase17, @baskaryan
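
A minimal usage sketch (not from this PR); the credential parameter names are assumptions and the values are placeholders:

```python
from langchain.llms import Minimax

llm = Minimax(minimax_api_key="<API_KEY>", minimax_group_id="<GROUP_ID>")
print(llm("Hello, what can you do?"))
```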

---------

Co-authored-by: the <tao.he@hulu.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
2023-07-27 22:53:23 -07:00
Jiayi Ni
1efb9bae5f
FEAT: Integrate Xinference LLMs and Embeddings (#8171)
- [Xorbits
Inference(Xinference)](https://github.com/xorbitsai/inference) is a
powerful and versatile library designed to serve language, speech
recognition, and multimodal models. Xinference supports a variety of
GGML-compatible models including chatglm, whisper, and vicuna, and
utilizes heterogeneous hardware and a distributed architecture for
seamless cross-device and cross-server model deployment.
- This PR integrates Xinference models and Xinference embeddings into
LangChain.
- Dependencies: To install the dependencies for this integration, run
    
    `pip install "xinference[all]"`
    
- Example Usage:

To start a local instance of Xinference, run `xinference`.

To deploy Xinference in a distributed cluster, first start an Xinference
supervisor using `xinference-supervisor`:

`xinference-supervisor -H "${supervisor_host}"`

Then, start the Xinference workers using `xinference-worker` on each
server you want to run them on.

`xinference-worker -e "http://${supervisor_host}:9997"`

To use Xinference with LangChain, you also need to launch a model. You can
use the command line interface (CLI) to do so. For example: `xinference
launch -n vicuna-v1.3 -f ggmlv3 -q q4_0`. This launches a model named
vicuna-v1.3 with `model_format="ggmlv3"` and `quantization="q4_0"`. A
model UID is returned for you to use.

Now you can use Xinference with LangChain:

```python
from langchain.llms import Xinference

llm = Xinference(
    server_url="http://0.0.0.0:9997",  # suppose the supervisor_host is "0.0.0.0"
    model_uid="<MODEL_UID>",  # the model UID returned from launching the model
)

llm(
    prompt="Q: where can we visit in the capital of France? A:",
    generate_config={"max_tokens": 1024},
)
```

You can also use the RESTful client to launch a model:
```python
from xinference.client import RESTfulClient

client = RESTfulClient("http://0.0.0.0:9997")

model_uid = client.launch_model(model_name="vicuna-v1.3", model_size_in_billions=7, quantization="q4_0")
```

The following code block demonstrates how to use Xinference embeddings
with LangChain:
```python
from langchain.embeddings import XinferenceEmbeddings

xinference = XinferenceEmbeddings(
    server_url="http://0.0.0.0:9997",
    model_uid=model_uid,
)
```

```python
query_result = xinference.embed_query("This is a test query")
```

```python
doc_result = xinference.embed_documents(["text A", "text B"])
```

Xinference is still under rapid development. Feel free to [join our
Slack
community](https://xorbitsio.slack.com/join/shared_invite/zt-1z3zsm9ep-87yI9YZ_B79HLB2ccTq4WA)
to get the latest updates!

- Request for review: @hwchase17, @baskaryan
- Twitter handle: https://twitter.com/Xorbitsio

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
2023-07-27 21:23:19 -07:00
Karan V
a003a0baf6
fix(petals): allow running models that aren't Bloom (support for LLaMA and newer models) (#8356)
In this PR:

- Removed restricted model loading logic for Petals-Bloom
- Removed petals imports (DistributedBloomForCausalLM,
BloomTokenizerFast)
- Imported more generalized loaders instead
(AutoDistributedModelForCausalLM, AutoTokenizer); a minimal usage sketch
follows below
- Updated the Petals example notebook to allow for a successful
installation of Petals on Apple Silicon Macs

- Tag maintainer: @hwchase17, @baskaryan
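
A minimal usage sketch (not from this PR); the model name is illustrative and a Hugging Face API key may be required by the wrapper:

```python
from langchain.llms import Petals

# With the generalized loaders, non-Bloom models such as Llama 2 can be used.
llm = Petals(model_name="meta-llama/Llama-2-7b-hf")
print(llm("Once upon a time, "))
```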

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
2023-07-27 18:01:04 -07:00
William FH
01a9b06400
Add api cross ref linking (#8275)
Example of how it would show up in our python docs:


![image](https://github.com/langchain-ai/langchain/assets/13333726/0f0a88cc-ba4a-4778-bc47-118c66807f15)


Examples added to the reference docs:

https://api.python.langchain.com/en/wfh-api_crosslink/vectorstores/langchain.vectorstores.chroma.Chroma.html#langchain.vectorstores.chroma.Chroma


![image](https://github.com/langchain-ai/langchain/assets/13333726/dcd150de-cb56-4d42-b49a-a76a002a5a52)
2023-07-26 12:38:58 -07:00
William FH
0a16b3d84b
Update Integrations links (#8206) 2023-07-24 21:20:32 -07:00
Taqi Jaffri
8f158b72fc
Added stop sequence support to replicate (#8107)
Stop sequences are useful if you are doing long-running completions and
need to stop early rather than running for the full max_length... not
only does this save inference cost on Replicate, it is also much faster
if you are going to truncate the output later anyway.

Other LLMs (e.g. OpenAI) support stop sequences natively, but I didn't
see this for Replicate, so this PR adds it via their prediction cancel
method.
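
A minimal usage sketch; the model identifier is a placeholder and REPLICATE_API_TOKEN is assumed to be set in the environment:

```python
from langchain.llms import Replicate

llm = Replicate(model="<model-owner>/<model-name>:<version-hash>")
# Generation is cancelled early (via the prediction cancel method) once a stop
# sequence is produced.
print(llm("Write a haiku about autumn.\n\nHaiku:", stop=["\n\n"]))
```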

Housekeeping: I ran `make format` and `make lint`, no issues reported in
the files I touched.

I did update the replicate integration test and ran `poetry run pytest
tests/integration_tests/llms/test_replicate.py` successfully.

Finally, I am @tjaffri https://twitter.com/tjaffri for feature
announcement tweets... or if you could please tag @docugami
https://twitter.com/docugami we would really appreciate that :-)

Co-authored-by: Taqi Jaffri <tjaffri@docugami.com>
2023-07-24 17:34:13 -07:00
Liu Ming
24f889f2bc
Change with_history option to False for ChatGLM by default (#8076)
The ChatGLM LLM integration by default accumulates conversation
history (with_history=True) and sends it to the ChatGLM backend API, which
is not expected in most cases. This PR sets with_history=False by default;
users should explicitly set llm.with_history=True to turn this feature on.
Related PR: #8048 #7774
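
A minimal usage sketch; the endpoint URL is a placeholder for a locally hosted ChatGLM API server:

```python
from langchain.llms import ChatGLM

llm = ChatGLM(endpoint_url="http://127.0.0.1:8000")
llm.with_history = True  # opt back in to accumulating conversation history
print(llm("你好"))
```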

---------

Co-authored-by: mlot <limpo2000@gmail.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
2023-07-24 15:46:02 -07:00
Bagatur
c8c8635dc9
mv module integrations docs (#8101) 2023-07-23 23:23:16 -07:00