# Runhouse

This page covers how to use the Runhouse ecosystem within LangChain.
It is broken into three parts: installation and setup, self-hosted LLMs, and self-hosted embeddings.
## Installation and Setup

- Install the Python SDK with `pip install runhouse`
- If you'd like to use an on-demand cluster, check your cloud credentials with `sky check` (a minimal cluster setup is sketched after this list)
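
As an illustration only (not part of the original page), the sketch below shows one way an on-demand GPU cluster can be brought up with the Runhouse SDK once `sky check` passes; the cluster name, instance type, and spot setting are placeholder values.

```python
import runhouse as rh

# Provision (or reuse) an on-demand GPU cluster; the name, instance type,
# and use_spot flag below are example values, not requirements.
gpu = rh.cluster(name="rh-a10x", instance_type="A100:1", use_spot=False)
```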
## Self-hosted LLMs

For a basic self-hosted LLM, you can use the `SelfHostedHuggingFaceLLM` class. For more custom LLMs, you can use the `SelfHostedPipeline` parent class.
```python
from langchain.llms import SelfHostedPipeline, SelfHostedHuggingFaceLLM
```
For a more detailed walkthrough of self-hosted LLMs, see this notebook.
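
As a usage sketch under the same assumptions as the cluster example above, the snippet below loads a small Hugging Face model on the remote hardware and queries it; `model_id`, `model_kwargs`, and the prompt are illustrative choices rather than requirements.

```python
import runhouse as rh
from langchain.llms import SelfHostedHuggingFaceLLM

# Example hardware; any Runhouse cluster object can be passed as `hardware`.
gpu = rh.cluster(name="rh-a10x", instance_type="A100:1", use_spot=False)

# Run a small Hugging Face model on the remote cluster and query it.
llm = SelfHostedHuggingFaceLLM(model_id="gpt2", hardware=gpu, model_kwargs={"n": 1})
print(llm("What is the capital of France?"))
```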
## Self-hosted Embeddings

There are several ways to use self-hosted embeddings with LangChain via Runhouse.

For a basic self-hosted embedding from a Hugging Face Transformers model, you can use the `SelfHostedHuggingFaceEmbeddings` class.
```python
from langchain.embeddings import SelfHostedHuggingFaceEmbeddings
```
For a more detailed walkthrough of self-hosted embeddings, see this notebook.
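
As with the LLM example, here is a hedged usage sketch; the cluster definition and the query text are placeholders, and the embedding model used is whatever default `SelfHostedHuggingFaceEmbeddings` is configured with.

```python
import runhouse as rh
from langchain.embeddings import SelfHostedHuggingFaceEmbeddings

# Example hardware; reuse any existing Runhouse cluster here.
gpu = rh.cluster(name="rh-a10x", instance_type="A100:1", use_spot=False)

# Embed a query with a Hugging Face model running on the remote cluster.
embeddings = SelfHostedHuggingFaceEmbeddings(hardware=gpu)
query_result = embeddings.embed_query("This is a test document.")
```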