# Docs: compound ecosystem and integrations

**Problem statement:** There is a big overlap between the References/Integrations and Ecosystem/LangChain Ecosystem pages. This confuses users, and it creates a situation where a new integration is added to only one of these pages, which creates even more confusion.

- Removed the References/Integrations page (all its information will be moved into the individual integration pages in the next PR).
- Renamed Ecosystem/LangChain Ecosystem to Integrations/Integrations.

I like the Ecosystem term. It is more generic and semantically richer than the Integration term, but it mentally overloads users; the `integration` term is more concrete.

UPDATE: after discussion, Ecosystem is the term, and Ecosystem/Integrations is the page (in place of Ecosystem/LangChain Ecosystem).

As a result, a user gets a single place to start with an individual integration.
# Replicate
This page covers how to run models on Replicate within LangChain.
## Installation and Setup
- Create a Replicate account. Get your API key and set it as an environment variable (`REPLICATE_API_TOKEN`).
- Install the Replicate Python client with `pip install replicate`.
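
As a quick sketch of that setup (assuming you prefer to set the token from Python rather than exporting it in your shell; `getpass` is only used here to avoid hard-coding the key):

```python
import os
from getpass import getpass

# Set the Replicate API token for this process; exporting REPLICATE_API_TOKEN
# in your shell works just as well.
os.environ["REPLICATE_API_TOKEN"] = getpass("REPLICATE_API_TOKEN: ")
```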
## Calling a model
Find a model on the Replicate explore page, and then paste in the model name and version in this format: `owner-name/model-name:version`

For example, for the Dolly model, click on the API tab. The model name/version would be: `replicate/dolly-v2-12b:ef0e1aefc61f8e096ebe4db6b2bacc297daf2ef6899f0f7e001ec445893500e5`
Only the `model` param is required, but any other model parameters can also be passed in with the format `input={model_param: value, ...}`.
For example, if we were running Stable Diffusion and wanted to change the image dimensions:

```python
from langchain.llms import Replicate

Replicate(
    model="stability-ai/stable-diffusion:db21e45d3f7023abc2a46ee38a23973f6dce16bb082a930b0c49861f96d1e5bf",
    input={"image_dimensions": "512x512"},
)
```
Note that only the first output of a model will be returned. From here, we can initialize our model:
```python
llm = Replicate(model="replicate/dolly-v2-12b:ef0e1aefc61f8e096ebe4db6b2bacc297daf2ef6899f0f7e001ec445893500e5")
```
And run it:
prompt = """
Answer the following yes/no question by reasoning step by step.
Can a dog drive a car?
"""
llm(prompt)
We can call any Replicate model (not just LLMs) using this syntax. For example, we can call Stable Diffusion:
```python
text2image = Replicate(
    model="stability-ai/stable-diffusion:db21e45d3f7023abc2a46ee38a23973f6dce16bb082a930b0c49861f96d1e5bf",
    input={"image_dimensions": "512x512"},
)

image_output = text2image("A cat riding a motorcycle by Picasso")
```
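
For image models, the returned output is typically a URL pointing to the generated image rather than the image bytes themselves. As a rough sketch (assuming that URL format, and that `requests` and `Pillow` are installed), you could download and view the result like this:

```python
import requests
from io import BytesIO
from PIL import Image

# image_output is assumed to be a URL string pointing to the generated image.
response = requests.get(image_output)
response.raise_for_status()

img = Image.open(BytesIO(response.content))
img.show()
```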