# Shale Protocol
[Shale Protocol](https://shaleprotocol.com) provides production-ready inference APIs for open LLMs. It's a plug-and-play API, hosted on highly scalable GPU cloud infrastructure.

Our free tier supports up to 1K daily requests per key; we want to eliminate the barrier for anyone to start building genAI apps with LLMs.

With Shale Protocol, developers and researchers can create apps and explore the capabilities of open LLMs at no cost.

This page covers how the Shale-Serve API can be used with LangChain.

As of June 2023, the API supports Vicuna-13B by default. Support for more LLMs, such as Falcon-40B, is planned for future releases.
## How to
### 1. Generate an API key

Find the link to our Discord at https://shaleprotocol.com and generate an API key through the "Shale Bot" on our Discord. No credit card is required and there are no free trials; it's a forever-free tier, limited to 1K requests per day per API key.
### 2. Use https://shale.live/v1 as a drop-in replacement for the OpenAI API
For example:

```python
from langchain.llms import OpenAI
from langchain import PromptTemplate, LLMChain

import os

# Point the OpenAI-compatible client at the Shale-Serve endpoint.
os.environ['OPENAI_API_BASE'] = "https://shale.live/v1"
os.environ['OPENAI_API_KEY'] = "ENTER YOUR API KEY"

llm = OpenAI()

template = """Question: {question}

Answer: Let's think step by step."""

prompt = PromptTemplate(template=template, input_variables=["question"])

llm_chain = LLMChain(prompt=prompt, llm=llm)

question = "What NFL team won the Super Bowl in the year Justin Bieber was born?"

llm_chain.run(question)
```
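Alternatively, the base URL and key can be passed to the wrapper directly rather than through environment variables. A minimal sketch, assuming the `openai_api_base` and `openai_api_key` constructor parameters of `langchain.llms.OpenAI`:

```python
from langchain.llms import OpenAI

# A minimal sketch: configure the Shale-Serve endpoint per instance instead of
# globally via environment variables (assumes the `openai_api_base` /
# `openai_api_key` parameters accepted by langchain.llms.OpenAI).
llm = OpenAI(
    openai_api_base="https://shale.live/v1",
    openai_api_key="ENTER YOUR API KEY",
)

print(llm("Name three open-source LLMs."))
```

This keeps the Shale configuration scoped to a single `OpenAI` instance, which is useful when the same process also talks to the official OpenAI endpoint.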