mirror of https://github.com/hwchase17/langchain
docs: add Shale Protocol integration guide (#5814)
This PR adds documentation for Shale Protocol's integration with LangChain. [Shale Protocol](https://shaleprotocol.com) provides forever-free, production-ready inference APIs to the open-source community. We have global data centers and plan to support all major open LLMs (estimated ~1,000 by 2025). The team consists of software and ML engineers, AI researchers, designers, and operators across North America and Asia. Combined, the team has 50+ years of experience in machine learning, cloud infrastructure, software engineering, and product development. Team members have worked at places like Google and Microsoft.

#### Who can review?

Tag maintainers/contributors who might be interested:
- @hwchase17
- @agola11

---------

Co-authored-by: Karen Sheng <46656667+karensheng@users.noreply.github.com>
parent
137da7e4b6
commit
f15763518a
# Shale Protocol

[Shale Protocol](https://shaleprotocol.com) provides production-ready inference APIs for open LLMs. It is a plug-and-play API, hosted on highly scalable GPU cloud infrastructure.

Our free tier supports up to 1K daily requests per API key, because we want to eliminate the cost barrier for anyone to start building genAI apps with LLMs.

With Shale Protocol, developers/researchers can create apps and explore the capabilities of open LLMs at no cost.

This page covers how the Shale-Serve API can be used with LangChain.

As of June 2023, the API supports Vicuna-13B by default. We are going to support more LLMs such as Falcon-40B in future releases.

## How to

### 1. Find the link to our Discord on https://shaleprotocol.com. Generate an API key through the "Shale Bot" on our Discord. No credit card is required and there are no free trials. It's a forever-free tier with a limit of 1K requests per day per API key.

### 2. Use https://shale.live/v1 as a drop-in replacement for the OpenAI API base URL

For example:

```python
from langchain.llms import OpenAI
from langchain import PromptTemplate, LLMChain
import os

# Point the OpenAI-compatible client at the Shale-Serve endpoint
os.environ['OPENAI_API_BASE'] = "https://shale.live/v1"
# Use the API key generated through the Shale Bot on Discord
os.environ['OPENAI_API_KEY'] = "ENTER YOUR API KEY"

llm = OpenAI()

template = """Question: {question}

Answer: Let's think step by step."""

prompt = PromptTemplate(template=template, input_variables=["question"])

llm_chain = LLMChain(prompt=prompt, llm=llm)

question = "What NFL team won the Super Bowl in the year Justin Bieber was born?"

llm_chain.run(question)
```
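
If you prefer not to set environment variables, the endpoint and key can also be passed to the `OpenAI` wrapper directly. This is a minimal sketch, assuming your LangChain version exposes `openai_api_base` and `openai_api_key` as constructor fields:

```python
from langchain.llms import OpenAI

# Assumption: this LangChain version accepts openai_api_base / openai_api_key
# as constructor fields; otherwise use the environment variables shown above.
llm = OpenAI(
    openai_api_base="https://shale.live/v1",
    openai_api_key="ENTER YOUR API KEY",  # key generated via the Shale Bot on Discord
)

print(llm("What NFL team won the Super Bowl in the year Justin Bieber was born?"))
```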