{
 "cells": [
  {
   "cell_type": "markdown",
   "id": "026cc336",
   "metadata": {},
   "source": [
    "# OpenLLM\n",
    "\n",
    "[🦾 OpenLLM](https://github.com/bentoml/OpenLLM) is an open platform for operating large language models (LLMs) in production. It enables developers to easily run inference with any open-source LLM, deploy to the cloud or on-premises, and build powerful AI apps."
   ]
  },
  {
   "cell_type": "markdown",
   "id": "da0ddca1",
   "metadata": {},
   "source": [
    "## Installation\n",
    "\n",
    "Install `openllm` via [PyPI](https://pypi.org/project/openllm/):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "6601c03b",
   "metadata": {},
   "outputs": [],
   "source": [
    "!pip install openllm"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "90174fe3",
   "metadata": {},
   "source": [
    "## Launch OpenLLM server locally\n",
    "\n",
    "To start an LLM server, use the `openllm start` command. For example, to start a dolly-v2 server, run the following command from a terminal:\n",
    "\n",
    "```bash\n",
    "openllm start dolly-v2\n",
    "```\n",
    "\n",
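    "Once the server is up, you can sanity-check it from a second terminal before wiring it into LangChain. A sketch, assuming the default port of 3000 and that this CLI version ships the `openllm query` subcommand:\n",
    "\n",
    "```bash\n",
    "export OPENLLM_ENDPOINT=http://localhost:3000\n",
    "openllm query \"What are large language models?\"\n",
    "```\n",
    "\n",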
    "## Wrapper"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "35b6bf60",
   "metadata": {},
   "outputs": [],
   "source": [
    "from langchain.llms import OpenLLM\n",
    "\n",
    "server_url = \"http://localhost:3000\"  # Replace with remote host if you are running on a remote server\n",
    "llm = OpenLLM(server_url=server_url)"
   ]
  },
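  {
   "cell_type": "markdown",
   "id": "3f2a9c41",
   "metadata": {},
   "source": [
    "The wrapper implements the standard LangChain LLM interface, so you can smoke-test the connection with a direct call. A minimal sketch, assuming a dolly-v2 server is running at `server_url` (the prompt is illustrative):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "7b8d0e22",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Send a single prompt to the OpenLLM server and print the completion\n",
    "print(llm(\"What is the difference between a llama and an alpaca?\"))"
   ]
  },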
  {
   "cell_type": "markdown",
   "id": "4f830f9d",
   "metadata": {},
   "source": [
    "### Optional: Local LLM Inference\n",
    "\n",
    "You may also choose to initialize an LLM managed by OpenLLM locally from the current process. This is useful for development and allows developers to quickly try out different types of LLMs.\n",
    "\n",
    "When moving LLM applications to production, we recommend deploying the OpenLLM server separately and accessing it via the `server_url` option demonstrated above.\n",
    "\n",
    "To load an LLM locally via the LangChain wrapper:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "82c392b6",
   "metadata": {},
   "outputs": [],
   "source": [
    "from langchain.llms import OpenLLM\n",
    "\n",
    "llm = OpenLLM(\n",
    "    model_name=\"dolly-v2\",\n",
    "    model_id=\"databricks/dolly-v2-3b\",\n",
    "    temperature=0.94,\n",
    "    repetition_penalty=1.2,\n",
    ")"
   ]
  },
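  {
   "cell_type": "markdown",
   "id": "5c6d7e8f",
   "metadata": {},
   "source": [
    "The in-process model exposes the same call interface as the server-backed wrapper above. A quick sketch (the prompt is illustrative, and the first run needs to download the `databricks/dolly-v2-3b` weights, so it can take a while):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "1a2b3c4d",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Run a prompt through the locally managed model; inference happens in this process\n",
    "print(llm(\"What is a mechanical keyboard?\"))"
   ]
  },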
  {
   "cell_type": "markdown",
   "id": "f15ebe0d",
   "metadata": {},
   "source": [
    "### Integrate with an LLMChain"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 11,
   "id": "8b02a97a",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "iLkb\n"
     ]
    }
   ],
   "source": [
    "from langchain import PromptTemplate, LLMChain\n",
    "\n",
    "template = \"What is a good name for a company that makes {product}?\"\n",
    "\n",
    "prompt = PromptTemplate(template=template, input_variables=[\"product\"])\n",
    "\n",
    "llm_chain = LLMChain(prompt=prompt, llm=llm)\n",
    "\n",
    "generated = llm_chain.run(product=\"mechanical keyboard\")\n",
    "print(generated)"
   ]
  },
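  {
   "cell_type": "markdown",
   "id": "9e0f1a2b",
   "metadata": {},
   "source": [
    "The chain object is reusable across inputs. As a sketch of batching, `LLMChain.apply` runs the same prompt template over a list of input dicts (the products below are illustrative):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "4d5e6f70",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Batch two inputs through the same chain; each result dict holds\n",
    "# the generated text under the \"text\" key\n",
    "results = llm_chain.apply(\n",
    "    [{\"product\": \"mechanical keyboard\"}, {\"product\": \"standing desk\"}]\n",
    ")\n",
    "for result in results:\n",
    "    print(result[\"text\"])"
   ]
  }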
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3 (ipykernel)",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.10.10"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}