{
 "cells": [
  {
   "cell_type": "markdown",
   "id": "026cc336",
   "metadata": {},
   "source": [
    "# OpenLLM\n",
    "\n",
    "[🦾 OpenLLM](https://github.com/bentoml/OpenLLM) is an open platform for operating large language models (LLMs) in production. It enables developers to easily run inference with any open-source LLMs, deploy to the cloud or on-premises, and build powerful AI apps."
   ]
  },
  {
   "cell_type": "markdown",
   "id": "da0ddca1",
   "metadata": {},
   "source": [
    "## Installation\n",
    "\n",
    "Install `openllm` via [PyPI](https://pypi.org/project/openllm/):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "6601c03b",
   "metadata": {},
   "outputs": [],
   "source": [
    "!pip install openllm"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "90174fe3",
   "metadata": {},
   "source": [
    "## Launch OpenLLM server locally\n",
    "\n",
    "To start an LLM server, use the `openllm start` command. For example, to start a dolly-v2 server, run the following command from a terminal:\n",
    "\n",
    "```bash\n",
    "openllm start dolly-v2\n",
    "```\n",
    "\n",
    "## Wrapper"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "35b6bf60",
   "metadata": {},
   "outputs": [],
   "source": [
    "from langchain.llms import OpenLLM\n",
    "\n",
    "server_url = \"http://localhost:3000\"  # Replace with the remote host if you are running on a remote server\n",
    "llm = OpenLLM(server_url=server_url)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "4f830f9d",
   "metadata": {},
   "source": [
    "### Optional: Local LLM Inference\n",
    "\n",
    "You may also choose to initialize an LLM managed by OpenLLM from the current process. This is useful for development purposes and lets developers quickly try out different types of LLMs.\n",
    "\n",
    "When moving LLM applications to production, we recommend deploying the OpenLLM server separately and accessing it via the `server_url` option demonstrated above.\n",
    "\n",
    "To load an LLM locally via the LangChain wrapper:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "82c392b6",
   "metadata": {},
   "outputs": [],
   "source": [
    "from langchain.llms import OpenLLM\n",
    "\n",
    "llm = OpenLLM(\n",
    "    model_name=\"dolly-v2\",\n",
    "    model_id=\"databricks/dolly-v2-3b\",\n",
    "    temperature=0.94,\n",
    "    repetition_penalty=1.2,\n",
    ")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "f15ebe0d",
   "metadata": {},
   "source": [
    "### Integrate with an LLMChain"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 11,
   "id": "8b02a97a",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "iLkb\n"
     ]
    }
   ],
   "source": [
    "from langchain import PromptTemplate, LLMChain\n",
    "\n",
    "template = \"What is a good name for a company that makes {product}?\"\n",
    "\n",
    "prompt = PromptTemplate(template=template, input_variables=[\"product\"])\n",
    "\n",
    "llm_chain = LLMChain(prompt=prompt, llm=llm)\n",
    "\n",
    "generated = llm_chain.run(product=\"mechanical keyboard\")\n",
    "print(generated)"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3 (ipykernel)",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.10.10"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}