{
 "cells": [
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Baidu Qianfan\n",
    "\n",
    "Baidu AI Cloud Qianfan Platform is a one-stop large model development and service operation platform for enterprise developers. Qianfan provides not only Baidu's Wenxin Yiyan (ERNIE-Bot) model and a range of third-party open-source models, but also various AI development tools and a complete development environment, making it easy for customers to build large model applications.\n",
    "\n",
    "These models are divided into the following types:\n",
    "\n",
    "- Embedding\n",
    "- Chat\n",
    "- Completion\n",
    "\n",
    "This notebook covers how to use langchain with [Qianfan](https://cloud.baidu.com/doc/WENXINWORKSHOP/index.html), focusing on `Completion`, which corresponds to the `langchain/llms` package in langchain.\n",
    "\n",
"## API Initialization\n",
|
|||
|
"\n",
|
|||
|
"To use the LLM services based on Baidu Qianfan, you have to initialize these parameters:\n",
|
|||
|
"\n",
|
|||
|
"You could either choose to init the AK,SK in enviroment variables or init params:\n",
|
|||
|
"\n",
|
|||
|
"```base\n",
|
|||
|
"export QIANFAN_AK=XXX\n",
|
|||
|
"export QIANFAN_SK=XXX\n",
|
|||
|
"```\n",
|
|||
|
"\n",
|
|||
|
"## Current supported models:\n",
|
|||
|
"\n",
|
|||
|
"- ERNIE-Bot-turbo (default models)\n",
|
|||
|
"- ERNIE-Bot\n",
|
|||
|
"- BLOOMZ-7B\n",
|
|||
|
"- Llama-2-7b-chat\n",
|
|||
|
"- Llama-2-13b-chat\n",
|
|||
|
"- Llama-2-70b-chat\n",
|
|||
|
"- Qianfan-BLOOMZ-7B-compressed\n",
|
|||
|
"- Qianfan-Chinese-Llama-2-7B\n",
|
|||
|
"- ChatGLM2-6B-32K\n",
|
|||
|
"- AquilaChat-7B"
|
|||
|
]
|
|||
|
},
|
|||
|
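  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The LangChain integration relies on Baidu's `qianfan` SDK under the hood, so make sure it is installed first:\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Install the Baidu Qianfan SDK used by the LangChain integration.\n",
    "%pip install qianfan"
   ]
  },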
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "\"\"\"For basic init and call\"\"\"\n",
    "import os\n",
    "\n",
    "from langchain.llms.baidu_qianfan_endpoint import QianfanLLMEndpoint\n",
    "\n",
    "os.environ[\"QIANFAN_AK\"] = \"xx\"\n",
    "os.environ[\"QIANFAN_SK\"] = \"xx\"\n",
    "\n",
    "# The AK/SK are picked up from the environment variables set above.\n",
    "llm = QianfanLLMEndpoint(streaming=True)\n",
    "res = llm(\"hi\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "\"\"\"Test for llm generate\"\"\"\n",
    "res = llm.generate(prompts=[\"hello?\"])\n",
    "\n",
    "\"\"\"Test for llm aio generate\"\"\"\n",
    "async def run_aio_generate():\n",
    "    resp = await llm.agenerate(prompts=[\"Write a 20-word article about rivers.\"])\n",
    "    print(resp)\n",
    "\n",
    "# Jupyter supports top-level await, so no explicit asyncio event loop is needed.\n",
    "await run_aio_generate()\n",
    "\n",
    "\"\"\"Test for llm stream\"\"\"\n",
    "for res in llm.stream(\"write a joke.\"):\n",
    "    print(res)\n",
    "\n",
    "\"\"\"Test for llm aio stream\"\"\"\n",
    "async def run_aio_stream():\n",
    "    async for res in llm.astream(\"Write a 20-word article about mountains.\"):\n",
    "        print(res)\n",
    "\n",
    "await run_aio_stream()"
   ]
  },
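  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a minimal usage sketch, the Qianfan LLM also composes with standard langchain primitives such as `PromptTemplate` and `LLMChain`; the template and topic below are illustrative placeholders.\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from langchain.chains import LLMChain\n",
    "from langchain.prompts import PromptTemplate\n",
    "\n",
    "# Illustrative one-variable prompt template.\n",
    "prompt = PromptTemplate.from_template(\"Write a 20-word article about {topic}.\")\n",
    "chain = LLMChain(llm=llm, prompt=prompt)\n",
    "\n",
    "# Reuses the `llm` instance initialized above.\n",
    "print(chain.run(topic=\"rivers\"))"
   ]
  },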
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Use different models in Qianfan\n",
    "\n",
    "If you want to deploy your own model based on ERNIE-Bot (EB) or one of several open-source models, follow these steps:\n",
    "\n",
    "1. (Optional: skip this if your model is included in the default models.) Deploy your model in the Qianfan Console and get your own customized deployment endpoint.\n",
    "2. Set the field called `endpoint` in the initialization:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "llm = QianfanLLMEndpoint(\n",
    "    qianfan_ak=\"xxx\",\n",
    "    qianfan_sk=\"xxx\",\n",
    "    streaming=True,\n",
    "    model=\"ERNIE-Bot-turbo\",\n",
    "    endpoint=\"eb-instant\",\n",
    ")\n",
    "res = llm(\"hi\")"
   ]
  },
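  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "If the AK/SK are already set as environment variables (as in the first cell), only the custom endpoint needs to be passed. A minimal sketch, where `your_custom_endpoint` is a placeholder for the deployment endpoint shown in your Qianfan Console:\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# `your_custom_endpoint` is a placeholder, not a real endpoint name;\n",
    "# QIANFAN_AK/QIANFAN_SK are picked up from the environment set earlier.\n",
    "custom_llm = QianfanLLMEndpoint(endpoint=\"your_custom_endpoint\")\n",
    "res = custom_llm(\"hi\")"
   ]
  },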
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Model Params\n",
    "\n",
    "For now, only `ERNIE-Bot` and `ERNIE-Bot-turbo` support the model params below; more models may support them in the future:\n",
    "\n",
    "- temperature\n",
    "- top_p\n",
    "- penalty_score\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "res = llm.generate(\n",
    "    prompts=[\"hi\"],\n",
    "    streaming=True,\n",
    "    **{\"top_p\": 0.4, \"temperature\": 0.1, \"penalty_score\": 1},\n",
    ")\n",
    "\n",
    "# `generate` returns an LLMResult; iterate its generations to see the outputs.\n",
    "for r in res.generations:\n",
    "    print(r)"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "base",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.11.4"
  },
  "orig_nbformat": 4,
  "vscode": {
   "interpreter": {
    "hash": "6fa70026b407ae751a5c9e6bd7f7d482379da8ad616f98512780b705c84ee157"
   }
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}