Add HuggingFace Examples (#3187)

Add a Pipeline example and add other models to the hub notebook

Closes issue [#3099](https://github.com/hwchase17/langchain/issues/3099)
Zander Chase 2023-04-19 17:08:10 -07:00 committed by GitHub
parent 6adf2d1c39
commit c757c3cde4
2 changed files with 314 additions and 15 deletions

File: huggingface_hub.ipynb

@@ -31,7 +31,7 @@
 },
 "outputs": [],
 "source": [
-"!pip install huggingface_hub"
+"!pip install huggingface_hub > /dev/null"
 ]
 },
 {
@@ -55,41 +55,195 @@
 "metadata": {},
 "outputs": [],
 "source": [
+"import os\n",
 "os.environ[\"HUGGINGFACEHUB_API_TOKEN\"] = HUGGINGFACEHUB_API_TOKEN"
 ]
 },
+{
+"cell_type": "markdown",
+"id": "84dd44c1-c428-41f3-a911-520281386c94",
+"metadata": {},
+"source": [
+"**Select a Model**"
+]
+},
 {
 "cell_type": "code",
-"execution_count": 41,
+"execution_count": null,
+"id": "39c7eeac-01c4-486b-9480-e828a9e73e78",
+"metadata": {
+"tags": []
+},
+"outputs": [],
+"source": [
+"from langchain import HuggingFaceHub\n",
+"\n",
+"repo_id = \"google/flan-t5-xl\" # See https://huggingface.co/models?pipeline_tag=text-generation&sort=downloads for some other options\n",
+"\n",
+"llm = HuggingFaceHub(repo_id=repo_id, model_kwargs={\"temperature\":0, \"max_length\":64})"
+]
+},
+{
+"cell_type": "code",
+"execution_count": null,
 "id": "3acf0069",
 "metadata": {},
-"outputs": [
-{
-"name": "stdout",
-"output_type": "stream",
-"text": [
-"The FIFA World Cup is a football tournament that is played every 4 years. The year 1994 was the 44th FIFA World Cup. The final answer: Brazil.\n"
-]
-}
-],
+"outputs": [],
 "source": [
-"from langchain import PromptTemplate, HuggingFaceHub, LLMChain\n",
+"from langchain import PromptTemplate, LLMChain\n",
 "\n",
 "template = \"\"\"Question: {question}\n",
 "\n",
 "Answer: Let's think step by step.\"\"\"\n",
 "prompt = PromptTemplate(template=template, input_variables=[\"question\"])\n",
-"llm_chain = LLMChain(prompt=prompt, llm=HuggingFaceHub(repo_id=\"google/flan-t5-xl\", model_kwargs={\"temperature\":0, \"max_length\":64}))\n",
+"llm_chain = LLMChain(prompt=prompt, llm=llm)\n",
 "\n",
 "question = \"Who won the FIFA World Cup in the year 1994? \"\n",
 "\n",
 "print(llm_chain.run(question))"
 ]
 },
+{
+"cell_type": "markdown",
+"id": "ddaa06cf-95ec-48ce-b0ab-d892a7909693",
+"metadata": {},
+"source": [
+"## Examples\n",
+"\n",
+"Below are some examples of models you can access through the Hugging Face Hub integration."
+]
+},
+{
+"cell_type": "markdown",
+"id": "4fa9337e-ccb5-4c52-9b7c-1653148bc256",
+"metadata": {},
+"source": [
+"### StableLM, by Stability AI\n",
+"\n",
+"See [Stability AI's](https://huggingface.co/stabilityai) organization page for a list of available models."
+]
+},
 {
 "cell_type": "code",
 "execution_count": null,
-"id": "843a3837",
+"id": "36a1ce01-bd46-451f-8ee6-61c8f4bd665a",
+"metadata": {},
+"outputs": [],
+"source": [
+"repo_id = \"stabilityai/stablelm-tuned-alpha-3b\"\n",
+"# Others include stabilityai/stablelm-base-alpha-3b\n",
+"# as well as 7B parameter versions"
+]
+},
+{
+"cell_type": "code",
+"execution_count": null,
+"id": "b5654cea-60b0-4f40-ab34-06ba1eca810d",
+"metadata": {},
+"outputs": [],
+"source": [
+"llm = HuggingFaceHub(repo_id=repo_id, model_kwargs={\"temperature\":0, \"max_length\":64})"
+]
+},
+{
+"cell_type": "code",
+"execution_count": null,
+"id": "2f19d0dc-c987-433f-a8d6-b1214e8ee067",
+"metadata": {},
+"outputs": [],
+"source": [
+"# Reuse the prompt and question from above.\n",
+"llm_chain = LLMChain(prompt=prompt, llm=llm)\n",
+"print(llm_chain.run(question))"
+]
+},
+{
+"cell_type": "markdown",
+"id": "1a5c97af-89bc-4e59-95c1-223742a9160b",
+"metadata": {},
+"source": [
+"### Dolly, by Databricks\n",
+"\n",
+"See [Databricks'](https://huggingface.co/databricks) organization page for a list of available models."
+]
+},
+{
+"cell_type": "code",
+"execution_count": null,
+"id": "521fcd2b-8e38-4920-b407-5c7d330411c9",
+"metadata": {},
+"outputs": [],
+"source": [
+"from langchain import HuggingFaceHub\n",
+"\n",
+"repo_id = \"databricks/dolly-v2-3b\"\n",
+"\n",
+"llm = HuggingFaceHub(repo_id=repo_id, model_kwargs={\"temperature\":0, \"max_length\":64})"
+]
+},
+{
+"cell_type": "code",
+"execution_count": null,
+"id": "9907ec3a-fe0c-4543-81c4-d42f9453f16c",
+"metadata": {
+"tags": []
+},
+"outputs": [],
+"source": [
+"# Reuse the prompt and question from above.\n",
+"llm_chain = LLMChain(prompt=prompt, llm=llm)\n",
+"print(llm_chain.run(question))"
+]
+},
+{
+"cell_type": "markdown",
+"id": "03f6ae52-b5f9-4de6-832c-551cb3fa11ae",
+"metadata": {},
+"source": [
+"### Camel, by Writer\n",
+"\n",
+"See [Writer's](https://huggingface.co/Writer) organization page for a list of available models."
+]
+},
+{
+"cell_type": "code",
+"execution_count": null,
+"id": "257a091d-750b-4910-ac08-fe1c7b3fd98b",
+"metadata": {
+"tags": []
+},
+"outputs": [],
+"source": [
+"from langchain import HuggingFaceHub\n",
+"\n",
+"repo_id = \"Writer/camel-5b-hf\" # See https://huggingface.co/Writer for other options\n",
+"llm = HuggingFaceHub(repo_id=repo_id, model_kwargs={\"temperature\":0, \"max_length\":64})"
+]
+},
+{
+"cell_type": "code",
+"execution_count": null,
+"id": "b06f6838-a11a-4d6a-88e3-91fa1747a2b3",
+"metadata": {},
+"outputs": [],
+"source": [
+"# Reuse the prompt and question from above.\n",
+"llm_chain = LLMChain(prompt=prompt, llm=llm)\n",
+"print(llm_chain.run(question))"
+]
+},
+{
+"cell_type": "markdown",
+"id": "2bf838eb-1083-402f-b099-b07c452418c8",
+"metadata": {},
+"source": [
+"**And many more!**"
+]
+},
+{
+"cell_type": "code",
+"execution_count": null,
+"id": "18c78880-65d7-41d0-9722-18090efb60e9",
 "metadata": {},
 "outputs": [],
 "source": []
@@ -111,7 +265,7 @@
 "name": "python",
 "nbconvert_exporter": "python",
 "pygments_lexer": "ipython3",
-"version": "3.10.6"
+"version": "3.11.2"
 }
 },
 "nbformat": 4,

File: huggingface_pipelines.ipynb (new file)

@@ -0,0 +1,145 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "959300d4",
"metadata": {},
"source": [
"# Hugging Face Local Pipelines\n",
"\n",
"Hugging Face models can be run locally through the `HuggingFacePipeline` class.\n",
"\n",
"The [Hugging Face Model Hub](https://huggingface.co/models) hosts over 120k models, 20k datasets, and 50k demo apps (Spaces), all open source and publicly available, in an online platform where people can easily collaborate and build ML together.\n",
"\n",
"These can be called from LangChain either through this local pipeline wrapper or by calling their hosted inference endpoints through the HuggingFaceHub class. For more information on the hosted pipelines, see the [HugigngFaceHub](huggingface_hub.ipynb) notebook."
]
},
{
"cell_type": "markdown",
"id": "4c1b8450-5eaf-4d34-8341-2d785448a1ff",
"metadata": {
"tags": []
},
"source": [
"To use, you should have the ``transformers`` python [package installed](https://pypi.org/project/transformers/)."
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "d772b637-de00-4663-bd77-9bc96d798db2",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"!pip install transformers > /dev/null"
]
},
{
"cell_type": "markdown",
"id": "91ad075f-71d5-4bc8-ab91-cc0ad5ef16bb",
"metadata": {},
"source": [
"### Load the model"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "165ae236-962a-4763-8052-c4836d78a5d2",
"metadata": {
"tags": []
},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"WARNING:root:Failed to default session, using empty session: HTTPConnectionPool(host='localhost', port=8000): Max retries exceeded with url: /sessions (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x1117f9790>: Failed to establish a new connection: [Errno 61] Connection refused'))\n"
]
}
],
"source": [
"from langchain import HuggingFacePipeline\n",
"\n",
"llm = HuggingFacePipeline.from_model_id(model_id=\"bigscience/bloom-1b7\", task=\"text-generation\", model_kwargs={\"temperature\":0, \"max_length\":64})"
]
},
{
"cell_type": "markdown",
"id": "00104b27-0c15-4a97-b198-4512337ee211",
"metadata": {},
"source": [
"### Integrate the model in an LLMChain"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "3acf0069",
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"/Users/wfh/code/lc/lckg/.venv/lib/python3.11/site-packages/transformers/generation/utils.py:1288: UserWarning: Using `max_length`'s default (64) to control the generation length. This behaviour is deprecated and will be removed from the config in v5 of Transformers -- we recommend using `max_new_tokens` to control the maximum length of the generation.\n",
" warnings.warn(\n",
"WARNING:root:Failed to persist run: HTTPConnectionPool(host='localhost', port=8000): Max retries exceeded with url: /chain-runs (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x144d06910>: Failed to establish a new connection: [Errno 61] Connection refused'))\n"
]
},
{
"name": "stdout",
"output_type": "stream",
"text": [
" First, we need to understand what is an electroencephalogram. An electroencephalogram is a recording of brain activity. It is a recording of brain activity that is made by placing electrodes on the scalp. The electrodes are placed\n"
]
}
],
"source": [
"from langchain import PromptTemplate, LLMChain\n",
"\n",
"template = \"\"\"Question: {question}\n",
"\n",
"Answer: Let's think step by step.\"\"\"\n",
"prompt = PromptTemplate(template=template, input_variables=[\"question\"])\n",
"\n",
"llm_chain = LLMChain(prompt=prompt, llm=llm)\n",
"\n",
"question = \"What is electroencephalography?\"\n",
"\n",
"print(llm_chain.run(question))"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "843a3837",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.2"
}
},
"nbformat": 4,
"nbformat_minor": 5
}
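
The local-pipeline counterpart differs only in how `llm` is constructed: `HuggingFacePipeline.from_model_id` downloads the model and runs it locally through `transformers` instead of calling a hosted endpoint. A condensed sketch of the new notebook's flow (the first run downloads the `bigscience/bloom-1b7` weights, a few GB of disk):

```python
# Condensed from the new pipelines notebook: run a Hub model locally
# via transformers rather than the hosted inference endpoint.
from langchain import HuggingFacePipeline, LLMChain, PromptTemplate

llm = HuggingFacePipeline.from_model_id(
    model_id="bigscience/bloom-1b7",
    task="text-generation",
    model_kwargs={"temperature": 0, "max_length": 64},
)

template = """Question: {question}

Answer: Let's think step by step."""
prompt = PromptTemplate(template=template, input_variables=["question"])
llm_chain = LLMChain(prompt=prompt, llm=llm)

print(llm_chain.run("What is electroencephalography?"))
```

Since the chain code is identical to the hub version, the two notebooks are interchangeable from the `LLMChain` onward; only the `llm` constructor decides whether inference happens remotely or on the local machine.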