From fdbeb52756d40e88202a722600188b22a9731fc4 Mon Sep 17 00:00:00 2001
From: Leonid Ganeline
Date: Sun, 20 Aug 2023 17:21:45 -0700
Subject: [PATCH] `Qwen` model example (#9516)

added an example for the `Qwen-7B` model on `HuggingFaceHub` :hugs:
---
 .../integrations/llms/huggingface_hub.ipynb | 50 ++++++++++++++++---
 1 file changed, 44 insertions(+), 6 deletions(-)

diff --git a/docs/extras/integrations/llms/huggingface_hub.ipynb b/docs/extras/integrations/llms/huggingface_hub.ipynb
index c321930da0..f635e5c679 100644
--- a/docs/extras/integrations/llms/huggingface_hub.ipynb
+++ b/docs/extras/integrations/llms/huggingface_hub.ipynb
@@ -44,7 +44,7 @@
   },
   {
    "cell_type": "code",
-   "execution_count": 2,
+   "execution_count": 1,
    "id": "d597a792-354c-4ca5-b483-5965eec5d63d",
    "metadata": {},
    "outputs": [
@@ -66,7 +66,7 @@
   },
   {
    "cell_type": "code",
-   "execution_count": 3,
+   "execution_count": 2,
    "id": "b8c5b88c-e4b8-4d0d-9a35-6e8f106452c2",
    "metadata": {},
    "outputs": [],
@@ -86,7 +86,7 @@
   },
   {
    "cell_type": "code",
-   "execution_count": 14,
+   "execution_count": null,
    "id": "3fe7d1d1-241d-426a-acff-e208f1088871",
    "metadata": {},
    "outputs": [],
@@ -96,7 +96,7 @@
   },
   {
    "cell_type": "code",
-   "execution_count": 15,
+   "execution_count": 4,
    "id": "6620f39b-3d32-4840-8931-ff7d2c3e47e8",
    "metadata": {},
    "outputs": [],
@@ -106,7 +106,7 @@
   },
   {
    "cell_type": "code",
-   "execution_count": 16,
+   "execution_count": 5,
    "id": "44adc1a0-9c0a-4f1e-af5a-fe04222e78d7",
    "metadata": {},
    "outputs": [],
@@ -358,10 +358,48 @@
     "print(llm_chain.run(question))"
    ]
   },
+  {
+   "cell_type": "markdown",
+   "id": "4f2e5132-1713-42d7-919a-8c313744ce95",
+   "metadata": {},
+   "source": [
+    "### `Qwen`, by `Alibaba Cloud`\n",
+    "\n",
+    ">`Tongyi Qianwen-7B` (`Qwen-7B`) is the 7-billion-parameter model in the `Tongyi Qianwen` large model series developed by `Alibaba Cloud`. `Qwen-7B` is a Transformer-based large language model trained on ultra-large-scale pre-training data.\n",
+    "\n",
+    "See [more information on HuggingFace](https://huggingface.co/Qwen/Qwen-7B) or on [GitHub](https://github.com/QwenLM/Qwen-7B).\n",
+    "\n",
+    "See also this [full example of a LangChain integration with Qwen](https://github.com/QwenLM/Qwen-7B/blob/main/examples/langchain_tooluse.ipynb)."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 6,
+   "id": "f598b1ca-77c7-40f1-a83f-c21ea9910c88",
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "repo_id = \"Qwen/Qwen-7B\""
+   ]
+  },
   {
    "cell_type": "code",
    "execution_count": null,
-   "id": "a80bc30a-4040-417f-8094-d2c81c423b76",
+   "id": "2c97f4e2-d401-44fb-9da7-b60b2e2cc663",
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "llm = HuggingFaceHub(\n",
+    "    repo_id=repo_id, model_kwargs={\"max_length\": 128, \"temperature\": 0.5}\n",
+    ")\n",
+    "llm_chain = LLMChain(prompt=prompt, llm=llm)\n",
+    "print(llm_chain.run(question))"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "id": "1dd67c1e-1efc-4def-bde4-2e5265725303",
    "metadata": {},
    "outputs": [],
    "source": []
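Below, outside the patch itself, is a minimal self-contained sketch of how the cells added by this PR fit together when run as a plain script. It assumes the langchain release this patch targets (pre-0.1 top-level imports), a valid `HUGGINGFACEHUB_API_TOKEN` in the environment, and that the Hub inference API actually serves `Qwen/Qwen-7B`; the `template` and `question` values are illustrative stand-ins for the earlier notebook cells, not part of this diff.

```python
# Sketch of the workflow the new notebook cells rely on (illustrative, not part of the diff).
# Assumes langchain ~0.0.x top-level imports and a HUGGINGFACEHUB_API_TOKEN env var.
import os

from langchain import HuggingFaceHub, LLMChain, PromptTemplate

# The Hub wrapper reads the token from the environment; fail early if it is missing.
if not os.environ.get("HUGGINGFACEHUB_API_TOKEN"):
    raise RuntimeError("Set HUGGINGFACEHUB_API_TOKEN before running this sketch.")

# Stand-ins for the prompt and question defined earlier in the notebook.
template = """Question: {question}

Answer: Let's think step by step."""
prompt = PromptTemplate(template=template, input_variables=["question"])
question = "Who won the FIFA World Cup in the year 1994?"

# The cells added by this patch: point the Hub wrapper at Qwen/Qwen-7B and run the chain.
repo_id = "Qwen/Qwen-7B"
llm = HuggingFaceHub(
    repo_id=repo_id, model_kwargs={"max_length": 128, "temperature": 0.5}
)
llm_chain = LLMChain(prompt=prompt, llm=llm)
print(llm_chain.run(question))
```

The `model_kwargs` are forwarded to the hosted inference endpoint, so `max_length` and `temperature` here simply follow the notebook's existing pattern for other Hub models rather than any Qwen-specific tuning.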