community[docs]: Add content for the LoRA adapter to the VLLM page. (#27788)

**Description:**
I added code for `lora_request` in the community package (#27731), but I forgot
to add the corresponding content to the VLLM docs page. This PR adds it.
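
For context, a minimal sketch of how a wrapper along these lines can thread `lora_request` through to vLLM; the helper `generate_with_lora` and its structure are assumptions for illustration, not the merged community code:

```python
# Hedged sketch: forwarding an optional LoRARequest to vLLM's generate().
# The helper name and signature here are illustrative assumptions.
from typing import Any, List, Optional

from vllm import LLM, SamplingParams
from vllm.lora.request import LoRARequest


def generate_with_lora(
    client: LLM,
    prompts: List[str],
    lora_request: Optional[LoRARequest] = None,
    **sampling_kwargs: Any,
) -> List[str]:
    """Generate completions, optionally applying a per-request LoRA adapter."""
    params = SamplingParams(**sampling_kwargs)
    # vLLM applies the adapter per request; this only works when the engine
    # was constructed with enable_lora=True. With lora_request=None the
    # base model is used unchanged.
    outputs = client.generate(prompts, params, lora_request=lora_request)
    return [output.outputs[0].text for output in outputs]
```

Keeping the adapter a per-call argument (rather than engine-wide state) is what lets a single engine serve the base model and multiple adapters concurrently.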

---------

Co-authored-by: Um Changyong <changyong.um@sfa.co.kr>
Changyong Um 2024-11-01 01:44:35 +09:00 committed by GitHub
parent 0172d938b4
commit d9163e7afa

@@ -246,6 +246,40 @@
")\n",
"print(llm.invoke(\"Rome is\"))"
]
},
{
"cell_type": "markdown",
"id": "bd3f0f51",
"metadata": {},
"source": [
"## LoRA adapter\n",
"LoRA adapters can be used with any vLLM model that implements `SupportsLoRA`."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "2682ca6c",
"metadata": {},
"outputs": [],
"source": [
"from langchain_community.llms import VLLM\n",
"from vllm.lora.request import LoRARequest\n",
"\n",
"llm = VLLM(model=\"meta-llama/Llama-2-7b-hf\", enable_lora=True)\n",
"\n",
"LoRA_ADAPTER_PATH = \"path/to/adapter\"\n",
"lora_adapter = LoRARequest(\"lora_adapter\", 1, LoRA_ADAPTER_PATH)\n",
"\n",
"print(\n",
" llm.invoke(\"What are some popular Korean street foods?\", lora_request=lora_adapter)\n",
")"
]
}
],
"metadata": {