add links in example nb with tei/tgi references (#25821)

I have validated that the LangChain interface to TEI/TGI works as expected when
TEI and TGI are running on Intel Gaudi2. Adding some references to the notebooks
to help users find the relevant info.

---------

Co-authored-by: Rita Brugarolas <rbrugaro@idc708053.jf.intel.com>
Co-authored-by: Chester Curme <chester.curme@gmail.com>

@@ -210,6 +210,13 @@
 ")\n",
 "llm(\"What did foo say about bar?\", callbacks=[StreamingStdOutCallbackHandler()])"
 ]
+},
+{
+"cell_type": "markdown",
+"metadata": {},
+"source": [
+"This same `HuggingFaceEndpoint` class can be used with a local [HuggingFace TGI instance](https://github.com/huggingface/text-generation-inference/blob/main/docs/source/index.md) serving the LLM. Check out the TGI [repository](https://github.com/huggingface/text-generation-inference/tree/main) for details on various hardware (GPU, TPU, Gaudi...) support."
+]
 }
 ],
 "metadata": {

@@ -39,7 +39,9 @@
 "volume=$PWD/data # share a volume with the Docker container to avoid downloading weights every run\n",
 "\n",
 "docker run --gpus all -p 8080:80 -v $volume:/data --pull always ghcr.io/huggingface/text-embeddings-inference:0.6 --model-id $model --revision $revision\n",
-"```"
+"```\n",
+"\n",
+"Specifics on Docker usage might vary with the underlying hardware. For example, to serve the model on Intel Gaudi/Gaudi2 hardware, refer to the [tei-gaudi repository](https://github.com/huggingface/tei-gaudi) for the relevant docker run command."
 ]
 },
 {
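
Once the TEI container above is running, the embeddings can be consumed from LangChain. A hedged sketch, assuming the server started by the docker run command above is listening on http://localhost:8080 (the class and `model` URL below are illustrative, not part of this diff):

```python
# Minimal sketch, assuming the TEI container above serves embeddings
# on http://localhost:8080.
from langchain_huggingface import HuggingFaceEndpointEmbeddings

embeddings = HuggingFaceEndpointEmbeddings(model="http://localhost:8080")

# Embed a single query; the result is a list of floats.
query_result = embeddings.embed_query("What is deep learning?")
print(query_result[:3])
```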
