docs: update openvino documents (#19543)

Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
Ethan Yang 3 months ago committed by GitHub
parent bf8ba00520
commit 5784dfed00

@@ -330,7 +330,7 @@
 "id": "da9a9239",
 "metadata": {},
 "source": [
-"For more information refer to [OpenVINO LLM guide](https://docs.openvino.ai/2024/openvino-workflow/generative-ai-models-guide.html)."
+"For more information refer to [OpenVINO LLM guide](https://docs.openvino.ai/2024/learn-openvino/llm_inference_guide.html) and [OpenVINO Local Pipelines notebook](./openvino.ipynb)."
 ]
 }
],

@@ -7,7 +7,7 @@
 "source": [
 "# OpenVINO Local Pipelines\n",
 "\n",
-"[OpenVINO™](https://github.com/openvinotoolkit/openvino) is an open-source toolkit for optimizing and deploying AI inference. The OpenVINO™ Runtime can infer models on different hardware [devices](https://github.com/openvinotoolkit/openvino?tab=readme-ov-file#supported-hardware-matrix). It can help to boost deep learning performance in computer vision, automatic speech recognition, natural language processing and other common tasks.\n",
+"[OpenVINO™](https://github.com/openvinotoolkit/openvino) is an open-source toolkit for optimizing and deploying AI inference. OpenVINO™ Runtime can run the same optimized model across various hardware [devices](https://github.com/openvinotoolkit/openvino?tab=readme-ov-file#supported-hardware-matrix), accelerating deep learning performance across use cases such as natural language processing and LLMs, computer vision, automatic speech recognition, and more.\n",
 "\n",
 "OpenVINO models can be run locally through the `HuggingFacePipeline` [class](https://python.langchain.com/docs/integrations/llms/huggingface_pipeline). To deploy a model with OpenVINO, you can specify the `backend=\"openvino\"` parameter to trigger OpenVINO as the backend inference framework."
 ]
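For context on the `backend="openvino"` usage the hunk above documents, here is a minimal sketch of how that parameter is passed to `HuggingFacePipeline.from_model_id`; the model id, device, and generation kwargs are illustrative choices, not part of this commit:

```python
# Minimal sketch (assumed setup): load a Hugging Face model through
# LangChain's HuggingFacePipeline with OpenVINO as the inference backend.
# Requires: pip install langchain-community "optimum[openvino,nncf]"
from langchain_community.llms.huggingface_pipeline import HuggingFacePipeline

ov_llm = HuggingFacePipeline.from_model_id(
    model_id="gpt2",                        # illustrative model choice
    task="text-generation",
    backend="openvino",                     # selects OpenVINO instead of PyTorch
    model_kwargs={"device": "CPU"},         # target device for OpenVINO Runtime
    pipeline_kwargs={"max_new_tokens": 10},
)

print(ov_llm.invoke("Once upon a time"))
```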
@@ -73,7 +73,7 @@
 "id": "00104b27-0c15-4a97-b198-4512337ee211",
 "metadata": {},
 "source": [
-"They can also be loaded by passing in an existing `optimum-intel` pipeline directly"
+"They can also be loaded by passing in an existing [`optimum-intel`](https://huggingface.co/docs/optimum/main/en/intel/inference) pipeline directly"
 ]
 },
 {
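The hunk above links the `optimum-intel` inference docs; a sketch of the "existing pipeline" path it refers to could look like the following, under the assumption that the model is exported to OpenVINO IR on the fly (model id and kwargs are again illustrative):

```python
# Minimal sketch (assumed setup): build an optimum-intel OpenVINO pipeline
# yourself, then hand it to HuggingFacePipeline directly.
from optimum.intel.openvino import OVModelForCausalLM
from transformers import AutoTokenizer, pipeline

from langchain_community.llms.huggingface_pipeline import HuggingFacePipeline

model_id = "gpt2"  # illustrative model choice
tokenizer = AutoTokenizer.from_pretrained(model_id)
# export=True converts the PyTorch checkpoint to OpenVINO IR at load time
ov_model = OVModelForCausalLM.from_pretrained(model_id, export=True)

ov_pipe = pipeline(
    "text-generation", model=ov_model, tokenizer=tokenizer, max_new_tokens=10
)
ov_llm = HuggingFacePipeline(pipeline=ov_pipe)
```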
@@ -221,7 +221,15 @@
 "id": "da9a9239",
 "metadata": {},
 "source": [
-"For more information refer to [OpenVINO LLM guide](https://docs.openvino.ai/2024/openvino-workflow/generative-ai-models-guide.html)."
+"For more information refer to:\n",
+"\n",
+"* [OpenVINO LLM guide](https://docs.openvino.ai/2024/learn-openvino/llm_inference_guide.html).\n",
+"\n",
+"* [OpenVINO Documentation](https://docs.openvino.ai/2024/home.html).\n",
+"\n",
+"* [OpenVINO Get Started Guide](https://www.intel.com/content/www/us/en/content-details/819067/openvino-get-started-guide.html).\n",
+"\n",
+"* [RAG Notebook with LangChain](https://github.com/openvinotoolkit/openvino_notebooks/tree/master/notebooks/llm-chatbot)."
 ]
 }
],
