diff --git a/docs/extras/guides/local_llms.ipynb b/docs/extras/guides/local_llms.ipynb
index 91ace2eaf4..90c3232a2b 100644
--- a/docs/extras/guides/local_llms.ipynb
+++ b/docs/extras/guides/local_llms.ipynb
@@ -146,7 +146,7 @@
    "source": [
     "## Environment\n",
     "\n",
-    "Inference speed is a chllenge when running models locally (see above).\n",
+    "Inference speed is a challenge when running models locally (see above).\n",
     "\n",
-    "To minimize latency, it is desiable to run models locally on GPU, which ships with many consumer laptops [e.g., Apple devices](https://www.apple.com/newsroom/2022/06/apple-unveils-m2-with-breakthrough-performance-and-capabilities/).\n",
+    "To minimize latency, it is desirable to run models locally on GPU, which ships with many consumer laptops [e.g., Apple devices](https://www.apple.com/newsroom/2022/06/apple-unveils-m2-with-breakthrough-performance-and-capabilities/).\n",
     "\n",