diff --git a/docs/docs/integrations/providers/huggingface.mdx b/docs/docs/integrations/providers/huggingface.mdx
index a752a1b577..27fe4d42db 100644
--- a/docs/docs/integrations/providers/huggingface.mdx
+++ b/docs/docs/integrations/providers/huggingface.mdx
@@ -47,7 +47,7 @@ To use a the wrapper for a model hosted on Hugging Face Hub:
 ```python
 from langchain.embeddings import HuggingFaceHubEmbeddings
 ```
-For a more detailed walkthrough of this, see [this notebook](/docs/integrations/text_embedding/huggingfacehub.html)
+For a more detailed walkthrough of this, see [this notebook](/docs/integrations/text_embedding/huggingfacehub)
 
 ### Tokenizer
 
@@ -59,11 +59,11 @@ You can also use it to count tokens when splitting documents with
 from langchain.text_splitter import CharacterTextSplitter
 CharacterTextSplitter.from_huggingface_tokenizer(...)
 ```
-For a more detailed walkthrough of this, see [this notebook](/docs/modules/data_connection/document_transformers/text_splitters/huggingface_length_function.html)
+For a more detailed walkthrough of this, see [this notebook](/docs/modules/data_connection/document_transformers/text_splitters/huggingface_length_function)
 
 ### Datasets
 
 The Hugging Face Hub has lots of great [datasets](https://huggingface.co/datasets) that can be used to evaluate your LLM chains.
 
-For a detailed walkthrough of how to use them to do so, see [this notebook](/docs/use_cases/evaluation/huggingface_datasets.html)
+For a detailed walkthrough of how to use them to do so, see [this notebook](/docs/integrations/document_loaders/hugging_face_dataset)
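
For reference while reviewing, the APIs named in the hunk context can be exercised roughly as below. This is a minimal sketch and not part of the patch: only the imports and the `from_huggingface_tokenizer` call come from the docs being edited; the `gpt2` tokenizer, the chunk sizes, and the sample text are illustrative assumptions.

```python
# Illustrative only: split documents while counting tokens with a Hugging Face
# tokenizer, as the "Tokenizer" section of the patched page describes.
# Tokenizer choice and chunk sizes are assumptions, not values from the docs.
from transformers import GPT2TokenizerFast
from langchain.text_splitter import CharacterTextSplitter
from langchain.embeddings import HuggingFaceHubEmbeddings

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
splitter = CharacterTextSplitter.from_huggingface_tokenizer(
    tokenizer, chunk_size=100, chunk_overlap=0
)
chunks = splitter.split_text("Some long document to split by token count.")

# Hub-hosted embeddings wrapper from the first hunk; requires a
# HUGGINGFACEHUB_API_TOKEN in the environment (default sentence-transformers model).
embeddings = HuggingFaceHubEmbeddings()
vectors = embeddings.embed_documents(chunks)
```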