diff --git a/README.md b/README.md
index aa5ffe30..6c4fcf58 100644
--- a/README.md
+++ b/README.md
@@ -42,7 +42,7 @@ Please see [here](https://langchain.readthedocs.io/en/latest/?) for full documen
 
 - Getting started (installation, setting up the environment, simple examples)
 - How-To examples (demos, integrations, helper functions)
 - Reference (full API docs)
-- Resources (high-level explanation of core concepts) 
+- Resources (high-level explanation of core concepts)
 
 ## 🚀 What can this help with?
diff --git a/docs/modules/indexes/examples/textsplitter.ipynb b/docs/modules/indexes/examples/textsplitter.ipynb
index ac64c9ba..85cd9eda 100644
--- a/docs/modules/indexes/examples/textsplitter.ipynb
+++ b/docs/modules/indexes/examples/textsplitter.ipynb
@@ -43,7 +43,7 @@
    "metadata": {},
    "source": [
     "## Generic Recursive Text Splitting\n",
-    "This text splitter is the recommended one for generic text. It is parameterized by a list of characters. It tries to split on them in order until the chunks are small enough. The default list is `[\"\\n\\n\", \"\\n\", \" \", \"\"]`. This has the affect of trying to keep all paragraphs (and then sentences, and then words) together as long as possible, as those would generically seem to be the strongest semantically related pieces of text.\n",
+    "This text splitter is the recommended one for generic text. It is parameterized by a list of characters. It tries to split on them in order until the chunks are small enough. The default list is `[\"\\n\\n\", \"\\n\", \" \", \"\"]`. This has the effect of trying to keep all paragraphs (and then sentences, and then words) together as long as possible, as those would generically seem to be the strongest semantically related pieces of text.\n",
    "\n",
    "\n",
    "1. How the text is split: by list of characters\n",
diff --git a/docs/modules/indexes/examples/vectorstores.ipynb b/docs/modules/indexes/examples/vectorstores.ipynb
index 8f1191ba..ee30111d 100644
--- a/docs/modules/indexes/examples/vectorstores.ipynb
+++ b/docs/modules/indexes/examples/vectorstores.ipynb
@@ -11,7 +11,7 @@
    "source": [
     "# VectorStores\n",
     "\n",
-    "This notebook showcases basic functionality related to VectorStores. A key part of working with vectorstores is creating the vector to put in them, which is usually created via embeddings. Therefor, it is recommended that you familiarize yourself with the [embedding notebook](embeddings.ipynb) before diving into this.\n",
+    "This notebook showcases basic functionality related to VectorStores. A key part of working with vectorstores is creating the vector to put in them, which is usually created via embeddings. Therefore, it is recommended that you familiarize yourself with the [embedding notebook](embeddings.ipynb) before diving into this.\n",
     "\n",
     "This covers generic high level functionality related to all vector stores. For guides on specific vectorstores, please see the how-to guides [here](../how_to_guides.rst)"
    ]
   }
diff --git a/docs/modules/llms/examples/llm_serialization.ipynb b/docs/modules/llms/examples/llm_serialization.ipynb
index 660cf9e7..da64b197 100644
--- a/docs/modules/llms/examples/llm_serialization.ipynb
+++ b/docs/modules/llms/examples/llm_serialization.ipynb
@@ -7,7 +7,7 @@
    "source": [
     "# LLM Serialization\n",
     "\n",
-    "This notebook walks how to write and read an LLM Configuration to and from disk. This is useful if you want to save the configuration for a given LLM (eg the provider, the temperature, etc)."
+    "This notebook walks through how to write and read an LLM Configuration to and from disk. This is useful if you want to save the configuration for a given LLM (e.g., the provider, the temperature, etc)."
    ]
   },
   {
diff --git a/docs/modules/llms/integrations.rst b/docs/modules/llms/integrations.rst
index deb79be4..55056431 100644
--- a/docs/modules/llms/integrations.rst
+++ b/docs/modules/llms/integrations.rst
@@ -31,13 +31,13 @@ The examples here are all "how-to" guides for how to integrate with various LLM
 
 `Forefront AI <./integrations/forefrontai_example.html>`_: Covers how to utilize the Forefront AI wrapper.
 
-`PromptLayer OpenAI <./integrations/promptlayer_openai.html>`_: Covers how to use `PromptLayer <https://promptlayer.com>`_ with Langchain.
+`PromptLayer OpenAI <./integrations/promptlayer_openai.html>`_: Covers how to use `PromptLayer <https://promptlayer.com>`_ with LangChain.
 
-`Anthropic <./integrations/anthropic_example.html>`_: Covers how to use Anthropic models with Langchain.
+`Anthropic <./integrations/anthropic_example.html>`_: Covers how to use Anthropic models with LangChain.
 
 `DeepInfra <./integrations/deepinfra_example.html>`_: Covers how to utilize the DeepInfra wrapper.
 
-`Self-Hosted Models (via Runhouse) <./integrations/self_hosted_examples.html>`_: Covers how to run models on existing or on-demand remote compute with Langchain.
+`Self-Hosted Models (via Runhouse) <./integrations/self_hosted_examples.html>`_: Covers how to run models on existing or on-demand remote compute with LangChain.
 
 
 .. toctree::
diff --git a/docs/modules/llms/key_concepts.md b/docs/modules/llms/key_concepts.md
index 672db571..47512248 100644
--- a/docs/modules/llms/key_concepts.md
+++ b/docs/modules/llms/key_concepts.md
@@ -2,9 +2,9 @@
 
 ## LLMs
 Wrappers around Large Language Models (in particular, the "generate" ability of large language models) are at the core of LangChain functionality.
-The core method that these classes expose is a `generate` method, which takes in a list of strings and returns an LLMResult (which contains outputs for all input strings).
-Read more about LLMResult. This interface operates over a list of strings because often the lists of strings can be batched to the LLM provider,
-providing speed and efficiency gains.
+The core method that these classes expose is a `generate` method, which takes in a list of strings and returns an LLMResult (which contains outputs for all input strings). Read more about [LLMResult](#llmresult).
+
+This interface operates over a list of strings because often the lists of strings can be batched to the LLM provider, providing speed and efficiency gains.
 
 For convenience, this class also exposes a simpler, more user friendly interface (via `__call__`). The interface for this takes in a single string, and returns a single string.
 
diff --git a/docs/modules/prompts/examples/custom_example_selector.md b/docs/modules/prompts/examples/custom_example_selector.md
index da9b648c..41b8e788 100644
--- a/docs/modules/prompts/examples/custom_example_selector.md
+++ b/docs/modules/prompts/examples/custom_example_selector.md
@@ -1,6 +1,6 @@
 # Create a custom example selector
 
-In this tutorial, we'll create a custom example selector that selects examples every alternate example given a list of examples.
+In this tutorial, we'll create a custom example selector that selects every alternate example from a given list of examples.
 
 An `ExampleSelector` must implement two methods:
 
@@ -65,4 +65,4 @@ example_selector.examples
 # Select examples
 example_selector.select_examples({"foo": "foo"})
 # -> array([{'foo': '1'}, {'foo': '4'}], dtype=object)
-```
\ No newline at end of file
+```
diff --git a/docs/modules/prompts/getting_started.md b/docs/modules/prompts/getting_started.md
index 3093e3f8..1a20ae78 100644
--- a/docs/modules/prompts/getting_started.md
+++ b/docs/modules/prompts/getting_started.md
@@ -8,7 +8,7 @@ In this tutorial, we will learn about:
 
 ## What is a prompt template?
 
-A prompt template refers to a reproducible way to generate a prompt. It contains a text string ("the template"), that can can take in a set of parameters from the end user and generate a prompt.
+A prompt template refers to a reproducible way to generate a prompt. It contains a text string ("the template") that can take in a set of parameters from the end user and generate a prompt.
 
 The prompt template may contain:
 - instructions to the language model,
diff --git a/docs/modules/prompts/key_concepts.md b/docs/modules/prompts/key_concepts.md
index eb5f77f8..8ecb0340 100644
--- a/docs/modules/prompts/key_concepts.md
+++ b/docs/modules/prompts/key_concepts.md
@@ -62,7 +62,7 @@ To learn more about how to provide few shot examples, see [Few Shot Examples](ex
 
 If there are multiple examples that are relevant to a prompt, it is important to select the most relevant examples. Generally, the quality of the response from the LLM can be significantly improved by selecting the most relevant examples. This is because the language model will be able to better understand the context of the prompt, and also potentially learn failure modes to avoid.
 
-To help the user with selecting the most relevant examples, we provide example selectors that select the most relevant based on different criteria, such as length, semantic similarity, etc. The example selector takes in a list of examples and returns a list of selected examples, formatted as a string. The user can also provide their own example selector. To learn more about example selectors, see [Example Selection](example_selection.md).
+To help the user with selecting the most relevant examples, we provide example selectors that select the most relevant examples based on different criteria, such as length, semantic similarity, etc. The example selector takes in a list of examples and returns a list of selected examples, formatted as a string. The user can also provide their own example selector. To learn more about example selectors, see [Example Selectors](examples/example_selectors.ipynb).
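
Two of the documentation passages touched above describe behavior that is easy to pin down in code. First, the `docs/modules/llms/key_concepts.md` hunk contrasts the batched `generate` method with the single-string `__call__` convenience interface. A minimal sketch of the two call styles, assuming the OpenAI wrapper as the concrete LLM (any wrapper exposing `generate` behaves the same way):

```python
from langchain.llms import OpenAI

# Assumes OPENAI_API_KEY is set in the environment.
llm = OpenAI(temperature=0.9)

# Batched interface: many prompts in one call, one LLMResult back.
# result.generations[i] holds the generations for the i-th input string.
result = llm.generate(["Tell me a joke", "Tell me a poem"])
print(result.generations[0][0].text)

# Convenience interface: a single string in, a single string out.
print(llm("Tell me a joke"))
```

Second, the `custom_example_selector.md` hunk shows only the tutorial's intro and its final usage lines. Below is a self-contained sketch of an every-alternate selector. The method names `add_example` and `select_examples` follow LangChain's `BaseExampleSelector` interface (the two methods the tutorial refers to); the class name is illustrative, and plain lists stand in for the numpy array seen in the tutorial's `array(..., dtype=object)` output.

```python
from typing import Dict, List


class AlternateExampleSelector:
    """Keep a list of examples and select every alternate one."""

    def __init__(self, examples: List[Dict[str, str]]) -> None:
        self.examples = examples

    def add_example(self, example: Dict[str, str]) -> None:
        # Store a new example so later selections can include it.
        self.examples.append(example)

    def select_examples(self, input_variables: Dict[str, str]) -> List[Dict[str, str]]:
        # The inputs are ignored here; just return every other stored example.
        return self.examples[::2]


selector = AlternateExampleSelector([{"foo": "1"}, {"foo": "2"}, {"foo": "3"}])
selector.add_example({"foo": "4"})
print(selector.select_examples({"foo": "foo"}))  # -> [{'foo': '1'}, {'foo': '3'}]
```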