diff --git a/docs/docs/expression_language/cookbook/prompt_llm_parser.ipynb b/docs/docs/expression_language/cookbook/prompt_llm_parser.ipynb
index 2449408e02..b7021734d4 100644
--- a/docs/docs/expression_language/cookbook/prompt_llm_parser.ipynb
+++ b/docs/docs/expression_language/cookbook/prompt_llm_parser.ipynb
@@ -30,7 +30,7 @@
    "source": [
     "## PromptTemplate + LLM\n",
     "\n",
-    "The simplest composition is just combing a prompt and model to create a chain that takes user input, adds it to a prompt, passes it to a model, and returns the raw model input.\n",
+    "The simplest composition is just combining a prompt and model to create a chain that takes user input, adds it to a prompt, passes it to a model, and returns the raw model output.\n",
     "\n",
     "Note, you can mix and match PromptTemplate/ChatPromptTemplates and LLMs/ChatModels as you like here."
    ]
@@ -76,7 +76,7 @@
    "id": "7eb9ef50",
    "metadata": {},
    "source": [
-    "Often times we want to attach kwargs that'll be passed to each model call. Here's a few examples of that:"
+    "Oftentimes we want to attach kwargs that'll be passed to each model call. Here are a few examples of that:"
    ]
   },
   {
diff --git a/docs/docs/guides/debugging.md b/docs/docs/guides/debugging.md
index ac3eec7b05..65528a5645 100644
--- a/docs/docs/guides/debugging.md
+++ b/docs/docs/guides/debugging.md
@@ -376,7 +376,7 @@ agent.run("Who directed the 2023 film Oppenheimer and what is their age? What is
 
 
 
-### `set_vebose(True)`
+### `set_verbose(True)`
 
 Setting the `verbose` flag will print out inputs and outputs in a slightly more readable format and will skip logging certain raw outputs (like the token usage stats for an LLM call) so that you can focus on application logic.
 
@@ -656,6 +656,6 @@ agent.run("Who directed the 2023 film Oppenheimer and what is their age? What is
 
 ## Other callbacks
 
-`Callbacks` are what we use to execute any functionality within a component outside the primary component logic. All of the above solutions use `Callbacks` under the hood to log intermediate steps of components. There's a number of `Callbacks` relevant for debugging that come with LangChain out of the box, like the [FileCallbackHandler](/docs/modules/callbacks/how_to/filecallbackhandler). You can also implement your own callbacks to execute custom functionality.
+`Callbacks` are what we use to execute any functionality within a component outside the primary component logic. All of the above solutions use `Callbacks` under the hood to log intermediate steps of components. There are a number of `Callbacks` relevant for debugging that come with LangChain out of the box, like the [FileCallbackHandler](/docs/modules/callbacks/how_to/filecallbackhandler). You can also implement your own callbacks to execute custom functionality.
 
 See here for more info on [Callbacks](/docs/modules/callbacks/), how to use them, and customize them.
diff --git a/docs/docs/guides/deployments/index.mdx b/docs/docs/guides/deployments/index.mdx
index 8299aa0243..92bf636414 100644
--- a/docs/docs/guides/deployments/index.mdx
+++ b/docs/docs/guides/deployments/index.mdx
@@ -1,6 +1,6 @@
 # Deployment
 
-In today's fast-paced technological landscape, the use of Large Language Models (LLMs) is rapidly expanding. As a result, it's crucial for developers to understand how to effectively deploy these models in production environments. LLM interfaces typically fall into two categories:
+In today's fast-paced technological landscape, the use of Large Language Models (LLMs) is rapidly expanding. As a result, it is crucial for developers to understand how to effectively deploy these models in production environments. LLM interfaces typically fall into two categories:
 
 - **Case 1: Utilizing External LLM Providers (OpenAI, Anthropic, etc.)**
   In this scenario, most of the computational burden is handled by the LLM providers, while LangChain simplifies the implementation of business logic around these services. This approach includes features such as prompt templating, chat message generation, caching, vector embedding database creation, preprocessing, etc.
diff --git a/libs/experimental/langchain_experimental/comprehend_moderation/base_moderation.py b/libs/experimental/langchain_experimental/comprehend_moderation/base_moderation.py
index 1005c183a8..9b97a42a00 100644
--- a/libs/experimental/langchain_experimental/comprehend_moderation/base_moderation.py
+++ b/libs/experimental/langchain_experimental/comprehend_moderation/base_moderation.py
@@ -61,7 +61,7 @@ class BaseModeration:
             input_text = message.content
         else:
             raise ValueError(
-                f"Invalid input type {type(input)}. "
+                f"Invalid input type {type(input_text)}. "
                 "Must be a PromptValue, str, or list of BaseMessages."
             )
         return input_text
diff --git a/libs/langchain/langchain/llms/baidu_qianfan_endpoint.py b/libs/langchain/langchain/llms/baidu_qianfan_endpoint.py
index 95914fcbe5..53b79085e5 100644
--- a/libs/langchain/langchain/llms/baidu_qianfan_endpoint.py
+++ b/libs/langchain/langchain/llms/baidu_qianfan_endpoint.py
@@ -96,7 +96,7 @@ class QianfanLLMEndpoint(LLM):
             values["client"] = qianfan.Completion(**params)
         except ImportError:
-            raise ValueError(
+            raise ImportError(
                 "qianfan package not found, please install it with "
                 "`pip install qianfan`"
             )
diff --git a/libs/langchain/langchain/llms/databricks.py b/libs/langchain/langchain/llms/databricks.py
index fb4a3674b3..6488244ff4 100644
--- a/libs/langchain/langchain/llms/databricks.py
+++ b/libs/langchain/langchain/llms/databricks.py
@@ -92,7 +92,7 @@ def get_repl_context() -> Any:
         return get_context()
     except ImportError:
-        raise ValueError(
+        raise ImportError(
             "Cannot access dbruntime, not running inside a Databricks notebook."
         )
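For context on the first `prompt_llm_parser.ipynb` hunk: the composition it documents is a prompt piped into a model. A minimal sketch of that chain, assuming `langchain` with OpenAI support is installed and an API key is configured (the joke template and input are illustrative):

```python
from langchain.chat_models import ChatOpenAI
from langchain.prompts import ChatPromptTemplate

prompt = ChatPromptTemplate.from_template("tell me a joke about {foo}")
model = ChatOpenAI()

# The chain takes user input, formats it into the prompt, passes it to the
# model, and returns the raw model output (a message, not a parsed string).
chain = prompt | model
result = chain.invoke({"foo": "bears"})
```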
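The second notebook hunk concerns attaching kwargs to each model call. Reusing `prompt` and `model` from the sketch above, the usual mechanism is `.bind()`, which forwards its keyword arguments to every invocation of the model made through the chain (the stop sequence here is just an example):

```python
# Stop generation at the first newline on every call made via this chain.
chain = prompt | model.bind(stop=["\n"])
```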
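The `set_vebose` to `set_verbose` fix in `debugging.md` corrects the name of a real global toggle. Usage is a single call, assuming a `langchain` version where the global setters live in `langchain.globals`:

```python
from langchain.globals import set_verbose

# Subsequent chain/agent runs print inputs and outputs in a readable form,
# skipping some raw payloads such as token usage stats.
set_verbose(True)
```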
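On the "Other callbacks" paragraph: implementing your own callback means subclassing `BaseCallbackHandler` and overriding only the hooks you care about. A toy sketch (the class name is made up):

```python
from typing import Any, Dict, List

from langchain.callbacks.base import BaseCallbackHandler


class PromptLoggerHandler(BaseCallbackHandler):
    """Hypothetical handler that prints every prompt sent to an LLM."""

    def on_llm_start(
        self, serialized: Dict[str, Any], prompts: List[str], **kwargs: Any
    ) -> None:
        for prompt in prompts:
            print(f"Sending prompt:\n{prompt}")
```

The handler would then be passed in via the `callbacks` argument when running a chain or model.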
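The `base_moderation.py` change only fixes which variable the error message reports. For readers unfamiliar with that code, the surrounding logic normalizes several input types to plain text; a simplified stand-in, not the library's exact code, looks roughly like:

```python
from typing import Any


def to_text(value: Any) -> str:
    """Normalize a supported input to plain text (simplified sketch)."""
    if isinstance(value, str):
        return value
    if hasattr(value, "to_string"):  # e.g. a PromptValue
        return value.to_string()
    if isinstance(value, list) and all(hasattr(m, "content") for m in value):
        return "".join(m.content for m in value)  # e.g. BaseMessages
    # The fixed f-string reports the type of the value actually received.
    raise ValueError(
        f"Invalid input type {type(value)}. "
        "Must be a PromptValue, str, or list of BaseMessages."
    )
```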
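Finally, the two `raise ValueError` to `raise ImportError` changes share a rationale: when an optional dependency fails to import, re-raising `ImportError` keeps the exception type truthful, so callers can catch the missing-package case specifically with `except ImportError` rather than a generic `ValueError`. The pattern as a standalone sketch (the helper name is hypothetical):

```python
def _require_qianfan():
    """Import the optional qianfan dependency or fail with a clear message."""
    try:
        import qianfan  # optional dependency, not installed by default
    except ImportError:
        # Re-raise as ImportError, not ValueError, so the exception type
        # matches the actual failure mode.
        raise ImportError(
            "qianfan package not found, please install it with "
            "`pip install qianfan`"
        )
    return qianfan
```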