From 8a320e55a07069bdd7b0a029bc6c23a5bf6c504f Mon Sep 17 00:00:00 2001
From: Aashish Saini
Date: Thu, 10 Aug 2023 22:47:09 +0530
Subject: [PATCH] Corrected grammatical errors and spelling mistakes in the
 index.mdx file. (#9026)
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

Expressing gratitude to the creators for crafting this remarkable application. 🙌
This PR fixes grammar and spelling in the documentation for a polished reader experience.

Your feedback is valuable as always, @baskaryan, @hwchase17, @eyurtsev
---
 docs/docs_skeleton/docs/guides/evaluation/index.mdx | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/docs/docs_skeleton/docs/guides/evaluation/index.mdx b/docs/docs_skeleton/docs/guides/evaluation/index.mdx
index a608527bed..e2d04b1228 100644
--- a/docs/docs_skeleton/docs/guides/evaluation/index.mdx
+++ b/docs/docs_skeleton/docs/guides/evaluation/index.mdx
@@ -8,9 +8,9 @@ import DocCardList from "@theme/DocCardList";
 
 Building applications with language models involves many moving parts. One of the most critical components is ensuring that the outcomes produced by your models are reliable and useful across a broad array of inputs, and that they work well with your application's other software components. Ensuring reliability usually boils down to some combination of application design, testing & evaluation, and runtime checks.
 
-The guides in this section review the APIs and functionality LangChain provides to help yous better evaluate your applications. Evaluation and testing are both critical when thinking about deploying LLM applications, since production environments require repeatable and useful outcomes.
+The guides in this section review the APIs and functionality LangChain provides to help you better evaluate your applications. Evaluation and testing are both critical when thinking about deploying LLM applications, since production environments require repeatable and useful outcomes.
 
-LangChain offers various types of evaluators to help you measure performance and integrity on diverse data, and we hope to encourage the the community to create and share other useful evaluators so everyone can improve. These docs will introduce the evaluator types, how to use them, and provide some examples of their use in real-world scenarios.
+LangChain offers various types of evaluators to help you measure performance and integrity on diverse data, and we hope to encourage the community to create and share other useful evaluators so everyone can improve. These docs will introduce the evaluator types, how to use them, and provide some examples of their use in real-world scenarios.
 
 Each evaluator type in LangChain comes with ready-to-use implementations and an extensible API that allows for customization according to your unique requirements. Here are some of the types of evaluators we offer:
 
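The docs page touched by this patch describes LangChain's built-in evaluators only at a high level. As a rough illustration of what that section covers, here is a minimal sketch using the `load_evaluator` helper from `langchain.evaluation` as it existed around the time of this patch; the evaluator choice, example strings, and the `rapidfuzz` dependency are assumptions for illustration and are not part of the patch itself.

```python
# Minimal sketch of the evaluator API the patched docs page introduces.
# Assumes langchain (mid-2023 era) plus rapidfuzz are installed:
#   pip install langchain rapidfuzz
from langchain.evaluation import load_evaluator

# Load a string-distance evaluator (no LLM call or API key required).
evaluator = load_evaluator("string_distance")

# Compare a model prediction against a reference answer.
result = evaluator.evaluate_strings(
    prediction="LangChain ships several built-in evaluators.",
    reference="LangChain offers various types of evaluators.",
)

# `result` is a dict whose "score" key holds the string distance
# (lower means the prediction is closer to the reference).
print(result["score"])
```

Other evaluator types mentioned in the docs (criteria, QA correctness, trajectory, and so on) follow the same loading pattern but typically require an LLM to act as the grader.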