From 1f40d3e0944604581f76a48fdfc961ac3017c0f2 Mon Sep 17 00:00:00 2001
From: William FH <13333726+hinthornw@users.noreply.github.com>
Date: Tue, 25 Jul 2023 12:26:39 -0700
Subject: [PATCH] Update Broken Links (#8247)

---
 docs/docs_skeleton/docs/guides/evaluation/index.mdx | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/docs/docs_skeleton/docs/guides/evaluation/index.mdx b/docs/docs_skeleton/docs/guides/evaluation/index.mdx
index 51c1a94942..78227519bf 100644
--- a/docs/docs_skeleton/docs/guides/evaluation/index.mdx
+++ b/docs/docs_skeleton/docs/guides/evaluation/index.mdx
@@ -11,14 +11,14 @@ Language models can be unpredictable. This makes it challenging to ship reliable
 LangChain exposes different types of evaluators for common types of evaluation. Each type has off-the-shelf implementations you can use to get started, as well as an extensible API so you can create your own or contribute improvements for everyone to use. The following sections have example notebooks for you to get started.
 
-- [String Evaluators](/docs/modules/evaluation/string/): Evaluate the predicted string for a given input, usually against a reference string
-- [Trajectory Evaluators](/docs/modules/evaluation/trajectory/): Evaluate the whole trajectory of agent actions
-- [Comparison Evaluators](/docs/modules/evaluation/comparison/): Compare predictions from two runs on a common input
+- [String Evaluators](/docs/guides/evaluation/string/): Evaluate the predicted string for a given input, usually against a reference string
+- [Trajectory Evaluators](/docs/guides/evaluation/trajectory/): Evaluate the whole trajectory of agent actions
+- [Comparison Evaluators](/docs/guides/evaluation/comparison/): Compare predictions from two runs on a common input
 
 This section also provides some additional examples of how you could use these evaluators for different scenarios or apply to different chain implementations in the LangChain library. Some examples include:
 
-- [Preference Scoring Chain Outputs](/docs/modules/evaluation/examples/comparisons): An example using a comparison evaluator on different models or prompts to select statistically significant differences in aggregate preference scores
+- [Preference Scoring Chain Outputs](/docs/guides/evaluation/examples/comparisons): An example using a comparison evaluator on different models or prompts to select statistically significant differences in aggregate preference scores
 
 ## Reference Docs
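
Not part of the patch itself, but for context: below is a minimal sketch of the string and comparison evaluator types the updated links describe, using LangChain's public `load_evaluator` entry point as it existed around this commit (July 2023). The prompts and predictions are made-up illustrations, and the default graders assume an OpenAI chat model, so an `OPENAI_API_KEY` must be configured.

```python
# Sketch of the evaluator types referenced in the docs page this patch edits.
# Assumes `langchain` (mid-2023) and `openai` are installed, with
# OPENAI_API_KEY set in the environment for the default grading model.
from langchain.evaluation import load_evaluator

# String evaluator: grade a single predicted string for a given input,
# here against the built-in "conciseness" criterion.
string_evaluator = load_evaluator("criteria", criteria="conciseness")
result = string_evaluator.evaluate_strings(
    prediction="LangChain is a framework for building LLM applications.",
    input="What is LangChain?",
)
print(result)  # dict with "reasoning", "value" (Y/N), and a numeric "score"

# Comparison evaluator: pick the preferred of two predictions on a common
# input, the building block behind the preference-scoring example linked above.
pairwise_evaluator = load_evaluator("pairwise_string")
comparison = pairwise_evaluator.evaluate_string_pairs(
    prediction="Paris is the capital of France.",
    prediction_b="I think it might be Lyon, but I'm not sure.",
    input="What is the capital of France?",
)
print(comparison)
```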