import DocCardList from "@theme/DocCardList";

# Evaluation

Building applications with language models involves many moving parts. One of the most critical components is ensuring that the outcomes produced by your models are reliable and useful across a broad array of inputs, and that they work well with your application's other software components. Ensuring reliability usually boils down to some combination of application design, testing & evaluation, and runtime checks.

The guides in this section review the APIs and functionality LangChain provides to help you better evaluate your applications. Evaluation and testing are both critical when thinking about deploying LLM applications, since production environments require repeatable and useful outcomes.

LangChain offers various types of evaluators to help you measure performance and integrity on diverse data, and we hope to encourage the community to create and share other useful evaluators so everyone can improve. These docs introduce the evaluator types, show how to use them, and give examples of their use in real-world scenarios.

These built-in evaluators all integrate smoothly with [LangSmith](/docs/langsmith), allowing you to create feedback loops that improve your application over time and prevent regressions.

Each evaluator type in LangChain comes with ready-to-use implementations and an extensible API that allows for customization according to your unique requirements. Here are some of the types of evaluators we offer:

- [String Evaluators](/docs/guides/productionization/evaluation/string/): These evaluators assess the predicted string for a given input, usually comparing it against a reference string.
- [Trajectory Evaluators](/docs/guides/productionization/evaluation/trajectory/): These are used to evaluate the entire trajectory of agent actions.
- [Comparison Evaluators](/docs/guides/productionization/evaluation/comparison/): These evaluators are designed to compare predictions from two runs on a common input.

These evaluators can be used across various scenarios and can be applied to different chain and LLM implementations in the LangChain library. They share a common entry point, `load_evaluator`, shown in the sketch below.
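
For instance, here is a minimal sketch of that pattern using the built-in exact-match string evaluator (the prediction and reference strings are placeholders; the keys in the returned dict vary by evaluator):

```python
from langchain.evaluation import load_evaluator

# "exact_match" is a deterministic string evaluator: no LLM or extra packages needed.
evaluator = load_evaluator("exact_match")

# Score a prediction against a reference answer (placeholder strings).
result = evaluator.evaluate_strings(
    prediction="Paris",
    reference="Paris",
)
print(result)  # {'score': 1} when the strings match, {'score': 0} otherwise
```

LLM-as-judge evaluators such as `criteria` or `qa` are loaded the same way, with an `llm` argument to choose the grading model.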
We are also working to share guides and cookbooks that demonstrate how to use these evaluators in real-world scenarios, such as:

- [Chain Comparisons](/docs/guides/productionization/evaluation/examples/comparisons): This example uses a comparison evaluator to predict the preferred output. It reviews ways to measure confidence intervals to select statistically significant differences in aggregate preference scores across different models or prompts. A minimal pairwise-comparison sketch follows below.
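
For illustration, a pairwise string evaluator compares two candidate outputs for the same input; the judge model and the example strings below are assumptions made for this sketch, not part of the cookbook:

```python
from langchain.evaluation import load_evaluator
from langchain_openai import ChatOpenAI

# Pairwise evaluators use an LLM as the judge; any chat model supported by
# LangChain works here (gpt-4o-mini is only an illustrative choice).
judge = ChatOpenAI(model="gpt-4o-mini", temperature=0)
evaluator = load_evaluator("pairwise_string", llm=judge)

result = evaluator.evaluate_string_pairs(
    input="What is the capital of France?",        # shared input
    prediction="Paris is the capital of France.",  # output from system A
    prediction_b="It might be Lyon.",              # output from system B
)
print(result)  # typically includes "value" ("A" or "B"), "score", and "reasoning"
```

Repeating this over a dataset of inputs yields the aggregate preference scores that the Chain Comparisons guide analyzes.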
## LangSmith Evaluation
LangSmith provides an integrated evaluation and tracing framework that allows you to check for regressions, compare systems, and easily identify and fix any sources of errors and performance issues. Check out the docs on [LangSmith Evaluation](https://docs.smith.langchain.com/evaluation) and additional [cookbooks](https://docs.smith.langchain.com/cookbook) for more detailed information on evaluating your applications.
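
As a minimal sketch of that workflow (assuming a LangSmith API key is configured and that a dataset with the hypothetical name `my-eval-dataset` already exists):

```python
from langsmith import Client
from langchain.smith import RunEvalConfig, run_on_dataset
from langchain_openai import ChatOpenAI

# Requires LANGCHAIN_API_KEY in the environment and an existing LangSmith
# dataset; "my-eval-dataset" is a placeholder name.
client = Client()
eval_config = RunEvalConfig(evaluators=["qa"])  # grade outputs against dataset labels

run_on_dataset(
    client=client,
    dataset_name="my-eval-dataset",
    llm_or_chain_factory=ChatOpenAI(temperature=0),
    evaluation=eval_config,
)
```

The results appear as a test run in LangSmith, where you can compare runs over time and spot regressions.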
## LangChain benchmarks
Your application's quality is a function of both the LLM you choose and the prompting and data retrieval strategies you use to provide model context. We have published a number of benchmark tasks within the [LangChain Benchmarks](https://langchain-ai.github.io/langchain-benchmarks/) package to grade different LLM systems on tasks such as:
- Agent tool use
- Retrieval-augmented question-answering
- Structured extraction

Check out the docs for examples and leaderboard information.
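
As a quick way to see what is available (assuming the `langchain-benchmarks` package and its task `registry`, described in the benchmarks docs), you can list the published tasks before wiring up a full evaluation:

```python
# pip install -U langchain-benchmarks
from langchain_benchmarks import registry

# The registry enumerates the published benchmark tasks (agent tool use,
# retrieval-augmented Q&A, structured extraction) and the datasets behind them.
print(registry)
```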
## Reference Docs
For detailed information on the available evaluators, including how to instantiate, configure, and customize them, check out the [reference documentation](https://api.python.langchain.com/en/latest/langchain_api_reference.html#module-langchain.evaluation) directly.

<DocCardList />