mirror of
https://github.com/hwchase17/langchain
synced 2024-10-31 15:20:26 +00:00
6ad6bb46c4
Description: Adding `DeepEval`, which provides an opinionated framework for testing and evaluating LLMs
Issue: Missing DeepEval
Dependencies: Optional DeepEval dependency
Tag maintainer: @baskaryan (not 100% sure)
Twitter handle: https://twitter.com/ColabDog
# Confident AI

![Confident - Unit Testing for LLMs](https://github.com/confident-ai/deepeval)

> [DeepEval](https://confident-ai.com) package for unit testing LLMs.
> Using Confident, everyone can build robust language models through faster iterations
> using both unit testing and integration testing. We provide support for each step in the iteration,
> from synthetic data creation to testing.

## Installation and Setup

First, you'll need to install the `DeepEval` Python package as follows:

```bash
pip install deepeval
```

Afterwards, you can get started in just a few lines of code.

```python
from langchain.callbacks.confident_callback import DeepEvalCallbackHandler
```
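To make the callback-based evaluation pattern concrete, here is a minimal, self-contained sketch of what such a handler does: it intercepts each LLM generation, scores it with a metric, and records whether the score clears a threshold. All names below (`EvalCallback`, `keyword_overlap`) are invented for illustration and are NOT part of DeepEval's or LangChain's actual API; consult the DeepEval documentation for the real metric and handler classes.

```python
# Hypothetical stand-in for a DeepEval-style callback handler.
# Class and function names are invented for illustration only.

class EvalCallback:
    """Records LLM outputs and scores each one with a metric function."""

    def __init__(self, metric, threshold):
        self.metric = metric        # scoring function: (prompt, output) -> float
        self.threshold = threshold  # minimum acceptable score
        self.results = []

    def on_llm_end(self, prompt, output):
        # Called after each generation; score the output and record the verdict.
        score = self.metric(prompt, output)
        self.results.append({
            "prompt": prompt,
            "output": output,
            "score": score,
            "passed": score >= self.threshold,
        })


def keyword_overlap(prompt, output):
    """Toy relevancy metric: fraction of prompt words echoed in the output."""
    prompt_words = set(prompt.lower().split())
    output_words = set(output.lower().split())
    return len(prompt_words & output_words) / max(len(prompt_words), 1)


callback = EvalCallback(metric=keyword_overlap, threshold=0.3)
callback.on_llm_end(
    "What is unit testing?",
    "Unit testing is testing code in small units.",
)
print(callback.results[0]["score"])   # 0.5: "is" and "unit" overlap, 2 of 4 words
print(callback.results[0]["passed"])  # True
```

A real handler would be passed to the LLM via its `callbacks` argument so that scoring happens automatically on every generation, rather than being invoked by hand as above.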