From 30d8d1d3d0fce84108c51f7b9862f895ef84e1f4 Mon Sep 17 00:00:00 2001
From: Josh Reini <60949774+joshreini1@users.noreply.github.com>
Date: Wed, 5 Jul 2023 14:04:55 -0400
Subject: [PATCH] add trulens integration (#7096)

Description: Add TruLens integration.

Twitter: @trulensml

For review:
- Tracing: @agola11
- Tools: @hinthornw
---
 .../extras/ecosystem/integrations/trulens.mdx | 56 +++++++++++++++++++
 1 file changed, 56 insertions(+)
 create mode 100644 docs/extras/ecosystem/integrations/trulens.mdx

diff --git a/docs/extras/ecosystem/integrations/trulens.mdx b/docs/extras/ecosystem/integrations/trulens.mdx
new file mode 100644
index 0000000000..8748d19b44
--- /dev/null
+++ b/docs/extras/ecosystem/integrations/trulens.mdx
@@ -0,0 +1,56 @@
+# TruLens
+
+This page covers how to use [TruLens](https://trulens.org) to evaluate and track LLM apps built on LangChain.
+
+## What is TruLens?
+
+TruLens is an [open-source](https://github.com/truera/trulens) package that provides instrumentation and evaluation tools for large language model (LLM) based applications.
+
+## Quick start
+
+Once you've created your LLM chain, you can use TruLens to evaluate and track it. TruLens provides a number of [out-of-the-box feedback functions](https://www.trulens.org/trulens_eval/feedback_functions/) and is also an extensible framework for LLM evaluation.
+
+```python
+# Create feedback functions.
+from trulens_eval.feedback import Feedback, Huggingface, OpenAI
+
+# Initialize the HuggingFace- and OpenAI-based feedback function collection classes.
+hugs = Huggingface()
+openai = OpenAI()
+
+# Language match between the main app input and the main app output,
+# checked with a HuggingFace model.
+lang_match = Feedback(hugs.language_match).on_input_output()
+
+# Question/answer relevance between the overall question and answer,
+# evaluated on the main app input and main app output.
+qa_relevance = Feedback(openai.relevance).on_input_output()
+
+# Toxicity of the input alone.
+toxicity = Feedback(openai.toxicity).on_input()
+```
+
+After you've set up feedback function(s) for evaluating your LLM, you can wrap your application with TruChain to get detailed tracing, logging, and evaluation of your LLM app.
+
+```python
+from trulens_eval import TruChain
+
+# Wrap your chain (the LangChain chain created above) with TruChain.
+truchain = TruChain(
+    chain,
+    app_id='Chain1_ChatApplication',
+    feedbacks=[lang_match, qa_relevance, toxicity]
+)
+# Any `feedbacks` specified here will be evaluated and logged whenever the chain is used.
+# A Spanish input gives the language-match feedback function a mismatch to flag
+# if the app responds in English.
+truchain("que hora es?")
+```
+
+Now you can explore your LLM-based application!
+
+Doing so will help you understand how your LLM application is performing at a glance. As you iterate on new versions of your LLM application, you can compare their performance across all of the quality metrics you've set up. You'll also be able to view evaluations at the record level and explore the chain metadata for each record.
+
+```python
+from trulens_eval import Tru
+
+tru = Tru()
+tru.run_dashboard()  # open a Streamlit app to explore
+```
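+
+If you prefer to work with the evaluation results programmatically rather than in the dashboard, the logged records and feedback scores can also be pulled into a pandas DataFrame. A minimal sketch, assuming the `get_records_and_feedback` accessor and the `Chain1_ChatApplication` app id used above:
+
+```python
+from trulens_eval import Tru
+
+tru = Tru()
+
+# Retrieve the logged records for the wrapped chain as a DataFrame,
+# along with the names of the feedback result columns it contains.
+records, feedback_cols = tru.get_records_and_feedback(
+    app_ids=['Chain1_ChatApplication']
+)
+print(records.head())
+print(feedback_cols)
+```
+
+For more information on TruLens, visit [trulens.org](https://www.trulens.org/).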