### Integration of Infino with LangChain for Enhanced Observability

This PR integrates [Infino](https://github.com/infinohq/infino), an open-source observability platform written in Rust for storing metrics and logs at scale, with LangChain, giving users a streamlined and efficient way to track and record LangChain experiments. By incorporating Infino into LangChain, users can gain valuable insights and easily analyze the behavior of their language models.

#### Please refer to the following files related to the integration:

- `InfinoCallbackHandler`: A [callback handler](https://github.com/naman-modi/langchain/blob/feature/infino-integration/langchain/callbacks/infino_callback.py) specifically designed for storing chain responses within Infino.
- Example `infino.ipynb` file: A comprehensive notebook named [infino.ipynb](https://github.com/naman-modi/langchain/blob/feature/infino-integration/docs/extras/modules/callbacks/integrations/infino.ipynb) has been included to guide users on effectively leveraging Infino for tracking LangChain requests.
- [Integration Doc](https://github.com/naman-modi/langchain/blob/feature/infino-integration/docs/extras/ecosystem/integrations/infino.mdx) for the Infino integration.

By integrating Infino, LangChain users gain access to powerful visualization and debugging capabilities. Infino enables easy tracking of inputs, outputs, token usage, and execution time of LLMs. This comprehensive observability ensures a deeper understanding of individual executions and facilitates effective debugging.

Co-authors: @vinaykakade @savannahar68

Co-authored-by: Vinay Kakade <vinaykakade@gmail.com>
# Infino
>[Infino](https://github.com/infinohq/infino) is an open-source observability platform that stores both metrics and application logs together.
Key features of Infino include:

- Metrics Tracking: Capture the time taken by the LLM to handle each request, errors, the number of tokens used, and a cost indication for the particular LLM.
- Data Tracking: Log and store prompt, request, and response data for each LangChain interaction.
- Graph Visualization: Generate basic graphs over time, depicting metrics such as request duration, error occurrences, token count, and cost.
## Installation and Setup
First, you'll need to install the `infinopy` Python package as follows:
```bash
pip install infinopy
```
If you already have an Infino server running, you're good to go; if you don't, follow the next steps to start it:
- Make sure you have Docker installed
- Run the following in your terminal:

```bash
docker run --rm --detach --name infino-example -p 3000:3000 infinohq/infino:latest
```
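To confirm the server actually came up before wiring it into LangChain, you can probe the port with a small stdlib-only helper. This is a generic TCP check, not part of `infinopy`; it only assumes the default port `3000` from the `docker run` command above.

```python
import socket


def port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False


# Once the Infino container is running, this should report True:
# print(port_open("localhost", 3000))
```

You can also simply run `docker ps --filter "name=infino-example"` and check that the container is listed.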
## Using Infino
See a [usage example of `InfinoCallbackHandler`](/docs/modules/callbacks/integrations/infino.html).
```python
from langchain.callbacks import InfinoCallbackHandler
```
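For orientation, the handler is attached per call like any other LangChain callback. The sketch below is illustrative only: it assumes an Infino server on the default port, an `OPENAI_API_KEY` in the environment, and constructor parameters (`model_id`, `model_version`) as used in the example notebook linked above, which remains the authoritative reference.

```python
from langchain.callbacks import InfinoCallbackHandler
from langchain.llms import OpenAI

# Handler that records inputs, outputs, token usage, and latency in Infino.
# model_id / model_version are free-form labels used to tag the stored records
# (the values here are hypothetical placeholders).
handler = InfinoCallbackHandler(
    model_id="my-experiment",
    model_version="0.1",
)

llm = OpenAI(temperature=0.1)  # requires OPENAI_API_KEY to be set

# Pass the handler with the call; each request is tracked in Infino.
llm.predict("Tell me a joke about observability.", callbacks=[handler])
```

The recorded metrics can then be queried and graphed as described in the notebook.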