mirror of https://github.com/hwchase17/langchain
Readme rewrite (#12615)
Co-authored-by: Lance Martin <lance@langchain.dev>
Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
# anthropic-iterative-search

This template will create a virtual research assistant with the ability to search Wikipedia to find answers to your questions.

It is heavily inspired by [this notebook](https://github.com/anthropics/anthropic-cookbook/blob/main/long_context/wikipedia-search-cookbook.ipynb).

## Environment Setup

Set the `ANTHROPIC_API_KEY` environment variable to access the Anthropic models.

## Usage

To use this package, you should first have the LangChain CLI installed:

```shell
pip install -U "langchain-cli[serve]"
```

To create a new LangChain project and install this as the only package, you can do:

```shell
langchain app new my-app --package anthropic-iterative-search
```

If you want to add this to an existing project, you can just run:

```shell
langchain app add anthropic-iterative-search
```

And add the following code to your `server.py` file:

```python
from anthropic_iterative_search import chain as anthropic_iterative_search_chain

add_routes(app, anthropic_iterative_search_chain, path="/anthropic-iterative-search")
```

(Optional) Let's now configure LangSmith.
LangSmith will help us trace, monitor and debug LangChain applications.
LangSmith is currently in private beta; you can sign up [here](https://smith.langchain.com/).
If you don't have access, you can skip this section.

```shell
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project>  # if not specified, defaults to "default"
```

If you are inside this directory, then you can spin up a LangServe instance directly by:

```shell
langchain serve
```

This will start the FastAPI app with a server running locally at
[http://localhost:8000](http://localhost:8000)

We can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)
We can access the playground at [http://127.0.0.1:8000/anthropic-iterative-search/playground](http://127.0.0.1:8000/anthropic-iterative-search/playground)

We can access the template from code with:

```python
from langserve.client import RemoteRunnable

runnable = RemoteRunnable("http://localhost:8000/anthropic-iterative-search")
```
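Continuing the snippet above, a hypothetical call might look like this; the `query` input key is an assumption, so check the template's `chain.py` or the playground for the actual input schema:

```python
# Hypothetical input key ("query") — verify against the template's schema.
print(runnable.invoke({"query": "Which year was the Eiffel Tower built?"}))
```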
# csv-agent

This template uses a [csv agent](https://python.langchain.com/docs/integrations/toolkits/csv) with tools (Python REPL) and memory (vectorstore) for interaction (question-answering) with text data.

## Environment Setup

Set the `OPENAI_API_KEY` environment variable to access the OpenAI models.

To set up the environment, the `ingest.py` script should be run to handle the ingestion into a vectorstore, as shown below.
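For example, assuming `ingest.py` sits at the root of this package, that step would look like:

```shell
python ingest.py
```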
## Usage

To use this package, you should first have the LangChain CLI installed:

```shell
pip install -U "langchain-cli[serve]"
```

To create a new LangChain project and install this as the only package, you can do:

```shell
langchain app new my-app --package csv-agent
```

If you want to add this to an existing project, you can just run:

```shell
langchain app add csv-agent
```

And add the following code to your `server.py` file:

```python
from csv_agent.agent import chain as csv_agent_chain

add_routes(app, csv_agent_chain, path="/csv-agent")
```

(Optional) Let's now configure LangSmith.
LangSmith will help us trace, monitor and debug LangChain applications.
LangSmith is currently in private beta; you can sign up [here](https://smith.langchain.com/).
If you don't have access, you can skip this section.

```shell
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project>  # if not specified, defaults to "default"
```

If you are inside this directory, then you can spin up a LangServe instance directly by:

```shell
langchain serve
```

This will start the FastAPI app with a server running locally at
[http://localhost:8000](http://localhost:8000)

We can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)
We can access the playground at [http://127.0.0.1:8000/csv-agent/playground](http://127.0.0.1:8000/csv-agent/playground)

We can access the template from code with:

```python
from langserve.client import RemoteRunnable

runnable = RemoteRunnable("http://localhost:8000/csv-agent")
```
# elastic-query-generator

This template allows interacting with Elasticsearch analytics databases in natural language using LLMs.

It builds search queries via the Elasticsearch DSL API (filters and aggregations).

## Environment Setup

Set the `OPENAI_API_KEY` environment variable to access the OpenAI models.

### Installing Elasticsearch

There are a number of ways to run Elasticsearch. However, one recommended way is through Elastic Cloud.

Create a free trial account on [Elastic Cloud](https://cloud.elastic.co/registration?utm_source=langchain&utm_content=langserve).

With a deployment, update the connection string.

The password and connection (Elasticsearch URL) can be found on the deployment console.
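As an illustration only — the variable name below is a hypothetical placeholder, not necessarily what this package reads, so check its code for the actual setting — the connection details could be exported like:

```shell
# Hypothetical variable name; verify against this package's configuration.
export ELASTIC_SEARCH_SERVER="https://elastic:<password>@<es-url>:9243"
```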
Note that the Elasticsearch client must have permissions for index listing, mapping description, and search queries.

### Populating with data

If you want to populate the DB with some example info, you can run `python ingest.py`.

This will create a `customers` index. In this package, we specify indexes to generate queries against, and we specify `["customers"]`. This is specific to setting up your Elastic index.

## Usage

To use this package, you should first have the LangChain CLI installed:

```shell
pip install -U "langchain-cli[serve]"
```

To create a new LangChain project and install this as the only package, you can do:

```shell
langchain app new my-app --package elastic-query-generator
```

If you want to add this to an existing project, you can just run:

```shell
langchain app add elastic-query-generator
```

And add the following code to your `server.py` file:

```python
from elastic_query_generator.chain import chain as elastic_query_generator_chain

add_routes(app, elastic_query_generator_chain, path="/elastic-query-generator")
```

(Optional) Let's now configure LangSmith.
LangSmith will help us trace, monitor and debug LangChain applications.
LangSmith is currently in private beta; you can sign up [here](https://smith.langchain.com/).
If you don't have access, you can skip this section.

```shell
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project>  # if not specified, defaults to "default"
```

If you are inside this directory, then you can spin up a LangServe instance directly by:

```shell
langchain serve
```

This will start the FastAPI app with a server running locally at
[http://localhost:8000](http://localhost:8000)

We can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)
We can access the playground at [http://127.0.0.1:8000/elastic-query-generator/playground](http://127.0.0.1:8000/elastic-query-generator/playground)

We can access the template from code with:

```python
from langserve.client import RemoteRunnable

runnable = RemoteRunnable("http://localhost:8000/elastic-query-generator")
```
# extraction-anthropic-functions

This template enables [Anthropic function calling](https://python.langchain.com/docs/integrations/chat/anthropic_functions).

This can be used for various tasks, such as extraction or tagging.

The function output schema can be set in `chain.py`.
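For a feel of what such a schema looks like, here is a minimal sketch in the OpenAI-functions style that Anthropic functions emulates; the variable name and fields are illustrative assumptions, not the package's actual `chain.py` contents:

```python
# Illustrative sketch only — not the template's actual code.
schema = {
    "name": "extract_paper_info",
    "description": "Extract the title and author of a paper.",
    "parameters": {
        "type": "object",
        "properties": {
            "title": {"type": "string", "description": "The paper's title"},
            "author": {"type": "string", "description": "The paper's author"},
        },
        "required": ["title", "author"],
    },
}
```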
## Environment Setup

Set the `ANTHROPIC_API_KEY` environment variable to access the Anthropic models.

## Usage

To use this package, you should first have the LangChain CLI installed:

```shell
pip install -U "langchain-cli[serve]"
```

To create a new LangChain project and install this as the only package, you can do:

```shell
langchain app new my-app --package extraction-anthropic-functions
```

If you want to add this to an existing project, you can just run:

```shell
langchain app add extraction-anthropic-functions
```

And add the following code to your `server.py` file:

```python
from extraction_anthropic_functions import chain as extraction_anthropic_functions_chain

add_routes(app, extraction_anthropic_functions_chain, path="/extraction-anthropic-functions")
```

(Optional) Let's now configure LangSmith.
LangSmith will help us trace, monitor and debug LangChain applications.
LangSmith is currently in private beta; you can sign up [here](https://smith.langchain.com/).
If you don't have access, you can skip this section.

```shell
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project>  # if not specified, defaults to "default"
```

If you are inside this directory, then you can spin up a LangServe instance directly by:

```shell
langchain serve
```

This will start the FastAPI app with a server running locally at
[http://localhost:8000](http://localhost:8000)

We can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)
We can access the playground at [http://127.0.0.1:8000/extraction-anthropic-functions/playground](http://127.0.0.1:8000/extraction-anthropic-functions/playground)

We can access the template from code with:

```python
from langserve.client import RemoteRunnable

runnable = RemoteRunnable("http://localhost:8000/extraction-anthropic-functions")
```

By default, the package will extract the title and author of papers from the information you specify in `chain.py`. This template will use `Claude2` by default.
# extraction-openai-functions

This template uses [OpenAI function calling](https://python.langchain.com/docs/modules/chains/how_to/openai_functions) for extraction of structured output from unstructured input text.

The extraction output schema can be set in `chain.py`.

## Environment Setup

Set the `OPENAI_API_KEY` environment variable to access the OpenAI models.

## Usage

To use this package, you should first have the LangChain CLI installed:

```shell
pip install -U "langchain-cli[serve]"
```

To create a new LangChain project and install this as the only package, you can do:

```shell
langchain app new my-app --package extraction-openai-functions
```

If you want to add this to an existing project, you can just run:

```shell
langchain app add extraction-openai-functions
```

And add the following code to your `server.py` file:

```python
from extraction_openai_functions import chain as extraction_openai_functions_chain

add_routes(app, extraction_openai_functions_chain, path="/extraction-openai-functions")
```

(Optional) Let's now configure LangSmith.
LangSmith will help us trace, monitor and debug LangChain applications.
LangSmith is currently in private beta; you can sign up [here](https://smith.langchain.com/).
If you don't have access, you can skip this section.

```shell
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project>  # if not specified, defaults to "default"
```

If you are inside this directory, then you can spin up a LangServe instance directly by:

```shell
langchain serve
```

This will start the FastAPI app with a server running locally at
[http://localhost:8000](http://localhost:8000)

We can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)
We can access the playground at [http://127.0.0.1:8000/extraction-openai-functions/playground](http://127.0.0.1:8000/extraction-openai-functions/playground)

We can access the template from code with:

```python
from langserve.client import RemoteRunnable

runnable = RemoteRunnable("http://localhost:8000/extraction-openai-functions")
```

By default, this package is set to extract the title and author of papers, as specified in the `chain.py` file. The OpenAI LLM is used by default.
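Continuing the snippet above, a hypothetical call might look like this; the `input` key and the exact output shape are assumptions, so verify them in the playground:

```python
# Hypothetical input key ("input") and output shape — verify against the schema.
result = runnable.invoke({"input": "'Attention Is All You Need' by Vaswani et al."})
print(result)  # expected to contain the extracted title and author fields
```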
# guardrails-output-parser

This template uses [guardrails-ai](https://github.com/guardrails-ai/guardrails) to validate LLM output.

The `GuardrailsOutputParser` is set in `chain.py`.

The default example protects against profanity.

## Environment Setup

Set the `OPENAI_API_KEY` environment variable to access the OpenAI models.

## Usage

To use this package, you should first have the LangChain CLI installed:

```shell
pip install -U "langchain-cli[serve]"
```

To create a new LangChain project and install this as the only package, you can do:

```shell
langchain app new my-app --package guardrails-output-parser
```

If you want to add this to an existing project, you can just run:

```shell
langchain app add guardrails-output-parser
```

And add the following code to your `server.py` file:

```python
from guardrails_output_parser import chain as guardrails_output_parser_chain

add_routes(app, guardrails_output_parser_chain, path="/guardrails-output-parser")
```

(Optional) Let's now configure LangSmith.
LangSmith will help us trace, monitor and debug LangChain applications.
LangSmith is currently in private beta; you can sign up [here](https://smith.langchain.com/).
If you don't have access, you can skip this section.

```shell
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project>  # if not specified, defaults to "default"
```

If you are inside this directory, then you can spin up a LangServe instance directly by:

```shell
langchain serve
```

This will start the FastAPI app with a server running locally at
[http://localhost:8000](http://localhost:8000)

We can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)
We can access the playground at [http://127.0.0.1:8000/guardrails-output-parser/playground](http://127.0.0.1:8000/guardrails-output-parser/playground)

We can access the template from code with:

```python
from langserve.client import RemoteRunnable

runnable = RemoteRunnable("http://localhost:8000/guardrails-output-parser")
```

This example protects against profanity, but with Guardrails you can protect against a multitude of other things.

If Guardrails does not find any profanity, then the translated output is returned as is. If Guardrails does find profanity, then an empty string is returned.
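Continuing the snippet above, that behavior can be sketched as follows; the `text` input key is an assumption, so check the playground for the real schema:

```python
# Hypothetical input key ("text") — verify against the template's schema.
print(runnable.invoke({"text": "hello there"}))  # clean input: the (translated) output is returned as is
# A profane input would instead come back as an empty string.
```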
# llama2-functions

This template performs extraction of structured data from unstructured data using a [LLaMA2 model that supports a specified JSON output schema](https://github.com/ggerganov/llama.cpp/blob/master/grammars/README.md).

The extraction schema can be set in `chain.py`.
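For illustration, a toy schema of the kind you might set there — the fields below are assumptions, not the template's defaults:

```python
# Toy example only — not the template's actual schema in chain.py.
schema = {
    "type": "object",
    "properties": {
        "name": {"type": "string"},
        "age": {"type": "integer"},
    },
    "required": ["name", "age"],
}
```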
## Environment Setup

This will use a [LLaMA2-13b model hosted by Replicate](https://replicate.com/andreasjansson/llama-2-13b-chat-gguf/versions).

Ensure that `REPLICATE_API_TOKEN` is set in your environment.

## Usage

To use this package, you should first have the LangChain CLI installed:

```shell
pip install -U "langchain-cli[serve]"
```

To create a new LangChain project and install this as the only package, you can do:

```shell
langchain app new my-app --package llama2-functions
```

If you want to add this to an existing project, you can just run:

```shell
langchain app add llama2-functions
```

And add the following code to your `server.py` file:

```python
from llama2_functions import chain as llama2_functions_chain

add_routes(app, llama2_functions_chain, path="/llama2-functions")
```

(Optional) Let's now configure LangSmith.
LangSmith will help us trace, monitor and debug LangChain applications.
LangSmith is currently in private beta; you can sign up [here](https://smith.langchain.com/).
If you don't have access, you can skip this section.

```shell
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project>  # if not specified, defaults to "default"
```

If you are inside this directory, then you can spin up a LangServe instance directly by:

```shell
langchain serve
```

This will start the FastAPI app with a server running locally at
[http://localhost:8000](http://localhost:8000)

We can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)
We can access the playground at [http://127.0.0.1:8000/llama2-functions/playground](http://127.0.0.1:8000/llama2-functions/playground)

We can access the template from code with:

```python
from langserve.client import RemoteRunnable

runnable = RemoteRunnable("http://localhost:8000/llama2-functions")
```
# openai-functions-agent

This template creates an agent that uses OpenAI function calling to communicate its decisions on what actions to take.

This example creates an agent that can optionally look up information on the internet using Tavily's search engine.

## Environment Setup

The following environment variables need to be set:

Set the `OPENAI_API_KEY` environment variable to access the OpenAI models.

Set the `TAVILY_API_KEY` environment variable to access Tavily.

## Usage

To use this package, you should first have the LangChain CLI installed:

```shell
pip install -U "langchain-cli[serve]"
```

To create a new LangChain project and install this as the only package, you can do:

```shell
langchain app new my-app --package openai-functions-agent
```

If you want to add this to an existing project, you can just run:

```shell
langchain app add openai-functions-agent
```

And add the following code to your `server.py` file:

```python
from openai_functions_agent import chain as openai_functions_agent_chain

add_routes(app, openai_functions_agent_chain, path="/openai-functions-agent")
```

(Optional) Let's now configure LangSmith.
LangSmith will help us trace, monitor and debug LangChain applications.
LangSmith is currently in private beta; you can sign up [here](https://smith.langchain.com/).
If you don't have access, you can skip this section.

```shell
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project>  # if not specified, defaults to "default"
```

If you are inside this directory, then you can spin up a LangServe instance directly by:

```shell
langchain serve
```

This will start the FastAPI app with a server running locally at
[http://localhost:8000](http://localhost:8000)

We can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)
We can access the playground at [http://127.0.0.1:8000/openai-functions-agent/playground](http://127.0.0.1:8000/openai-functions-agent/playground)

We can access the template from code with:

```python
from langserve.client import RemoteRunnable

runnable = RemoteRunnable("http://localhost:8000/openai-functions-agent")
```
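Continuing the snippet above, a hypothetical call — the `input` key is an assumption, so check the template's schema in the playground:

```python
# Hypothetical input key ("input") — verify against the template's schema.
print(runnable.invoke({"input": "What is the weather in San Francisco today?"}))
```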
# pirate-speak

This template converts user input into pirate speak.

## Environment Setup

Set the `OPENAI_API_KEY` environment variable to access the OpenAI models.

## Usage

To use this package, you should first have the LangChain CLI installed:

```shell
pip install -U "langchain-cli[serve]"
```

To create a new LangChain project and install this as the only package, you can do:

```shell
langchain app new my-app --package pirate-speak
```

If you want to add this to an existing project, you can just run:

```shell
langchain app add pirate-speak
```

And add the following code to your `server.py` file:

```python
from pirate_speak import chain as pirate_speak_chain

add_routes(app, pirate_speak_chain, path="/pirate-speak")
```

(Optional) Let's now configure LangSmith.
LangSmith will help us trace, monitor and debug LangChain applications.
LangSmith is currently in private beta; you can sign up [here](https://smith.langchain.com/).
If you don't have access, you can skip this section.

```shell
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project>  # if not specified, defaults to "default"
```

If you are inside this directory, then you can spin up a LangServe instance directly by:

```shell
langchain serve
```

This will start the FastAPI app with a server running locally at
[http://localhost:8000](http://localhost:8000)

We can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)
We can access the playground at [http://127.0.0.1:8000/pirate-speak/playground](http://127.0.0.1:8000/pirate-speak/playground)

We can access the template from code with:

```python
from langserve.client import RemoteRunnable

runnable = RemoteRunnable("http://localhost:8000/pirate-speak")
```
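Continuing the snippet above, a hypothetical call (the `text` input key is an assumption — check the playground for the real schema):

```python
# Hypothetical input key ("text") — verify against the template's schema.
print(runnable.invoke({"text": "Hi there, how are you today?"}))
# Expected: the same greeting rendered in pirate speak.
```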
# plate-chain

This template enables parsing of data from laboratory plates.

In the context of biochemistry or molecular biology, laboratory plates are commonly used tools to hold samples in a grid-like format.

This can parse the resulting data into a standardized (e.g., JSON) format for further processing.

## Environment Setup

Set the `OPENAI_API_KEY` environment variable to access the OpenAI models.

## Usage

To utilize plate-chain, you must have the LangChain CLI installed:

```shell
pip install -U "langchain-cli[serve]"
```

Creating a new LangChain project and installing plate-chain as the only package can be done with:

```shell
langchain app new my-app --package plate-chain
```

If you wish to add this to an existing project, simply run:

```shell
langchain app add plate-chain
```

Then add the following code to your `server.py` file:

```python
from plate_chain import chain as plate_chain_chain

add_routes(app, plate_chain_chain, path="/plate-chain")
```

(Optional) For configuring LangSmith, which helps trace, monitor and debug LangChain applications, use the following code:

```shell
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project>  # if not specified, defaults to "default"
```

If you're in this directory, you can start a LangServe instance directly by:

```shell
langchain serve
```

This starts the FastAPI app with a server running locally at
[http://localhost:8000](http://localhost:8000)

All templates can be viewed at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)
Access the playground at [http://127.0.0.1:8000/plate-chain/playground](http://127.0.0.1:8000/plate-chain/playground)

You can access the template from code with:

```python
from langserve.client import RemoteRunnable

runnable = RemoteRunnable("http://localhost:8000/plate-chain")
```
# rag-aws-bedrock

This template is designed to connect with the AWS Bedrock service, a managed service that offers a set of foundation models.

It primarily uses `Anthropic Claude` for text generation and `Amazon Titan` for text embedding, and utilizes FAISS as the vectorstore.

For additional context on the RAG pipeline, refer to [this notebook](https://github.com/aws-samples/amazon-bedrock-workshop/blob/main/03_QuestionAnswering/01_qa_w_rag_claude.ipynb).

## Environment Setup

Before you can use this package, ensure that you have configured `boto3` to work with your AWS account.

For details on how to set up and configure `boto3`, visit [this page](https://boto3.amazonaws.com/v1/documentation/api/latest/guide/quickstart.html#configuration).

In addition, you need to install the `faiss-cpu` package to work with the FAISS vector store:

```bash
pip install faiss-cpu
```

You should also set the following environment variables to reflect your AWS profile and region (if you're not using the `default` AWS profile and `us-east-1` region), as shown below:

* `AWS_DEFAULT_REGION`
* `AWS_PROFILE`
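For example, exporting the assumed defaults explicitly:

```shell
export AWS_DEFAULT_REGION=us-east-1
export AWS_PROFILE=default
```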
## Usage

First, install the LangChain CLI:

```shell
pip install -U "langchain-cli[serve]"
```

To create a new LangChain project and install this as the only package:

```shell
langchain app new my-app --package rag-aws-bedrock
```

To add this package to an existing project:

```shell
langchain app add rag-aws-bedrock
```

Then add the following code to your `server.py` file:

```python
from rag_aws_bedrock import chain as rag_aws_bedrock_chain

add_routes(app, rag_aws_bedrock_chain, path="/rag-aws-bedrock")
```

(Optional) If you have access to LangSmith, you can configure it to trace, monitor, and debug LangChain applications. If you don't have access, you can skip this section.

```shell
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project>  # if not specified, defaults to "default"
```

If you are inside this directory, you can spin up a LangServe instance directly by:

```shell
langchain serve
```

This will start the FastAPI app with a server running locally at [http://localhost:8000](http://localhost:8000)

You can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs) and access the playground at [http://127.0.0.1:8000/rag-aws-bedrock/playground](http://127.0.0.1:8000/rag-aws-bedrock/playground).

You can access the template from code with:

```python
from langserve.client import RemoteRunnable

runnable = RemoteRunnable("http://localhost:8000/rag-aws-bedrock")
```
# rag-chroma-private

This template performs RAG with no reliance on external APIs.

It utilizes Ollama for the LLM, GPT4All for embeddings, and Chroma for the vectorstore.

The vectorstore is created in `chain.py` and by default indexes a [popular blog post on Agents](https://lilianweng.github.io/posts/2023-06-23-agent/) for question-answering.

## Environment Setup

To set up the environment, you need to download Ollama.

Follow the instructions [here](https://python.langchain.com/docs/integrations/chat/ollama).

You can choose the desired LLM with Ollama.

This template uses `llama2:7b-chat`, which can be accessed using `ollama pull llama2:7b-chat`.
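That is, once Ollama is installed:

```shell
ollama pull llama2:7b-chat
```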
There are many other options available [here](https://ollama.ai/library).

This package also uses [GPT4All](https://python.langchain.com/docs/integrations/text_embedding/gpt4all) embeddings.

## Usage

To use this package, you should first have the LangChain CLI installed:

```shell
pip install -U "langchain-cli[serve]"
```

To create a new LangChain project and install this as the only package, you can do:

```shell
langchain app new my-app --package rag-chroma-private
```

If you want to add this to an existing project, you can just run:

```shell
langchain app add rag-chroma-private
```

And add the following code to your `server.py` file:

```python
from rag_chroma_private import chain as rag_chroma_private_chain

add_routes(app, rag_chroma_private_chain, path="/rag-chroma-private")
```

(Optional) Let's now configure LangSmith. LangSmith will help us trace, monitor and debug LangChain applications. LangSmith is currently in private beta; you can sign up [here](https://smith.langchain.com/). If you don't have access, you can skip this section.

```shell
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project>  # if not specified, defaults to "default"
```

If you are inside this directory, then you can spin up a LangServe instance directly by:

```shell
langchain serve
```

This will start the FastAPI app with a server running locally at
[http://localhost:8000](http://localhost:8000)

We can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)
We can access the playground at [http://127.0.0.1:8000/rag-chroma-private/playground](http://127.0.0.1:8000/rag-chroma-private/playground)

We can access the template from code with:

```python
from langserve.client import RemoteRunnable

runnable = RemoteRunnable("http://localhost:8000/rag-chroma-private")
```

The package will create and add documents to the vector database in `chain.py`. By default, it will load a popular blog post on agents. However, you can choose from a large number of document loaders [here](https://python.langchain.com/docs/integrations/document_loaders).
# rag-chroma

This template performs RAG using Chroma and OpenAI.

The vectorstore is created in `chain.py` and by default indexes a [popular blog post on Agents](https://lilianweng.github.io/posts/2023-06-23-agent/) for question-answering.

## Environment Setup

Set the `OPENAI_API_KEY` environment variable to access the OpenAI models.

## Usage

To use this package, you should first have the LangChain CLI installed:

```shell
pip install -U "langchain-cli[serve]"
```

To create a new LangChain project and install this as the only package, you can do:

```shell
langchain app new my-app --package rag-chroma
```

If you want to add this to an existing project, you can just run:

```shell
langchain app add rag-chroma
```

And add the following code to your `server.py` file:

```python
from rag_chroma import chain as rag_chroma_chain

add_routes(app, rag_chroma_chain, path="/rag-chroma")
```

(Optional) Let's now configure LangSmith.
LangSmith will help us trace, monitor and debug LangChain applications.
LangSmith is currently in private beta; you can sign up [here](https://smith.langchain.com/).
If you don't have access, you can skip this section.

```shell
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project>  # if not specified, defaults to "default"
```

If you are inside this directory, then you can spin up a LangServe instance directly by:

```shell
langchain serve
```

This will start the FastAPI app with a server running locally at
[http://localhost:8000](http://localhost:8000)

We can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)
We can access the playground at [http://127.0.0.1:8000/rag-chroma/playground](http://127.0.0.1:8000/rag-chroma/playground)

We can access the template from code with:

```python
from langserve.client import RemoteRunnable

runnable = RemoteRunnable("http://localhost:8000/rag-chroma")
```
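Continuing the snippet above, a hypothetical question about the indexed blog post; whether the chain takes a bare string or a keyed dict is an assumption, so check the playground:

```python
# Hypothetical input shape — verify against the template's schema.
print(runnable.invoke("What is task decomposition?"))
```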
# rag-codellama-fireworks

This template performs RAG on a codebase.

It uses codellama-34b hosted by Fireworks' [LLM inference API](https://blog.fireworks.ai/accelerating-code-completion-with-fireworks-fast-llm-inference-f4e8b5ec534a).

## Environment Setup

Set the `FIREWORKS_API_KEY` environment variable to access the Fireworks models.

You can obtain it from [here](https://app.fireworks.ai/login?callbackURL=https://app.fireworks.ai).

## Usage

To use this package, you should first have the LangChain CLI installed:

```shell
pip install -U "langchain-cli[serve]"
```

To create a new LangChain project and install this as the only package, you can do:

```shell
langchain app new my-app --package rag-codellama-fireworks
```

If you want to add this to an existing project, you can just run:

```shell
langchain app add rag-codellama-fireworks
```

And add the following code to your `server.py` file:

```python
from rag_codellama_fireworks import chain as rag_codellama_fireworks_chain

add_routes(app, rag_codellama_fireworks_chain, path="/rag-codellama-fireworks")
```

(Optional) Let's now configure LangSmith.
LangSmith will help us trace, monitor and debug LangChain applications.
LangSmith is currently in private beta; you can sign up [here](https://smith.langchain.com/).
If you don't have access, you can skip this section.

```shell
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project>  # if not specified, defaults to "default"
```

If you are inside this directory, then you can spin up a LangServe instance directly by:

```shell
langchain serve
```

This will start the FastAPI app with a server running locally at
[http://localhost:8000](http://localhost:8000)

We can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)
We can access the playground at [http://127.0.0.1:8000/rag-codellama-fireworks/playground](http://127.0.0.1:8000/rag-codellama-fireworks/playground)

We can access the template from code with:

```python
from langserve.client import RemoteRunnable

runnable = RemoteRunnable("http://localhost:8000/rag-codellama-fireworks")
```
# rag-conversation

This template is used for [conversational](https://python.langchain.com/docs/expression_language/cookbook/retrieval#conversational-retrieval-chain) [retrieval](https://python.langchain.com/docs/use_cases/question_answering/), which is one of the most popular LLM use-cases.

It passes both a conversation history and retrieved documents into an LLM for synthesis.

## Environment Setup

This template uses Pinecone as a vectorstore and requires that `PINECONE_API_KEY`, `PINECONE_ENVIRONMENT`, and `PINECONE_INDEX` are set.

Set the `OPENAI_API_KEY` environment variable to access the OpenAI models.

## Usage

To use this package, you should first have the LangChain CLI installed:

```shell
pip install -U "langchain-cli[serve]"
```

To create a new LangChain project and install this as the only package, you can do:

```shell
langchain app new my-app --package rag-conversation
```

If you want to add this to an existing project, you can just run:

```shell
langchain app add rag-conversation
```

And add the following code to your `server.py` file:

```python
from rag_conversation import chain as rag_conversation_chain

add_routes(app, rag_conversation_chain, path="/rag-conversation")
```

(Optional) Let's now configure LangSmith.
LangSmith will help us trace, monitor and debug LangChain applications.
LangSmith is currently in private beta; you can sign up [here](https://smith.langchain.com/).
If you don't have access, you can skip this section.

```shell
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project>  # if not specified, defaults to "default"
```

If you are inside this directory, then you can spin up a LangServe instance directly by:

```shell
langchain serve
```

This will start the FastAPI app with a server running locally at
[http://localhost:8000](http://localhost:8000)

We can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)
We can access the playground at [http://127.0.0.1:8000/rag-conversation/playground](http://127.0.0.1:8000/rag-conversation/playground)

We can access the template from code with:

```python
from langserve.client import RemoteRunnable

runnable = RemoteRunnable("http://localhost:8000/rag-conversation")
```
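Continuing the snippet above, a hypothetical conversational turn; the `question` and `chat_history` keys are assumptions, so verify them in the playground:

```python
# Hypothetical input keys — verify against the template's schema.
print(runnable.invoke({"question": "How do agents use memory?", "chat_history": []}))
```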
# rag-elasticsearch

This template performs RAG using Elasticsearch.

It relies on the sentence transformer `MiniLM-L6-v2` for embedding passages and questions.

## Environment Setup

Set the `OPENAI_API_KEY` environment variable to access the OpenAI models.

To connect to your Elasticsearch instance, use the following environment variables:

```bash
export ELASTIC_CLOUD_ID=<CLOUD_ID>
export ELASTIC_USERNAME=<CLOUD_USERNAME>
export ELASTIC_PASSWORD=<CLOUD_PASSWORD>
```

For local development with Docker, use:

```bash
export ES_URL="http://localhost:9200"
```

## Usage

To use this package, you should first have the LangChain CLI installed:

```shell
pip install -U "langchain-cli[serve]"
```

To create a new LangChain project and install this as the only package, you can do:

```shell
langchain app new my-app --package rag-elasticsearch
```

If you want to add this to an existing project, you can just run:

```shell
langchain app add rag-elasticsearch
```

And add the following code to your `server.py` file:

```python
from rag_elasticsearch import chain as rag_elasticsearch_chain

add_routes(app, rag_elasticsearch_chain, path="/rag-elasticsearch")
```

(Optional) Let's now configure LangSmith.
LangSmith will help us trace, monitor and debug LangChain applications.
LangSmith is currently in private beta; you can sign up [here](https://smith.langchain.com/).
If you don't have access, you can skip this section.

```shell
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project>  # if not specified, defaults to "default"
```

If you are inside this directory, then you can spin up a LangServe instance directly by:

```shell
langchain serve
```

This will start the FastAPI app with a server running locally at
[http://localhost:8000](http://localhost:8000)

We can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)
We can access the playground at [http://127.0.0.1:8000/rag-elasticsearch/playground](http://127.0.0.1:8000/rag-elasticsearch/playground)

We can access the template from code with:

```python
from langserve.client import RemoteRunnable

runnable = RemoteRunnable("http://localhost:8000/rag-elasticsearch")
```

For loading the fictional workplace documents, run the following command from the root of this repository:

```bash
python ./data/load_documents.py
```

However, you can choose from a large number of document loaders [here](https://python.langchain.com/docs/integrations/document_loaders).
# rag-fusion

This template enables RAG fusion using a re-implementation of the project found [here](https://github.com/Raudaschl/rag-fusion).

It performs multiple query generation and Reciprocal Rank Fusion to re-rank search results.
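For intuition, the Reciprocal Rank Fusion step can be sketched as below. This is an illustrative re-implementation, not this package's actual code; `k=60` is the constant commonly used in the RRF literature:

```python
def reciprocal_rank_fusion(ranked_lists, k=60):
    """Fuse ranked lists of document IDs into a single re-ranked list.

    Each document scores sum(1 / (k + rank)) over the lists it appears in,
    so documents ranked highly by several query variants rise to the top.
    """
    scores = {}
    for docs in ranked_lists:
        for rank, doc_id in enumerate(docs, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

# Results for three generated query variants:
print(reciprocal_rank_fusion([["a", "b", "c"], ["b", "a"], ["c", "b"]]))
# -> ['b', 'a', 'c']
```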
## Environment Setup

Set the `OPENAI_API_KEY` environment variable to access the OpenAI models.

## Usage

To use this package, you should first have the LangChain CLI installed:

```shell
pip install -U "langchain-cli[serve]"
```

To create a new LangChain project and install this as the only package, you can do:

```shell
langchain app new my-app --package rag-fusion
```

If you want to add this to an existing project, you can just run:

```shell
langchain app add rag-fusion
```

And add the following code to your `server.py` file:

```python
from rag_fusion import chain as rag_fusion_chain

add_routes(app, rag_fusion_chain, path="/rag-fusion")
```

(Optional) Let's now configure LangSmith.
LangSmith will help us trace, monitor and debug LangChain applications.
LangSmith is currently in private beta; you can sign up [here](https://smith.langchain.com/).
If you don't have access, you can skip this section.

```shell
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project>  # if not specified, defaults to "default"
```

If you are inside this directory, then you can spin up a LangServe instance directly by:

```shell
langchain serve
```

This will start the FastAPI app with a server running locally at
[http://localhost:8000](http://localhost:8000)

We can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)
We can access the playground at [http://127.0.0.1:8000/rag-fusion/playground](http://127.0.0.1:8000/rag-fusion/playground)

We can access the template from code with:

```python
from langserve.client import RemoteRunnable

runnable = RemoteRunnable("http://localhost:8000/rag-fusion")
```
@ -1,51 +1,69 @@

# RAG Pinecone multi query

This template performs RAG using Pinecone and OpenAI with the [multi-query retriever](https://python.langchain.com/docs/modules/data_connection/retrievers/MultiQueryRetriever).

# rag-pinecone-multi-query

This will use an LLM to generate multiple queries from different perspectives for a given user input query.

This template performs RAG using Pinecone and OpenAI with a multi-query retriever.

It uses an LLM to generate multiple queries from different perspectives based on the user's input query.

For each query, it retrieves a set of relevant documents and takes the unique union across all queries for answer synthesis.
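
Conceptually, the retrieval step looks like the sketch below, which uses LangChain's `MultiQueryRetriever` over an existing Pinecone index. The index name is a placeholder and this is not the template's exact code; see the package's `chain.py` for the real chain:

```python
import os

import pinecone
from langchain.chat_models import ChatOpenAI
from langchain.embeddings import OpenAIEmbeddings
from langchain.retrievers.multi_query import MultiQueryRetriever
from langchain.vectorstores import Pinecone

pinecone.init(
    api_key=os.environ["PINECONE_API_KEY"],
    environment=os.environ["PINECONE_ENVIRONMENT"],
)

# Connect to an existing index (placeholder name)
vectorstore = Pinecone.from_existing_index("langchain-test-index", OpenAIEmbeddings())

# The LLM writes several alternative phrasings of the user's question;
# results across all phrasings are deduplicated into a single union of documents.
retriever = MultiQueryRetriever.from_llm(
    retriever=vectorstore.as_retriever(),
    llm=ChatOpenAI(temperature=0),
)
docs = retriever.get_relevant_documents("What are the different types of agent memory?")
```
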

## Pinecone

This template uses Pinecone as a vectorstore and requires that `PINECONE_API_KEY`, `PINECONE_ENVIRONMENT`, and `PINECONE_INDEX` are set.

## LLM

Be sure that `OPENAI_API_KEY` is set in order to use the OpenAI models.

## App

Example `server.py`:

```python
from fastapi import FastAPI
from langserve import add_routes
from rag_pinecone_multi_query.chain import chain

app = FastAPI()

# Edit this to add the chain you want to add
add_routes(app, chain, path="rag_pinecone_multi_query")

if __name__ == "__main__":
    import uvicorn

    uvicorn.run(app, host="0.0.0.0", port=8001)
```

Run:

```shell
python app/server.py
```

Check the endpoint at http://0.0.0.0:8001/docs

See `rag_pinecone_multi_query.ipynb` for example usage:

```python
from langserve.client import RemoteRunnable

rag_app_pinecone = RemoteRunnable("http://0.0.0.0:8001/rag_pinecone_multi_query")
rag_app_pinecone.invoke("What are the different types of agent memory")
```

## Environment Setup

This template uses Pinecone as a vectorstore and requires that `PINECONE_API_KEY`, `PINECONE_ENVIRONMENT`, and `PINECONE_INDEX` are set.

Set the `OPENAI_API_KEY` environment variable to access the OpenAI models.

## Usage

To use this package, you should first install the LangChain CLI:

```shell
pip install -U "langchain-cli[serve]"
```

To create a new LangChain project and install this package, do:

```shell
langchain app new my-app --package rag-pinecone-multi-query
```

To add this package to an existing project, run:

```shell
langchain app add rag-pinecone-multi-query
```

And add the following code to your `server.py` file:

```python
from rag_pinecone_multi_query import chain as rag_pinecone_multi_query_chain

add_routes(app, rag_pinecone_multi_query_chain, path="/rag-pinecone-multi-query")
```

(Optional) Now, let's configure LangSmith. LangSmith will help us trace, monitor, and debug LangChain applications. LangSmith is currently in private beta; you can sign up [here](https://smith.langchain.com/). If you don't have access, you can skip this section.

```shell
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project>  # if not specified, defaults to "default"
```

If you are inside this directory, then you can spin up a LangServe instance directly by:

```shell
langchain serve
```

This will start the FastAPI app with a server running locally at [http://localhost:8000](http://localhost:8000)

You can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)
You can access the playground at [http://127.0.0.1:8000/rag-pinecone-multi-query/playground](http://127.0.0.1:8000/rag-pinecone-multi-query/playground)

To access the template from code, use:

```python
from langserve.client import RemoteRunnable

runnable = RemoteRunnable("http://localhost:8000/rag-pinecone-multi-query")
```

@ -1,23 +1,73 @@

# RAG Pinecone Cohere Re-rank

This template performs RAG using Pinecone and OpenAI, with [Cohere to perform re-ranking](https://python.langchain.com/docs/integrations/retrievers/cohere-reranker) on returned documents.

[Re-ranking](https://docs.cohere.com/docs/reranking) provides a way to rank retrieved documents using specified filters or criteria.

# rag-pinecone-rerank

This template performs RAG using Pinecone and OpenAI along with [Cohere to perform re-ranking](https://txt.cohere.com/rerank/) on returned documents.

Re-ranking provides a way to rank retrieved documents using specified filters or criteria.
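
In LangChain terms, that re-ranking step can be expressed as a compression retriever. A minimal sketch (the index name is a placeholder; this is not the template's exact code):

```python
import os

import pinecone
from langchain.embeddings import OpenAIEmbeddings
from langchain.retrievers import ContextualCompressionRetriever
from langchain.retrievers.document_compressors import CohereRerank
from langchain.vectorstores import Pinecone

pinecone.init(
    api_key=os.environ["PINECONE_API_KEY"],
    environment=os.environ["PINECONE_ENVIRONMENT"],
)

vectorstore = Pinecone.from_existing_index("langchain-test-index", OpenAIEmbeddings())

# Over-fetch from Pinecone, then let Cohere's ReRank order results by relevance
retriever = ContextualCompressionRetriever(
    base_compressor=CohereRerank(),  # reads COHERE_API_KEY from the environment
    base_retriever=vectorstore.as_retriever(search_kwargs={"k": 10}),
)
docs = retriever.get_relevant_documents("How does re-ranking improve RAG?")
```
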

## Pinecone

This connects to a hosted Pinecone vectorstore.

Be sure that you have set a few env variables in `chain.py`:

* `PINECONE_API_KEY`
* `PINECONE_ENV`
* `index_name`

## LLM

Be sure that `OPENAI_API_KEY` is set in order to use the OpenAI models.

## Cohere

Be sure that `COHERE_API_KEY` is set in order to use the ReRank endpoint.

## Environment Setup

This template uses Pinecone as a vectorstore and requires that `PINECONE_API_KEY`, `PINECONE_ENVIRONMENT`, and `PINECONE_INDEX` are set.

Set the `OPENAI_API_KEY` environment variable to access the OpenAI models.

Set the `COHERE_API_KEY` environment variable to access the Cohere ReRank.

## Usage

To use this package, you should first have the LangChain CLI installed:

```shell
pip install -U "langchain-cli[serve]"
```

To create a new LangChain project and install this as the only package, you can do:

```shell
langchain app new my-app --package rag-pinecone-rerank
```

If you want to add this to an existing project, you can just run:

```shell
langchain app add rag-pinecone-rerank
```

And add the following code to your `server.py` file:

```python
from rag_pinecone_rerank import chain as rag_pinecone_rerank_chain

add_routes(app, rag_pinecone_rerank_chain, path="/rag-pinecone-rerank")
```

(Optional) Let's now configure LangSmith.
LangSmith will help us trace, monitor and debug LangChain applications.
LangSmith is currently in private beta; you can sign up [here](https://smith.langchain.com/).
If you don't have access, you can skip this section.

```shell
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project>  # if not specified, defaults to "default"
```

If you are inside this directory, then you can spin up a LangServe instance directly by:

```shell
langchain serve
```

This will start the FastAPI app with a server running locally at
[http://localhost:8000](http://localhost:8000)

We can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)
We can access the playground at [http://127.0.0.1:8000/rag-pinecone-rerank/playground](http://127.0.0.1:8000/rag-pinecone-rerank/playground)

We can access the template from code with:

```python
from langserve.client import RemoteRunnable

runnable = RemoteRunnable("http://localhost:8000/rag-pinecone-rerank")
```

@ -1,17 +1,69 @@

# RAG Pinecone

# rag-pinecone

This template performs RAG using Pinecone and OpenAI.

## Pinecone

This connects to a hosted Pinecone vectorstore.

Be sure that you have set a few env variables in `chain.py`:

* `PINECONE_API_KEY`
* `PINECONE_ENV`
* `index_name`

## LLM

Be sure that `OPENAI_API_KEY` is set in order to use the OpenAI models.

## Environment Setup

This template uses Pinecone as a vectorstore and requires that `PINECONE_API_KEY`, `PINECONE_ENVIRONMENT`, and `PINECONE_INDEX` are set.

Set the `OPENAI_API_KEY` environment variable to access the OpenAI models.
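
For example, in a shell session (placeholder values shown):

```shell
export PINECONE_API_KEY=<your-pinecone-api-key>
export PINECONE_ENVIRONMENT=<your-pinecone-environment>  # e.g. gcp-starter
export PINECONE_INDEX=<your-index-name>
export OPENAI_API_KEY=<your-openai-api-key>
```
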

## Usage

To use this package, you should first have the LangChain CLI installed:

```shell
pip install -U "langchain-cli[serve]"
```

To create a new LangChain project and install this as the only package, you can do:

```shell
langchain app new my-app --package rag-pinecone
```

If you want to add this to an existing project, you can just run:

```shell
langchain app add rag-pinecone
```

And add the following code to your `server.py` file:

```python
from rag_pinecone import chain as rag_pinecone_chain

add_routes(app, rag_pinecone_chain, path="/rag-pinecone")
```

(Optional) Let's now configure LangSmith.
LangSmith will help us trace, monitor and debug LangChain applications.
LangSmith is currently in private beta; you can sign up [here](https://smith.langchain.com/).
If you don't have access, you can skip this section.

```shell
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project>  # if not specified, defaults to "default"
```

If you are inside this directory, then you can spin up a LangServe instance directly by:

```shell
langchain serve
```

This will start the FastAPI app with a server running locally at
[http://localhost:8000](http://localhost:8000)

We can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)
We can access the playground at [http://127.0.0.1:8000/rag-pinecone/playground](http://127.0.0.1:8000/rag-pinecone/playground)

We can access the template from code with:

```python
from langserve.client import RemoteRunnable

runnable = RemoteRunnable("http://localhost:8000/rag-pinecone")
```

@ -1,47 +1,75 @@

# Semi structured RAG

This template performs RAG on semi-structured data (e.g., a PDF with text and tables).

See this [blog post](https://langchain-blog.ghost.io/ghost/#/editor/post/652dc74e0633850001e977d4) for useful background context.

# rag-semi-structured

This template performs RAG on semi-structured data, such as a PDF with text and tables.

## Environment Setup

Set the `OPENAI_API_KEY` environment variable to access the OpenAI models.

## Data loading

We use [partition_pdf](https://unstructured-io.github.io/unstructured/bricks/partition.html#partition-pdf) from Unstructured to extract both table and text elements.

This uses [Unstructured](https://unstructured-io.github.io/unstructured/) for PDF parsing, which requires some system-level package installations.

On Mac, you can install the necessary packages with the following:

```shell
brew install tesseract poppler
```
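
As an illustration of what the ingestion does, a minimal `partition_pdf` call looks like the sketch below. The filename and option values are illustrative; see the package's ingestion code for the exact settings:

```python
from unstructured.partition.pdf import partition_pdf

# Partition a PDF into typed elements (Title, NarrativeText, Table, ...)
elements = partition_pdf(
    filename="example.pdf",      # hypothetical input file
    infer_table_structure=True,  # keep table structure rather than flat text
)

tables = [el for el in elements if el.category == "Table"]
texts = [el for el in elements if el.category != "Table"]
print(f"{len(tables)} table elements, {len(texts)} text elements")
```
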

## Chroma

[Chroma](https://python.langchain.com/docs/integrations/vectorstores/chroma) is an open-source vector database.

This template will create and add documents to the vector database in `chain.py`.

These documents can be loaded from [many sources](https://python.langchain.com/docs/integrations/document_loaders).

## LLM

Be sure that `OPENAI_API_KEY` is set in order to use the OpenAI models.

## Adding the template

Create your LangServe app:

```shell
langchain app new my-app
cd my-app
```

Add template:

```shell
langchain app add rag-semi-structured
```

## Usage

To use this package, you should first have the LangChain CLI installed:

```shell
pip install -U "langchain-cli[serve]"
```

To create a new LangChain project and install this as the only package, you can do:

```shell
langchain app new my-app --package rag-semi-structured
```

If you want to add this to an existing project, you can just run:

```shell
langchain app add rag-semi-structured
```

And add the following code to your `server.py` file:

```python
from rag_semi_structured import chain as rag_semi_structured_chain

add_routes(app, rag_semi_structured_chain, path="/rag-semi-structured")
```

(Optional) Let's now configure LangSmith.
LangSmith will help us trace, monitor and debug LangChain applications.
LangSmith is currently in private beta; you can sign up [here](https://smith.langchain.com/).
If you don't have access, you can skip this section.

```shell
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project>  # if not specified, defaults to "default"
```

Start server:

If you are inside this directory, then you can spin up a LangServe instance directly by:

```shell
langchain serve
```

This will start the FastAPI app with a server running locally at
[http://localhost:8000](http://localhost:8000)

We can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)
We can access the playground at [http://127.0.0.1:8000/rag-semi-structured/playground](http://127.0.0.1:8000/rag-semi-structured/playground)

We can access the template from code with:

```python
from langserve.client import RemoteRunnable

runnable = RemoteRunnable("http://localhost:8000/rag-semi-structured")
```

See the Jupyter notebook `rag_semi_structured` for various ways to connect to the template.

For more details on how to connect to the template, refer to the Jupyter notebook `rag_semi_structured`.

@ -1,16 +1,71 @@

# RAG Weaviate

This template performs RAG using Weaviate and OpenAI.

## Weaviate

This connects to a hosted Weaviate vectorstore. Be sure that you have set a few env variables in `chain.py`.

## LLM

Be sure that `OPENAI_API_KEY` is set in order to use the OpenAI models.

# rag-weaviate

This template performs RAG with Weaviate.

## Environment Setup

Set the `OPENAI_API_KEY` environment variable to access the OpenAI models.

Also, ensure the following environment variables are set:

* `WEAVIATE_ENVIRONMENT`
* `WEAVIATE_API_KEY`
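
To sanity-check those credentials, a minimal connection sketch (the cluster URL and index name are placeholders; this is not the template's exact code):

```python
import os

import weaviate
from langchain.vectorstores import Weaviate

client = weaviate.Client(
    url="https://my-cluster.weaviate.network",  # placeholder cluster URL
    auth_client_secret=weaviate.AuthApiKey(api_key=os.environ["WEAVIATE_API_KEY"]),
)
# Assumes a class named "LangChain" holding documents under the "text" property
vectorstore = Weaviate(client, index_name="LangChain", text_key="text")
docs = vectorstore.similarity_search("What is task decomposition?", k=4)
```
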

## Usage

To use this package, you should first have the LangChain CLI installed:

```shell
pip install -U "langchain-cli[serve]"
```

To create a new LangChain project and install this as the only package, you can do:

```shell
langchain app new my-app --package rag-weaviate
```

If you want to add this to an existing project, you can just run:

```shell
langchain app add rag-weaviate
```

And add the following code to your `server.py` file:

```python
from rag_weaviate import chain as rag_weaviate_chain

add_routes(app, rag_weaviate_chain, path="/rag-weaviate")
```

(Optional) Let's now configure LangSmith.
LangSmith will help us trace, monitor and debug LangChain applications.
LangSmith is currently in private beta; you can sign up [here](https://smith.langchain.com/).
If you don't have access, you can skip this section.

```shell
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project>  # if not specified, defaults to "default"
```

If you are inside this directory, then you can spin up a LangServe instance directly by:

```shell
langchain serve
```

This will start the FastAPI app with a server running locally at
[http://localhost:8000](http://localhost:8000)

We can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)
We can access the playground at [http://127.0.0.1:8000/rag-weaviate/playground](http://127.0.0.1:8000/rag-weaviate/playground)

We can access the template from code with:

```python
from langserve.client import RemoteRunnable

runnable = RemoteRunnable("http://localhost:8000/rag-weaviate")
```

@ -1,7 +1,66 @@

# Rewrite-Retrieve-Read

**Rewrite-Retrieve-Read** is a method proposed in the paper [Query Rewriting for Retrieval-Augmented Large Language Models](https://arxiv.org/pdf/2305.14283.pdf)

# rewrite_retrieve_read

> Because the original query can not be always optimal to retrieve for the LLM, especially in the real world... we first prompt an LLM to rewrite the queries, then conduct retrieval-augmented reading

This template implements a method for query transformation (re-writing) in the paper [Query Rewriting for Retrieval-Augmented Large Language Models](https://arxiv.org/pdf/2305.14283.pdf) to optimize for RAG.

We show how you can easily do that with LangChain Expression Language.
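
A minimal LCEL sketch of the rewrite step (the prompt wording here is illustrative; the package's `chain.py` has the actual prompt and the full rewrite-retrieve-read pipeline):

```python
from langchain.chat_models import ChatOpenAI
from langchain.prompts import ChatPromptTemplate
from langchain.schema.output_parser import StrOutputParser

rewrite_prompt = ChatPromptTemplate.from_template(
    "Provide a better search query for a web search engine to answer the "
    "given question.\n\nQuestion: {question}\nAnswer:"
)

# Query rewriting: the LLM turns the raw question into a sharper search query
rewriter = rewrite_prompt | ChatOpenAI(temperature=0) | StrOutputParser()

better_query = rewriter.invoke({"question": "what do people think of langchain?"})
# `better_query` then goes to the retriever, and its results feed the
# reading (answer-synthesis) prompt.
```
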

## Environment Setup

Set the `OPENAI_API_KEY` environment variable to access the OpenAI models.

## Usage

To use this package, you should first have the LangChain CLI installed:

```shell
pip install -U "langchain-cli[serve]"
```

To create a new LangChain project and install this as the only package, you can do:

```shell
langchain app new my-app --package rewrite_retrieve_read
```

If you want to add this to an existing project, you can just run:

```shell
langchain app add rewrite_retrieve_read
```

And add the following code to your `server.py` file:

```python
from rewrite_retrieve_read.chain import chain as rewrite_retrieve_read_chain

add_routes(app, rewrite_retrieve_read_chain, path="/rewrite-retrieve-read")
```

(Optional) Let's now configure LangSmith.
LangSmith will help us trace, monitor and debug LangChain applications.
LangSmith is currently in private beta; you can sign up [here](https://smith.langchain.com/).
If you don't have access, you can skip this section.

```shell
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project>  # if not specified, defaults to "default"
```

If you are inside this directory, then you can spin up a LangServe instance directly by:

```shell
langchain serve
```

This will start the FastAPI app with a server running locally at
[http://localhost:8000](http://localhost:8000)

We can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)
We can access the playground at [http://127.0.0.1:8000/rewrite_retrieve_read/playground](http://127.0.0.1:8000/rewrite_retrieve_read/playground)

We can access the template from code with:

```python
from langserve.client import RemoteRunnable

runnable = RemoteRunnable("http://localhost:8000/rewrite_retrieve_read")
```

@ -1,21 +1,73 @@

# SQL with LLaMA2

This template allows you to chat with a SQL database in natural language using LLaMA2.

It is configured to use [Replicate](https://python.langchain.com/docs/integrations/llms/replicate).

But, it can be adapted to any API that supports LLaMA2, including [Fireworks](https://python.langchain.com/docs/integrations/chat/fireworks) and others.

See related templates `sql-ollama` and `sql-llamacpp` for private, local chat with SQL.

## Set up SQL DB

This template includes an example DB of 2023 NBA rosters.

You can see instructions to build this DB [here](https://github.com/facebookresearch/llama-recipes/blob/main/demo_apps/StructuredLlama.ipynb).

## LLM

This template will use a `Replicate` [hosted version](https://replicate.com/meta/llama-2-13b-chat/versions/f4e2de70d66816a838a89eeeb621910adffb0dd0baba3976c96980970978018d) of LLaMA2.

Be sure that `REPLICATE_API_TOKEN` is set in your environment.

# sql-llama2

This template enables a user to interact with a SQL database using natural language.

It uses LLaMA2-13b hosted by [Replicate](https://python.langchain.com/docs/integrations/llms/replicate), but can be adapted to any API that supports LLaMA2 including [Fireworks](https://python.langchain.com/docs/integrations/chat/fireworks).

The template includes an example database of 2023 NBA rosters.

For more information on how to build this database, see [here](https://github.com/facebookresearch/llama-recipes/blob/main/demo_apps/StructuredLlama.ipynb).

## Environment Setup

Ensure the `REPLICATE_API_TOKEN` is set in your environment.
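
As a quick standalone check that the token works, you can call the same hosted model directly. The model string matches the link above; the prompt and generation parameters are illustrative:

```python
from langchain.llms import Replicate

# Reads REPLICATE_API_TOKEN from the environment
llm = Replicate(
    model=(
        "meta/llama-2-13b-chat:"
        "f4e2de70d66816a838a89eeeb621910adffb0dd0baba3976c96980970978018d"
    ),
    model_kwargs={"temperature": 0.01, "max_length": 500},
)
print(llm("How many players are on an NBA roster?"))
```
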

## Usage

To use this package, you should first have the LangChain CLI installed:

```shell
pip install -U "langchain-cli[serve]"
```

To create a new LangChain project and install this as the only package, you can do:

```shell
langchain app new my-app --package sql-llama2
```

If you want to add this to an existing project, you can just run:

```shell
langchain app add sql-llama2
```

And add the following code to your `server.py` file:

```python
from sql_llama2 import chain as sql_llama2_chain

add_routes(app, sql_llama2_chain, path="/sql-llama2")
```

(Optional) Let's now configure LangSmith.
LangSmith will help us trace, monitor and debug LangChain applications.
LangSmith is currently in private beta; you can sign up [here](https://smith.langchain.com/).
If you don't have access, you can skip this section.

```shell
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project>  # if not specified, defaults to "default"
```

If you are inside this directory, then you can spin up a LangServe instance directly by:

```shell
langchain serve
```

This will start the FastAPI app with a server running locally at
[http://localhost:8000](http://localhost:8000)

We can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)
We can access the playground at [http://127.0.0.1:8000/sql-llama2/playground](http://127.0.0.1:8000/sql-llama2/playground)

We can access the template from code with:

```python
from langserve.client import RemoteRunnable

runnable = RemoteRunnable("http://localhost:8000/sql-llama2")
```

@ -1,18 +1,78 @@

# SQL with LLaMA2 using llama.cpp

This template allows you to chat with a SQL database in natural language in private, using an open source LLM.

## Set up Ollama

Follow instructions [here](https://python.langchain.com/docs/integrations/chat/ollama) to download Ollama.

Also follow instructions to download your LLM of interest:

* This template uses `llama2:13b-chat`
* But you can pick from many LLMs [here](https://ollama.ai/library)

## Set up SQL DB

This template includes an example DB of 2023 NBA rosters.

You can see instructions to build this DB [here](https://github.com/facebookresearch/llama-recipes/blob/main/demo_apps/StructuredLlama.ipynb).

# sql-ollama

This template enables a user to interact with a SQL database using natural language.

It uses [Zephyr-7b](https://huggingface.co/HuggingFaceH4/zephyr-7b-alpha) via [Ollama](https://ollama.ai/library/zephyr) to run inference locally on a Mac laptop.

## Environment Setup

Before using this template, you need to set up Ollama and a SQL database:

1. Follow instructions [here](https://python.langchain.com/docs/integrations/chat/ollama) to download Ollama.

2. Download your LLM of interest:

   * This package uses `zephyr`: `ollama pull zephyr` (a quick smoke test is sketched after this list)
   * You can choose from many LLMs [here](https://ollama.ai/library)

3. This package includes an example DB of 2023 NBA rosters. You can see instructions to build this DB [here](https://github.com/facebookresearch/llama-recipes/blob/main/demo_apps/StructuredLlama.ipynb).
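
A quick smoke test that steps 1 and 2 worked, before wiring up the full SQL chain (the prompt is illustrative):

```python
from langchain.chat_models import ChatOllama
from langchain.schema import HumanMessage

# Talks to the local Ollama server; the model name matches `ollama pull zephyr`
chat = ChatOllama(model="zephyr", temperature=0)
reply = chat([HumanMessage(content="Reply with the single word: ready")])
print(reply.content)
```
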

## Usage

To use this package, you should first have the LangChain CLI installed:

```shell
pip install -U "langchain-cli[serve]"
```

To create a new LangChain project and install this as the only package, you can do:

```shell
langchain app new my-app --package sql-ollama
```

If you want to add this to an existing project, you can just run:

```shell
langchain app add sql-ollama
```

And add the following code to your `server.py` file:

```python
from sql_ollama import chain as sql_ollama_chain

add_routes(app, sql_ollama_chain, path="/sql-ollama")
```

(Optional) Let's now configure LangSmith.
LangSmith will help us trace, monitor and debug LangChain applications.
LangSmith is currently in private beta; you can sign up [here](https://smith.langchain.com/).
If you don't have access, you can skip this section.

```shell
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project>  # if not specified, defaults to "default"
```

If you are inside this directory, then you can spin up a LangServe instance directly by:

```shell
langchain serve
```

This will start the FastAPI app with a server running locally at
[http://localhost:8000](http://localhost:8000)

We can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)
We can access the playground at [http://127.0.0.1:8000/sql-ollama/playground](http://127.0.0.1:8000/sql-ollama/playground)

We can access the template from code with:

```python
from langserve.client import RemoteRunnable

runnable = RemoteRunnable("http://localhost:8000/sql-ollama")
```

@ -1,9 +1,71 @@

# Step-Back Prompting (Question-Answering)

# stepback-qa-prompting

One prompting technique called "Step-Back" prompting can improve performance on complex questions by first asking a "step back" question. This can be combined with regular question-answering applications by then doing retrieval on both the original and step-back question.

This template replicates the "Step-Back" prompting technique that improves performance on complex questions by first asking a "step back" question.

Read the paper [here](https://arxiv.org/abs/2310.06117)

This technique can be combined with regular question-answering applications by doing retrieval on both the original and step-back question.

See an excellent blog post on this by Cobus Greyling [here](https://cobusgreyling.medium.com/a-new-prompt-engineering-technique-has-been-introduced-called-step-back-prompting-b00e8954cacb)

Read more about this in the paper [here](https://arxiv.org/abs/2310.06117) and an excellent blog post by Cobus Greyling [here](https://cobusgreyling.medium.com/a-new-prompt-engineering-technique-has-been-introduced-called-step-back-prompting-b00e8954cacb)

In this template we will replicate this technique. We modify the prompts used slightly to work better with chat models.

We will modify the prompts slightly to work better with chat models in this template.
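
Concretely, generating the step-back question is a small chain of its own. A minimal sketch with one few-shot example (the prompt wording is illustrative; the package ships the full prompt):

```python
from langchain.chat_models import ChatOpenAI
from langchain.prompts import ChatPromptTemplate
from langchain.schema.output_parser import StrOutputParser

prompt = ChatPromptTemplate.from_messages([
    ("system", "Paraphrase the user's question into a more generic, "
               "easier-to-answer 'step back' question."),
    # One few-shot example of the original -> step-back transformation
    ("human", "Could the members of The Police perform lawful arrests?"),
    ("ai", "What can the members of The Police do?"),
    ("human", "{question}"),
])
step_back = prompt | ChatOpenAI(temperature=0) | StrOutputParser()

step_back_question = step_back.invoke(
    {"question": "Was ChatGPT around while Trump was president?"}
)
# Retrieval then runs on both the original question and `step_back_question`.
```
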

## Environment Setup

Set the `OPENAI_API_KEY` environment variable to access the OpenAI models.

## Usage

To use this package, you should first have the LangChain CLI installed:

```shell
pip install -U "langchain-cli[serve]"
```

To create a new LangChain project and install this as the only package, you can do:

```shell
langchain app new my-app --package stepback-qa-prompting
```

If you want to add this to an existing project, you can just run:

```shell
langchain app add stepback-qa-prompting
```

And add the following code to your `server.py` file:

```python
from stepback_qa_prompting import chain as stepback_qa_prompting_chain

add_routes(app, stepback_qa_prompting_chain, path="/stepback-qa-prompting")
```

(Optional) Let's now configure LangSmith.
LangSmith will help us trace, monitor and debug LangChain applications.
LangSmith is currently in private beta; you can sign up [here](https://smith.langchain.com/).
If you don't have access, you can skip this section.

```shell
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project>  # if not specified, defaults to "default"
```

If you are inside this directory, then you can spin up a LangServe instance directly by:

```shell
langchain serve
```

This will start the FastAPI app with a server running locally at
[http://localhost:8000](http://localhost:8000)

We can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)
We can access the playground at [http://127.0.0.1:8000/stepback-qa-prompting/playground](http://127.0.0.1:8000/stepback-qa-prompting/playground)

We can access the template from code with:

```python
from langserve.client import RemoteRunnable

runnable = RemoteRunnable("http://localhost:8000/stepback-qa-prompting")
```

@ -1,16 +1,70 @@

# Summarize documents with Anthropic

This template uses Anthropic's `Claude2` to summarize documents.

To do this, we can use various prompts from LangChain hub, such as:

* [This fun summarization prompt](https://smith.langchain.com/hub/hwchase17/anthropic-paper-qa)
* [Chain of density summarization prompt](https://smith.langchain.com/hub/lawwu/chain_of_density)

`Claude2` has a large (100k token) context window, allowing us to summarize documents over 100 pages.

## LLM

This template will use `Claude2` by default.

Be sure that `ANTHROPIC_API_KEY` is set in your environment.

# summarize-anthropic

This template uses Anthropic's `Claude2` to summarize long documents.

It leverages a large context window of 100k tokens, allowing for summarization of documents over 100 pages.

You can see the summarization prompt in `chain.py`.

## Environment Setup

Set the `ANTHROPIC_API_KEY` environment variable to access the Anthropic models.
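
A minimal sketch of pulling one of the hub prompts listed above and pairing it with Claude2 (assumes the `langchainhub` package is installed; input keys vary by prompt, so inspect them before invoking):

```python
from langchain import hub
from langchain.chat_models import ChatAnthropic

prompt = hub.pull("lawwu/chain_of_density")
print(prompt.input_variables)  # check which inputs this prompt expects

chain = prompt | ChatAnthropic(model="claude-2", max_tokens_to_sample=1024)
# chain.invoke({...}) with the keys printed above and your document text
```
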

## Usage

To use this package, you should first have the LangChain CLI installed:

```shell
pip install -U "langchain-cli[serve]"
```

To create a new LangChain project and install this as the only package, you can do:

```shell
langchain app new my-app --package summarize-anthropic
```

If you want to add this to an existing project, you can just run:

```shell
langchain app add summarize-anthropic
```

And add the following code to your `server.py` file:

```python
from summarize_anthropic import chain as summarize_anthropic_chain

add_routes(app, summarize_anthropic_chain, path="/summarize-anthropic")
```

(Optional) Let's now configure LangSmith.
LangSmith will help us trace, monitor and debug LangChain applications.
LangSmith is currently in private beta; you can sign up [here](https://smith.langchain.com/).
If you don't have access, you can skip this section.

```shell
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project>  # if not specified, defaults to "default"
```

If you are inside this directory, then you can spin up a LangServe instance directly by:

```shell
langchain serve
```

This will start the FastAPI app with a server running locally at
[http://localhost:8000](http://localhost:8000)

We can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)
We can access the playground at [http://127.0.0.1:8000/summarize-anthropic/playground](http://127.0.0.1:8000/summarize-anthropic/playground)

We can access the template from code with:

```python
from langserve.client import RemoteRunnable

runnable = RemoteRunnable("http://localhost:8000/summarize-anthropic")
```

@ -1,17 +1,69 @@

# XML Agent

This template creates an agent that uses XML syntax to communicate its decisions of what actions to take.

For this example, we use Anthropic since Anthropic's Claude models are particularly good at writing XML syntax.

This example creates an agent that can optionally look up things on the internet using You.com's retriever.

## LLM

This template will use `Anthropic` by default.

Be sure that `ANTHROPIC_API_KEY` is set in your environment.

## Tools

This template will use `You.com` by default.

Be sure that `YDC_API_KEY` is set in your environment.

# xml-agent

This package creates an agent that uses XML syntax to communicate its decisions of what actions to take. It uses Anthropic's Claude models for writing XML syntax and can optionally look up things on the internet using DuckDuckGo.
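
For intuition, an agent turn in this style looks roughly like the following. This is an illustrative trace with `search` as a hypothetical tool name; the exact tag set comes from the package's prompt:

```xml
<tool>search</tool><tool_input>weather in New York</tool_input>
<observation>64 degrees and sunny</observation>
<final_answer>It is currently 64 degrees and sunny in New York.</final_answer>
```
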

## Environment Setup

The following environment variable needs to be set:

- `ANTHROPIC_API_KEY`: Required for using Anthropic

## Usage

To use this package, you should first have the LangChain CLI installed:

```shell
pip install -U "langchain-cli[serve]"
```

To create a new LangChain project and install this as the only package, you can do:

```shell
langchain app new my-app --package xml-agent
```

If you want to add this to an existing project, you can just run:

```shell
langchain app add xml-agent
```

And add the following code to your `server.py` file:

```python
from xml_agent import chain as xml_agent_chain

add_routes(app, xml_agent_chain, path="/xml-agent")
```

(Optional) Let's now configure LangSmith.
LangSmith will help us trace, monitor and debug LangChain applications.
LangSmith is currently in private beta; you can sign up [here](https://smith.langchain.com/).
If you don't have access, you can skip this section.

```shell
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project>  # if not specified, defaults to "default"
```

If you are inside this directory, then you can spin up a LangServe instance directly by:

```shell
langchain serve
```

This will start the FastAPI app with a server running locally at
[http://localhost:8000](http://localhost:8000)

We can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)
We can access the playground at [http://127.0.0.1:8000/xml-agent/playground](http://127.0.0.1:8000/xml-agent/playground)

We can access the template from code with:

```python
from langserve.client import RemoteRunnable

runnable = RemoteRunnable("http://localhost:8000/xml-agent")
```