Readme rewrite (#12615)

Co-authored-by: Lance Martin <lance@langchain.dev>
Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
Erick Friis 7 months ago committed by GitHub
parent 00766c9f31
commit a1fae1fddd

@ -71,7 +71,7 @@ def new(
readme = destination_dir / "README.md"
readme_contents = readme.read_text()
readme.write_text(
readme_contents.replace("__package_name_last__", package_name).replace(
readme_contents.replace("__package_name__", package_name).replace(
"__app_route_code__", app_route_code
)
)

@ -1,4 +1,4 @@
# __package_name_last__
# __package_name__
TODO: What does this package do

@ -1,3 +1,69 @@
# anthropic-iterative-search
Heavily inspired by [this notebook](https://github.com/anthropics/anthropic-cookbook/blob/main/long_context/wikipedia-search-cookbook.ipynb)
This template will create a virtual research assistant with the ability to search Wikipedia to find answers to your questions.
It is heavily inspired by [this notebook](https://github.com/anthropics/anthropic-cookbook/blob/main/long_context/wikipedia-search-cookbook.ipynb).
## Environment Setup
Set the `ANTHROPIC_API_KEY` environment variable to access the Anthropic models.
## Usage
To use this package, you should first have the LangChain CLI installed:
```shell
pip install -U "langchain-cli[serve]"
```
To create a new LangChain project and install this as the only package, you can do:
```shell
langchain app new my-app --package anthropic-iterative-search
```
If you want to add this to an existing project, you can just run:
```shell
langchain app add anthropic-iterative-search
```
And add the following code to your `server.py` file:
```python
from anthropic_iterative_search import chain as anthropic_iterative_search_chain
add_routes(app, anthropic_iterative_search_chain, path="/anthropic-iterative-search")
```
(Optional) Let's now configure LangSmith.
LangSmith will help us trace, monitor and debug LangChain applications.
LangSmith is currently in private beta, you can sign up [here](https://smith.langchain.com/).
If you don't have access, you can skip this section
```shell
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
```
If you are inside this directory, then you can spin up a LangServe instance directly by:
```shell
langchain serve
```
This will start the FastAPI app with a server running locally at
[http://localhost:8000](http://localhost:8000)
We can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)
We can access the playground at [http://127.0.0.1:8000/anthropic-iterative-search/playground](http://127.0.0.1:8000/anthropic-iterative-search/playground)
We can access the template from code with:
```python
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/anthropic-iterative-search")
```

@ -1,45 +1,86 @@
# RAG LangServe chain template
A basic chain template showing the RAG pattern using
a vector store on Astra DB / Apache Cassandra®.
# cassandra-entomology-rag
## Setup:
This template will perform RAG using Astra DB and Apache Cassandra®.
You need:
## Environment Setup
- an [Astra](https://astra.datastax.com) Vector Database (free tier is fine!). **You need a [Database Administrator token](https://awesome-astra.github.io/docs/pages/astra/create-token/#c-procedure)**, in particular the string starting with `AstraCS:...`;
- likewise, get your [Database ID](https://awesome-astra.github.io/docs/pages/astra/faq/#where-should-i-find-a-database-identifier) ready, you will have to enter it below;
- an **OpenAI API Key**. (More info [here](https://cassio.org/start_here/#llm-access), note that out-of-the-box this demo supports OpenAI unless you tinker with the code.)
For the setup, you will require:
- an [Astra](https://astra.datastax.com) Vector Database. You must have a [Database Administrator token](https://awesome-astra.github.io/docs/pages/astra/create-token/#c-procedure), specifically the string starting with `AstraCS:...`.
- [Database ID](https://awesome-astra.github.io/docs/pages/astra/faq/#where-should-i-find-a-database-identifier).
- an **OpenAI API Key**. (More info [here](https://cassio.org/start_here/#llm-access))
_Note:_ you can alternatively use a regular Cassandra cluster: to do so, make sure you provide the `USE_CASSANDRA_CLUSTER` entry as shown in `.env.template` and the subsequent environment variables to specify how to connect to it.
You may also use a regular Cassandra cluster. In this case, provide the `USE_CASSANDRA_CLUSTER` entry as shown in `.env.template` and the subsequent environment variables to specify how to connect to it.
You need to provide the connection parameters and secrets through environment variables. Please refer to `.env.template` for what variables are required.
The connection parameters and secrets must be provided through environment variables. Refer to `.env.template` for the required variables.
### Populate the vector store
## Usage
Make sure you have the environment variables all set (see previous section),
then, from this directory, launch the following just once:
To use this package, you should first have the LangChain CLI installed:
```shell
pip install -U "langchain-cli[serve]"
```
To create a new LangChain project and install this as the only package, you can do:
```shell
langchain app new my-app --package cassandra-entomology-rag
```
If you want to add this to an existing project, you can just run:
```shell
langchain app add cassandra-entomology-rag
```
And add the following code to your `server.py` file:
```python
from cassandra_entomology_rag import chain as cassandra_entomology_rag_chain
add_routes(app, cassandra_entomology_rag_chain, path="/cassandra-entomology-rag")
```
(Optional) Let's now configure LangSmith.
LangSmith will help us trace, monitor and debug LangChain applications.
LangSmith is currently in private beta, you can sign up [here](https://smith.langchain.com/).
If you don't have access, you can skip this section
```shell
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
```
To populate the vector store, ensure that you have set all the environment variables, then from this directory, execute the following just once:
```shell
poetry run bash -c "cd [...]/cassandra_entomology_rag; python setup.py"
```
The output will be something like `Done (29 lines inserted).`.
> **Note**: In a full application, the vector store might be populated in other ways:
> this step is to pre-populate the vector store with some rows for the
> demo RAG chains to sensibly work.
Note: In a full application, the vector store might be populated in other ways. This step is to pre-populate the vector store with some rows for the demo RAG chains to work sensibly.
If you are inside this directory, then you can spin up a LangServe instance directly by:
```shell
langchain serve
```
This will start the FastAPI app with a server running locally at
[http://localhost:8000](http://localhost:8000)
We can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)
We can access the playground at [http://127.0.0.1:8000/cassandra-entomology-rag/playground](http://127.0.0.1:8000/cassandra-entomology-rag/playground)
We can access the template from code with:
```python
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/cassandra-entomology-rag")
```
### Sample inputs
The chain's prompt is engineered to stay on topic and only use the provided context.
To put this to test, experiment with these example questions:
```
"Are there more coleoptera or bugs?"
"Do Odonata have wings?"
"Do birds have wings?" <-- no entomology here!
```
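As a quick smoke test, you can send one of the sample questions above from code. This is a sketch: whether the chain accepts a plain question string or a keyed dict depends on how it is declared in `chain.py`, so treat the input format as an assumption.
```python
from langserve.client import RemoteRunnable

runnable = RemoteRunnable("http://localhost:8000/cassandra-entomology-rag")

# Input format is an assumption; check chain.py if the chain expects a dict instead.
print(runnable.invoke("Do Odonata have wings?"))
```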
## Reference

@ -1,11 +1,11 @@
# LLM-cache LangServe chain template
A simple chain template showcasing usage of LLM Caching
backed by Astra DB / Apache Cassandra®.
# cassandra-synonym-caching
## Setup:
This template provides a simple chain template showcasing the usage of LLM Caching backed by Astra DB / Apache Cassandra®.
You need:
## Environment Setup
To set up your environment, you will need the following:
- an [Astra](https://astra.datastax.com) Vector Database (free tier is fine!). **You need a [Database Administrator token](https://awesome-astra.github.io/docs/pages/astra/create-token/#c-procedure)**, in particular the string starting with `AstraCS:...`;
- likewise, get your [Database ID](https://awesome-astra.github.io/docs/pages/astra/faq/#where-should-i-find-a-database-identifier) ready, you will have to enter it below;
@ -13,7 +13,64 @@ You need:
_Note:_ you can alternatively use a regular Cassandra cluster: to do so, make sure you provide the `USE_CASSANDRA_CLUSTER` entry as shown in `.env.template` and the subsequent environment variables to specify how to connect to it.
You need to provide the connection parameters and secrets through environment variables. Please refer to `.env.template` for what variables are required.
## Usage
To use this package, you should first have the LangChain CLI installed:
```shell
pip install -U "langchain-cli[serve]"
```
To create a new LangChain project and install this as the only package, you can do:
```shell
langchain app new my-app --package cassandra-synonym-caching
```
If you want to add this to an existing project, you can just run:
```shell
langchain app add cassandra-synonym-caching
```
And add the following code to your `server.py` file:
```python
from cassandra_synonym_caching import chain as cassandra_synonym_caching_chain
add_routes(app, cassandra_synonym_caching_chain, path="/cassandra-synonym-caching")
```
(Optional) Let's now configure LangSmith.
LangSmith will help us trace, monitor and debug LangChain applications.
LangSmith is currently in private beta, you can sign up [here](https://smith.langchain.com/).
If you don't have access, you can skip this section
```shell
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
```
If you are inside this directory, then you can spin up a LangServe instance directly by:
```shell
langchain serve
```
This will start the FastAPI app with a server running locally at
[http://localhost:8000](http://localhost:8000)
We can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)
We can access the playground at [http://127.0.0.1:8000/cassandra-synonym-caching/playground](http://127.0.0.1:8000/cassandra-synonym-caching/playground)
We can access the template from code with:
```python
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/cassandra-synonym-caching")
```
## Reference

@ -1,5 +1,69 @@
# csv-agent
This is a csv agent that uses both a Python REPL as well as a vectorstore to allow for interaction with text data.
This template uses a [csv agent](https://python.langchain.com/docs/integrations/toolkits/csv) with tools (Python REPL) and memory (vectorstore) for interaction (question-answering) with text data.
## Environment Setup
Set the `OPENAI_API_KEY` environment variable to access the OpenAI models.
To set up the environment, the `ingest.py` script should be run to handle the ingestion into a vectorstore.
## Usage
To use this package, you should first have the LangChain CLI installed:
```shell
pip install -U "langchain-cli[serve]"
```
To create a new LangChain project and install this as the only package, you can do:
```shell
langchain app new my-app --package csv-agent
```
If you want to add this to an existing project, you can just run:
```shell
langchain app add csv-agent
```
And add the following code to your `server.py` file:
```python
from csv_agent.agent import chain as csv_agent_chain
add_routes(app, csv_agent_chain, path="/csv-agent")
```
(Optional) Let's now configure LangSmith.
LangSmith will help us trace, monitor and debug LangChain applications.
LangSmith is currently in private beta, you can sign up [here](https://smith.langchain.com/).
If you don't have access, you can skip this section
```shell
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
```
If you are inside this directory, then you can spin up a LangServe instance directly by:
```shell
langchain serve
```
This will start the FastAPI app with a server running locally at
[http://localhost:8000](http://localhost:8000)
We can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)
We can access the playground at [http://127.0.0.1:8000/csv-agent/playground](http://127.0.0.1:8000/csv-agent/playground)
We can access the template from code with:
```python
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/csv-agent")
```

@ -1,31 +1,86 @@
# elastic-query-generator
We can use LLMs to interact with Elasticsearch analytics databases in natural language.
This chain builds search queries via the Elasticsearch DSL API (filters and aggregations).
The Elasticsearch client must have permissions for index listing, mapping description and search queries.
This template allows interacting with Elasticsearch analytics databases in natural language using LLMs.
It builds search queries via the Elasticsearch DSL API (filters and aggregations).
## Environment Setup
## Installing Elasticsearch
Set the `OPENAI_API_KEY` environment variable to access the OpenAI models.
There are a number of ways to run Elasticsearch.
### Installing Elasticsearch
### Elastic Cloud
There are a number of ways to run Elasticsearch. However, one recommended way is through Elastic Cloud.
Create a free trial account on [Elastic Cloud](https://cloud.elastic.co/registration?utm_source=langchain&utm_content=langserve).
With a deployment, update the connection string.
Password and connection (elasticsearch url) can be found on the deployment console.
Note that the Elasticsearch client must have permissions for index listing, mapping description, and search queries.
## Populating with data
### Populating with data
If you want to populate the DB with some example info, you can run `python ingest.py`.
This will create a `customers` index. In this package, we specify indexes to generate queries against, and we specify `["customers"]`. This is specific to setting up your Elastic index.
## Usage
To use this package, you should first have the LangChain CLI installed:
```shell
pip install -U "langchain-cli[serve]"
```
To create a new LangChain project and install this as the only package, you can do:
```shell
langchain app new my-app --package elastic-query-generator
```
If you want to add this to an existing project, you can just run:
```shell
langchain app add elastic-query-generator
```
And add the following code to your `server.py` file:
```python
from elastic_query_generator.chain import chain as elastic_query_generator_chain
add_routes(app, elastic_query_generator_chain, path="/elastic-query-generator")
```
(Optional) Let's now configure LangSmith.
LangSmith will help us trace, monitor and debug LangChain applications.
LangSmith is currently in private beta, you can sign up [here](https://smith.langchain.com/).
If you don't have access, you can skip this section
```shell
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
```
If you are inside this directory, then you can spin up a LangServe instance directly by:
```shell
langchain serve
```
This will start the FastAPI app with a server running locally at
[http://localhost:8000](http://localhost:8000)
We can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)
We can access the playground at [http://127.0.0.1:8000/elastic-query-generator/playground](http://127.0.0.1:8000/elastic-query-generator/playground)
We can access the template from code with:
```python
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/elastic-query-generator")
```

@ -1,14 +1,75 @@
# Extraction with Anthropic Function Calling
This template enables [Anthropic function calling](https://python.langchain.com/docs/integrations/chat/extraction_anthropic_functions).
This is a wrapper around Anthropic's API that uses prompting and output parsing to replicate the OpenAI functions experience.
# extraction-anthropic-functions
Specify the information you want to extract in `chain.py`
This template enables [Anthropic function calling](https://python.langchain.com/docs/integrations/chat/anthropic_functions).
By default, it will extract the title and author of papers.
This can be used for various tasks, such as extraction or tagging.
## LLM
The function output schema can be set in `chain.py`.
This template will use `Claude2` by default.
## Environment Setup
Be sure that `ANTHROPIC_API_KEY` is set in your environment.
Set the `ANTHROPIC_API_KEY` environment variable to access the Anthropic models.
## Usage
To use this package, you should first have the LangChain CLI installed:
```shell
pip install -U "langchain-cli[serve]"
```
To create a new LangChain project and install this as the only package, you can do:
```shell
langchain app new my-app --package extraction-anthropic-functions
```
If you want to add this to an existing project, you can just run:
```shell
langchain app add extraction-anthropic-functions
```
And add the following code to your `server.py` file:
```python
from extraction_anthropic_functions import chain as extraction_anthropic_functions_chain
add_routes(app, extraction_anthropic_functions_chain, path="/extraction-anthropic-functions")
```
(Optional) Let's now configure LangSmith.
LangSmith will help us trace, monitor and debug LangChain applications.
LangSmith is currently in private beta, you can sign up [here](https://smith.langchain.com/).
If you don't have access, you can skip this section
```shell
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
```
If you are inside this directory, then you can spin up a LangServe instance directly by:
```shell
langchain serve
```
This will start the FastAPI app with a server running locally at
[http://localhost:8000](http://localhost:8000)
We can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)
We can access the playground at [http://127.0.0.1:8000/extraction-anthropic-functions/playground](http://127.0.0.1:8000/extraction-anthropic-functions/playground)
We can access the template from code with:
```python
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/extraction-anthropic-functions")
```
By default, the package will extract the title and author of papers from the information you specify in `chain.py`. This template will use `Claude2` by default.
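For example, following the pattern used in the accompanying notebook (which passes a raw text string to the runnable), you could extract from a short snippet like this; the sample text is purely illustrative:
```python
from langserve.client import RemoteRunnable

runnable = RemoteRunnable("http://localhost:8000/extraction-anthropic-functions")

# Illustrative input text; any passage that mentions papers should work.
text = (
    "In 2017, the paper 'Attention Is All You Need' by Vaswani et al. "
    "introduced the Transformer architecture."
)
print(runnable.invoke(text))
```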
---

@ -30,34 +30,23 @@
"source": [
"## Run Template\n",
"\n",
"As shown in the README, add template and start server:\n",
"In `server.py`, set -\n",
"```\n",
"langchain app add extraction-anthropic-functions\n",
"langchain serve\n",
"```\n",
"\n",
"We can now look at the endpoints:\n",
"\n",
"http://127.0.0.1:8000/docs#\n",
"\n",
"And specifically at our loaded template:\n",
"\n",
"http://127.0.0.1:8000/docs#/default/invoke_extraction-anthropic-functions_invoke_post\n",
" \n",
"We can also use remote runnable to call it:"
"add_routes(app, chain_ext, path=\"/extraction-anthropic-functions\")\n",
"```"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "92edba86",
"execution_count": null,
"id": "5fd794ec-a002-490e-8eb9-06ce3e6c2f14",
"metadata": {},
"outputs": [],
"source": [
"from langserve.client import RemoteRunnable\n",
"\n",
"anthropic_function_model = RemoteRunnable(\n",
" \"http://localhost:8000/extraction-anthropic-functions\"\n",
" \"http://localhost:8001/extraction-anthropic-functions\"\n",
")\n",
"anthropic_function_model.invoke(text[0].page_content[0:1500])"
]

@ -1,13 +1,72 @@
# Extraction with OpenAI Function Calling
This template shows how to do extraction of structured data from unstructured data, using OpenAI [function calling](https://python.langchain.com/docs/modules/chains/how_to/openai_functions).
# extraction-openai-functions
Specify the information you want to extract in `chain.py`
This template uses [OpenAI function calling](https://python.langchain.com/docs/modules/chains/how_to/openai_functions) for extraction of structured output from unstructured input text.
By default, it will extract the title and author of papers.
The extraction output schema can be set in `chain.py`.
## Environment Setup
This template will use `OpenAI` by default.
Set the `OPENAI_API_KEY` environment variable to access the OpenAI models.
Be sure that `OPENAI_API_KEY` is set in your environment.
## Usage
To use this package, you should first have the LangChain CLI installed:
```shell
pip install -U "langchain-cli[serve]"
```
To create a new LangChain project and install this as the only package, you can do:
```shell
langchain app new my-app --package extraction-openai-functions
```
If you want to add this to an existing project, you can just run:
```shell
langchain app add extraction-openai-functions
```
And add the following code to your `server.py` file:
```python
from extraction_openai_functions import chain as extraction_openai_functions_chain
add_routes(app, extraction_openai_functions_chain, path="/extraction-openai-functions")
```
(Optional) Let's now configure LangSmith.
LangSmith will help us trace, monitor and debug LangChain applications.
LangSmith is currently in private beta, you can sign up [here](https://smith.langchain.com/).
If you don't have access, you can skip this section
```shell
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
```
If you are inside this directory, then you can spin up a LangServe instance directly by:
```shell
langchain serve
```
This will start the FastAPI app with a server running locally at
[http://localhost:8000](http://localhost:8000)
We can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)
We can access the playground at [http://127.0.0.1:8000/extraction-openai-functions/playground](http://127.0.0.1:8000/extraction-openai-functions/playground)
We can access the template from code with:
```python
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/extraction-openai-functions")
```
By default, this package is set to extract the title and author of papers, as specified in the `chain.py` file.
The OpenAI LLM and its function-calling support are leveraged by default.

@ -1,9 +1,73 @@
# guardrails-output-parser
Uses [guardrails-ai](https://github.com/guardrails-ai/guardrails) to validate output.
This template uses [guardrails-ai](https://github.com/guardrails-ai/guardrails) to validate LLM output.
The `GuardrailsOutputParser` is set in `chain.py`.
The default example protects against profanity.
## Environment Setup
Set the `OPENAI_API_KEY` environment variable to access the OpenAI models.
## Usage
To use this package, you should first have the LangChain CLI installed:
```shell
pip install -U "langchain-cli[serve]"
```
To create a new LangChain project and install this as the only package, you can do:
```shell
langchain app new my-app --package guardrails-output-parser
```
If you want to add this to an existing project, you can just run:
```shell
langchain app add guardrails-output-parser
```
And add the following code to your `server.py` file:
```python
from guardrails_output_parser import chain as guardrails_output_parser_chain
add_routes(app, guardrails_output_parser_chain, path="/guardrails-output-parser")
```
(Optional) Let's now configure LangSmith.
LangSmith will help us trace, monitor and debug LangChain applications.
LangSmith is currently in private beta, you can sign up [here](https://smith.langchain.com/).
If you don't have access, you can skip this section
```shell
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
```
If you are inside this directory, then you can spin up a LangServe instance directly by:
```shell
langchain serve
```
This will start the FastAPI app with a server running locally at
[http://localhost:8000](http://localhost:8000)
We can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)
We can access the playground at [http://127.0.0.1:8000/guardrails-output-parser/playground](http://127.0.0.1:8000/guardrails-output-parser/playground)
We can access the template from code with:
```python
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/guardrails-output-parser")
```
This example protects against profanity, but with Guardrails you can protect against a multitude of things.
If Guardrails does not find any profanity, then the translated output is returned as is. If Guardrails does find profanity, then an empty string is returned.

@ -1,5 +1,5 @@
[tool.poetry]
name = "guardrails_output_parser"
name = "guardrails-output-parser"
version = "0.0.1"
description = ""
authors = []

@ -1,9 +1,76 @@
# HyDE
Hypothetical Document Embeddings (HyDE) are a method to improve retrieval.
To do this, a hypothetical document is generated for an incoming query.
That document is then embedded, and that embedding is used to look up real documents similar to that hypothetical document.
The idea behind this is that the hypothetical document may be closer in the embedding space than the query.
For a more detailed description, read the full paper [here](https://arxiv.org/abs/2212.10496).
# hyde
This template uses HyDE with RAG.
HyDE (Hypothetical Document Embeddings) is a retrieval method that enhances retrieval by generating a hypothetical document for an incoming query.
The document is then embedded, and that embedding is utilized to look up real documents that are similar to the hypothetical document.
The underlying concept is that the hypothetical document may be closer in the embedding space than the query.
For a more detailed description, see the paper [here](https://arxiv.org/abs/2212.10496).
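To make the idea concrete, here is a minimal sketch of the HyDE pattern on its own, assuming an OpenAI API key and the `faiss-cpu` package; the template's chain wires the equivalent steps together for you in `chain.py`.
```python
from langchain.chat_models import ChatOpenAI
from langchain.embeddings import OpenAIEmbeddings
from langchain.prompts import ChatPromptTemplate
from langchain.schema.output_parser import StrOutputParser
from langchain.vectorstores import FAISS

embeddings = OpenAIEmbeddings()

# A toy vector store standing in for your real document index.
vectorstore = FAISS.from_texts(
    [
        "Green tea contains antioxidants called catechins.",
        "Black tea is fully oxidized before drying.",
    ],
    embeddings,
)

# 1. Generate a hypothetical document that answers the question.
prompt = ChatPromptTemplate.from_template(
    "Write a short passage that answers the question.\nQuestion: {question}\nPassage:"
)
hypothetical_doc = (prompt | ChatOpenAI() | StrOutputParser()).invoke(
    {"question": "What are the health benefits of green tea?"}
)

# 2. Embed the hypothetical document instead of the raw question.
vector = embeddings.embed_query(hypothetical_doc)

# 3. Retrieve real documents that are close to the hypothetical one.
docs = vectorstore.similarity_search_by_vector(vector, k=1)
print(docs[0].page_content)
```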
## Environment Setup
Set the `OPENAI_API_KEY` environment variable to access the OpenAI models.
## Usage
To use this package, you should first have the LangChain CLI installed:
```shell
pip install -U "langchain-cli[serve]"
```
To create a new LangChain project and install this as the only package, you can do:
```shell
langchain app new my-app --package hyde
```
If you want to add this to an existing project, you can just run:
```shell
langchain app add hyde
```
And add the following code to your `server.py` file:
```python
from hyde.chain import chain as hyde_chain
add_routes(app, hyde_chain, path="/hyde")
```
(Optional) Let's now configure LangSmith.
LangSmith will help us trace, monitor and debug LangChain applications.
LangSmith is currently in private beta, you can sign up [here](https://smith.langchain.com/).
If you don't have access, you can skip this section
```shell
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
```
If you are inside this directory, then you can spin up a LangServe instance directly by:
```shell
langchain serve
```
This will start the FastAPI app with a server running locally at
[http://localhost:8000](http://localhost:8000)
We can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)
We can access the playground at [http://127.0.0.1:8000/hyde/playground](http://127.0.0.1:8000/hyde/playground)
We can access the template from code with:
```python
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/hyde")
```
For this example, we use a simple RAG architecture, although you can easily use this technique in other more complicated architectures.

@ -1,18 +1,70 @@
# Extraction with LLaMA2 Function Calling
This template shows how to do extraction of structured data from unstructured data, using LLaMA2 [fine-tuned for grammars and jsonschema](https://replicate.com/andreasjansson/llama-2-13b-chat-gguf).
# llama2-functions
[Query transformations](https://blog.langchain.dev/query-transformations/) are one great application area for open source, private LLMs:
This template performs extraction of structured data from unstructured data using a [LLaMA2 model that supports a specified JSON output schema](https://github.com/ggerganov/llama.cpp/blob/master/grammars/README.md).
* The tasks are often narrow and well-defined (e.g., generate multiple questions from a user input)
* They also are tasks that users may want to run locally (e.g., in a RAG workflow)
The extraction schema can be set in `chain.py`.
Specify the schema you want to extract in `chain.py`.
## Environment Setup
This will use a [LLaMA2-13b model hosted by Replicate](https://replicate.com/andreasjansson/llama-2-13b-chat-gguf/versions).
This template will use a `Replicate` [hosted version](https://replicate.com/andreasjansson/llama-2-13b-chat-gguf) of LLaMA2 that has support for grammars and jsonschema.
Ensure that `REPLICATE_API_TOKEN` is set in your environment.
Based on the `Replicate` example, the JSON schema is supplied directly in the prompt.
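For illustration only, a schema along these lines (written here as a Python dict) could be supplied in the prompt to have the model emit multiple rephrased questions; the schema this template actually uses lives in `chain.py`.
```python
# Purely illustrative; the template's real schema is defined in chain.py.
question_generation_schema = {
    "type": "object",
    "properties": {
        "questions": {
            "type": "array",
            "items": {"type": "string"},
            "description": "Rephrasings of the user's original question",
        }
    },
    "required": ["questions"],
}
```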
## Usage
Be sure that `REPLICATE_API_TOKEN` is set in your environment.
To use this package, you should first have the LangChain CLI installed:
```shell
pip install -U "langchain-cli[serve]"
```
To create a new LangChain project and install this as the only package, you can do:
```shell
langchain app new my-app --package llama2-functions
```
If you want to add this to an existing project, you can just run:
```shell
langchain app add llama2-functions
```
And add the following code to your `server.py` file:
```python
from llama2_functions import chain as llama2_functions_chain
add_routes(app, llama2_functions_chain, path="/llama2-functions")
```
(Optional) Let's now configure LangSmith.
LangSmith will help us trace, monitor and debug LangChain applications.
LangSmith is currently in private beta, you can sign up [here](https://smith.langchain.com/).
If you don't have access, you can skip this section
```shell
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
```
If you are inside this directory, then you can spin up a LangServe instance directly by:
```shell
langchain serve
```
This will start the FastAPI app with a server running locally at
[http://localhost:8000](http://localhost:8000)
We can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)
We can access the playground at [http://127.0.0.1:8000/llama2-functions/playground](http://127.0.0.1:8000/llama2-functions/playground)
We can access the template from code with:
```python
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/llama2-functions")
```

@ -1,24 +1,17 @@
# Neo4j Knowledge Graph: Enhanced mapping from text to database using a full-text index
This template allows you to chat with Neo4j graph database in natural language, using an OpenAI LLM.
Its primary purpose is to convert a natural language question into a Cypher query (which is used to query Neo4j databases),
execute the query, and then provide a natural language response based on the query's results.
The addition of the full-text index ensures efficient mapping of values from text to database for more precise Cypher statement generation.
In this example, full-text index is used to map names of people and movies from the user's query with corresponding database entries.
# neo4j-cypher-ft
## Neo4j database
This template allows you to interact with a Neo4j graph database using natural language, leveraging OpenAI's LLM.
There are a number of ways to set up a Neo4j database.
Its main function is to convert natural language questions into Cypher queries (the language used to query Neo4j databases), execute these queries, and provide natural language responses based on the query's results.
### Neo4j Aura
The package utilizes a full-text index for efficient mapping of text values to database entries, thereby enhancing the generation of accurate Cypher statements.
Neo4j AuraDB is a fully managed cloud graph database service.
Create a free instance on [Neo4j Aura](https://neo4j.com/cloud/platform/aura-graph-database?utm_source=langchain&utm_content=langserve).
When you initiate a free database instance, you'll receive credentials to access the database.
In the provided example, the full-text index is used to map names of people and movies from the user's query to corresponding database entries.
## Environment Setup
You need to define the following environment variables
The following environment variables need to be set:
```
OPENAI_API_KEY=<YOUR_OPENAI_API_KEY>
@ -27,16 +20,65 @@ NEO4J_USERNAME=<YOUR_NEO4J_USERNAME>
NEO4J_PASSWORD=<YOUR_NEO4J_PASSWORD>
```
## Populating with data
Additionally, if you wish to populate the DB with some example data, you can run `python ingest.py`.
This script will populate the database with sample movie data and create a full-text index named `entity`, which is used to map person and movies from user input to database values for precise Cypher statement generation.
If you want to populate the DB with some example data, you can run `python ingest.py`.
This script will populate the database with sample movie data.
Additionally, it will create a full-text index named `entity`, which is used to
map person and movies from user input to database values for precise Cypher statement generation.
## Usage
To use this package, you should first have the LangChain CLI installed:
```shell
pip install -U "langchain-cli[serve]"
```
To create a new LangChain project and install this as the only package, you can do:
```shell
langchain app new my-app --package neo4j-cypher-ft
```
If you want to add this to an existing project, you can just run:
```shell
langchain app add neo4j-cypher-ft
```
And add the following code to your `server.py` file:
```python
from neo4j_cypher_ft import chain as neo4j_cypher_ft_chain
add_routes(app, neo4j_cypher_ft_chain, path="/neo4j-cypher-ft")
```
(Optional) Let's now configure LangSmith.
LangSmith will help us trace, monitor and debug LangChain applications.
LangSmith is currently in private beta, you can sign up [here](https://smith.langchain.com/).
If you don't have access, you can skip this section
```shell
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
```
If you are inside this directory, then you can spin up a LangServe instance directly by:
```shell
langchain serve
```
This will start the FastAPI app with a server running locally at
[http://localhost:8000](http://localhost:8000)
We can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)
We can access the playground at [http://127.0.0.1:8000/neo4j-cypher-ft/playground](http://127.0.0.1:8000/neo4j-cypher-ft/playground)
We can access the template from code with:
```python
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/neo4j-cypher-ft")
```

@ -1,22 +1,13 @@
# Neo4j Knowledge Graph with OpenAI LLMs
This template allows you to chat with Neo4j graph database in natural language, using an OpenAI LLM.
Its primary purpose is to convert a natural language question into a Cypher query (which is used to query Neo4j databases),
execute the query, and then provide a natural language response based on the query's results.
# neo4j_cypher
## Neo4j database
This template allows you to interact with a Neo4j graph database in natural language, using an OpenAI LLM.
There are a number of ways to set up a Neo4j database.
### Neo4j Aura
Neo4j AuraDB is a fully managed cloud graph database service.
Create a free instance on [Neo4j Aura](https://neo4j.com/cloud/platform/aura-graph-database?utm_source=langchain&utm_content=langserve).
When you initiate a free database instance, you'll receive credentials to access the database.
It transforms a natural language question into a Cypher query (used to fetch data from Neo4j databases), executes the query, and provides a natural language response based on the query results.
## Environment Setup
You need to define the following environment variables
Define the following environment variables:
```
OPENAI_API_KEY=<YOUR_OPENAI_API_KEY>
@ -25,14 +16,75 @@ NEO4J_USERNAME=<YOUR_NEO4J_USERNAME>
NEO4J_PASSWORD=<YOUR_NEO4J_PASSWORD>
```
## Neo4j database setup
There are a number of ways to set up a Neo4j database.
### Neo4j Aura
Neo4j AuraDB is a fully managed cloud graph database service.
Create a free instance on [Neo4j Aura](https://neo4j.com/cloud/platform/aura-graph-database?utm_source=langchain&utm_content=langserve).
When you initiate a free database instance, you'll receive credentials to access the database.
## Populating with data
If you want to populate the DB with some example data, you can run `python ingest.py`.
This script will populate the database with sample movie data.
## Installation
## Usage
To use this package, you should first have the LangChain CLI installed:
```shell
pip install -U "langchain-cli[serve]"
```
To create a new LangChain project and install this as the only package, you can do:
```shell
langchain app new my-app --package neo4j_cypher
```
If you want to add this to an existing project, you can just run:
```shell
langchain app add neo4j_cypher
```
And add the following code to your `server.py` file:
```python
from neo4j_cypher import chain as neo4j_cypher_chain
add_routes(app, neo4j_cypher_chain, path="/neo4j-cypher")
```
(Optional) Let's now configure LangSmith.
LangSmith will help us trace, monitor and debug LangChain applications.
LangSmith is currently in private beta, you can sign up [here](https://smith.langchain.com/).
If you don't have access, you can skip this section
```shell
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
```
If you are inside this directory, then you can spin up a LangServe instance directly by:
```shell
langchain serve
```
This will start the FastAPI app with a server running locally at
[http://localhost:8000](http://localhost:8000)
We can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)
We can access the playground at [http://127.0.0.1:8000/neo4j_cypher/playground](http://127.0.0.1:8000/neo4j_cypher/playground)
We can access the template from code with:
```python
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/neo4j_cypher")
```

@ -1,5 +1,5 @@
[tool.poetry]
name = "neo4j_cypher"
name = "neo4j-cypher"
version = "0.1.0"
description = ""
authors = ["Tomaz Bratanic <tomaz.bratanic@neo4j.com>"]

@ -1,31 +1,16 @@
# Graph Generation Chain for Neo4j Knowledge Graph
# neo4j-generation
Harness the power of natural language understanding of LLMs and convert plain text into structured knowledge graphs with the Graph Generation Chain.
This chain uses OpenAI's LLM to construct a knowledge graph in Neo4j.
Leveraging OpenAI Functions capabilities, the Graph Generation Chain efficiently extracts structured information from text.
The chain has the following input parameters:
The neo4j-generation template is designed to convert plain text into structured knowledge graphs.
* text (str): The input text from which the information will be extracted to construct the graph.
* allowed_nodes (Optional[List[str]]): A list of node labels to guide the extraction process.
If not provided, extraction won't have specific restriction on node labels.
* allowed_relationships (Optional[List[str]]): A list of relationship types to guide the extraction process.
If not provided, extraction won't have specific restriction on relationship types.
By using OpenAI's language model, it can efficiently extract structured information from text and construct a knowledge graph in Neo4j.
Find more details in [this blog post](https://blog.langchain.dev/constructing-knowledge-graphs-from-text-using-openai-functions/).
This package is flexible and allows users to guide the extraction process by specifying a list of node labels and relationship types.
## Neo4j database
For more details on the functionality and capabilities of this package, please refer to [this blog post](https://blog.langchain.dev/constructing-knowledge-graphs-from-text-using-openai-functions/).
There are a number of ways to set up a Neo4j database.
## Environment Setup
### Neo4j Aura
Neo4j AuraDB is a fully managed cloud graph database service.
Create a free instance on [Neo4j Aura](https://neo4j.com/cloud/platform/aura-graph-database?utm_source=langchain&utm_content=langserve).
When you initiate a free database instance, you'll receive credentials to access the database.
## Environment variables
You need to define the following environment variables
You need to set the following environment variables:
```
OPENAI_API_KEY=<YOUR_OPENAI_API_KEY>
@ -34,11 +19,61 @@ NEO4J_USERNAME=<YOUR_NEO4J_USERNAME>
NEO4J_PASSWORD=<YOUR_NEO4J_PASSWORD>
```
## Usage
To use this package, you should first have the LangChain CLI installed:
```shell
pip install -U "langchain-cli[serve]"
```
To create a new LangChain project and install this as the only package, you can do:
```shell
langchain app new my-app --package neo4j-generation
```
If you want to add this to an existing project, you can just run:
```shell
langchain app add neo4j-generation
```
And add the following code to your `server.py` file:
```python
from neo4j_generation import chain as neo4j_generation_chain
add_routes(app, neo4j_generation_chain, path="/neo4j-generation")
```
(Optional) Let's now configure LangSmith.
LangSmith will help us trace, monitor and debug LangChain applications.
LangSmith is currently in private beta, you can sign up [here](https://smith.langchain.com/).
If you don't have access, you can skip this section
```shell
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
```
If you are inside this directory, then you can spin up a LangServe instance directly by:
```shell
langchain serve
```
This will start the FastAPI app with a server running locally at
[http://localhost:8000](http://localhost:8000)
We can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)
We can access the playground at [http://127.0.0.1:8000/neo4j-generation/playground](http://127.0.0.1:8000/neo4j-generation/playground)
We can access the template from code with:
```python
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/neo4j-generation")
```
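When calling the template, the documented inputs (`text`, plus the optional `allowed_nodes` and `allowed_relationships` lists described above) are what you pass in. The exact payload shape depends on how the chain is defined in the package, so treat the dict below as a sketch rather than a guaranteed contract:
```python
from langserve.client import RemoteRunnable

runnable = RemoteRunnable("http://localhost:8000/neo4j-generation")

# Payload shape is an assumption based on the documented input parameters.
runnable.invoke(
    {
        "text": "Marie Curie worked at the University of Paris and won the Nobel Prize.",
        "allowed_nodes": ["Person", "Organization", "Award"],
        "allowed_relationships": ["WORKED_AT", "WON"],
    }
)
```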

@ -1,19 +1,11 @@
# Parent Document Retriever with Neo4j Vector Index
This template allows you to balance precise embeddings and context retention by splitting documents into smaller chunks and retrieving their original or larger text information.
Using a Neo4j vector index, the template queries child nodes using vector similarity search and retrieves the corresponding parent's text by defining an appropriate `retrieval_query` parameter.
# neo4j-parent
## Neo4j database
There are a number of ways to set up a Neo4j database.
Using a Neo4j vector index, the package queries child nodes using vector similarity search and retrieves the corresponding parent's text by defining an appropriate `retrieval_query` parameter.
### Neo4j Aura
Neo4j AuraDB is a fully managed cloud graph database service.
Create a free instance on [Neo4j Aura](https://neo4j.com/cloud/platform/aura-graph-database?utm_source=langchain&utm_content=langserve).
When you initiate a free database instance, you'll receive credentials to access the database.
## Environment Setup
You need to define the following environment variables
@ -32,9 +24,61 @@ First, the text is divided into larger chunks ("parents") and then further subdi
After storing these chunks in the database, embeddings for the child nodes are computed using OpenAI's embeddings and stored back in the graph for future retrieval or analysis.
Additionally, a vector index named `retrieval` is created for efficient querying of these embeddings.
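If you want to experiment with the index outside the template, the same parent-lookup idea can be sketched with LangChain's `Neo4jVector` and a custom `retrieval_query`. The relationship and property names below are assumptions; check `ingest.py` and `chain.py` for the schema this package actually creates, and make sure the `NEO4J_*` connection variables from the Environment Setup section are exported.
```python
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import Neo4jVector

# Assumed graph shape: child chunks carry embeddings and link to a parent node holding the full text.
retrieval_query = """
MATCH (node)<-[:HAS_CHILD]-(parent)
RETURN parent.text AS text, score, {} AS metadata
"""

# Connection details are read from the NEO4J_URI / NEO4J_USERNAME / NEO4J_PASSWORD environment variables.
vectorstore = Neo4jVector.from_existing_index(
    OpenAIEmbeddings(),
    index_name="retrieval",
    retrieval_query=retrieval_query,
)
docs = vectorstore.similarity_search("What is this document about?")
```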
## Usage
To use this package, you should first have the LangChain CLI installed:
```shell
pip install -U "langchain-cli[serve]"
```
To create a new LangChain project and install this as the only package, you can do:
```shell
langchain app new my-app --package neo4j-parent
```
If you want to add this to an existing project, you can just run:
```shell
langchain app add neo4j-parent
```
And add the following code to your `server.py` file:
```python
from neo4j_parent import chain as neo4j_parent_chain
add_routes(app, neo4j_parent_chain, path="/neo4j-parent")
```
(Optional) Let's now configure LangSmith.
LangSmith will help us trace, monitor and debug LangChain applications.
LangSmith is currently in private beta, you can sign up [here](https://smith.langchain.com/).
If you don't have access, you can skip this section
```shell
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
```
If you are inside this directory, then you can spin up a LangServe instance directly by:
```shell
langchain serve
```
This will start the FastAPI app with a server running locally at
[http://localhost:8000](http://localhost:8000)
We can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)
We can access the playground at [http://127.0.0.1:8000/neo4j-parent/playground](http://127.0.0.1:8000/neo4j-parent/playground)
We can access the template from code with:
```python
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/neo4j-parent")
```

@ -1,16 +1,72 @@
# OpenAI Functions Agent
This template creates an agent that uses OpenAI function calling to communicate its decisions of what actions to take.
This example creates an agent that can optionally look up things on the internet using Tavily's search engine.
# openai-functions-agent
This template creates an agent that uses OpenAI function calling to communicate its decisions on what actions to take.
This template will use `OpenAI` by default.
This example creates an agent that can optionally look up information on the internet using Tavily's search engine.
Be sure that `OPENAI_API_KEY` is set in your environment.
## Environment Setup
The following environment variables need to be set:
This template will use `Tavily` by default.
Set the `OPENAI_API_KEY` environment variable to access the OpenAI models.
Be sure that `TAVILY_API_KEY` is set in your environment.
Set the `TAVILY_API_KEY` environment variable to access Tavily.
## Usage
To use this package, you should first have the LangChain CLI installed:
```shell
pip install -U "langchain-cli[serve]"
```
To create a new LangChain project and install this as the only package, you can do:
```shell
langchain app new my-app --package openai-functions-agent
```
If you want to add this to an existing project, you can just run:
```shell
langchain app add openai-functions-agent
```
And add the following code to your `server.py` file:
```python
from openai_functions_agent import chain as openai_functions_agent_chain
add_routes(app, openai_functions_agent_chain, path="/openai-functions-agent")
```
(Optional) Let's now configure LangSmith.
LangSmith will help us trace, monitor and debug LangChain applications.
LangSmith is currently in private beta, you can sign up [here](https://smith.langchain.com/).
If you don't have access, you can skip this section
```shell
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
```
If you are inside this directory, then you can spin up a LangServe instance directly by:
```shell
langchain serve
```
This will start the FastAPI app with a server running locally at
[http://localhost:8000](http://localhost:8000)
We can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)
We can access the playground at [http://127.0.0.1:8000/openai-functions-agent/playground](http://127.0.0.1:8000/openai-functions-agent/playground)
We can access the template from code with:
```python
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/openai-functions-agent")
```

@ -1,9 +1,67 @@
# pirate-speak
This simple application converts user input into pirate speak
This template converts user input into pirate speak.
## Environment Setup
Set the `OPENAI_API_KEY` environment variable to access the OpenAI models.
## Usage
To use this package, you should first have the LangChain CLI installed:
```shell
pip install -U "langchain-cli[serve]"
```
To create a new LangChain project and install this as the only package, you can do:
```shell
langchain app new my-app --package pirate-speak
```
If you want to add this to an existing project, you can just run:
```shell
langchain app add pirate-speak
```
And add the following code to your `server.py` file:
```python
from pirate_speak import chain as pirate_speak_chain
add_routes(app, pirate_speak_chain, path="/pirate-speak")
```
(Optional) Let's now configure LangSmith.
LangSmith will help us trace, monitor and debug LangChain applications.
LangSmith is currently in private beta, you can sign up [here](https://smith.langchain.com/).
If you don't have access, you can skip this section
```shell
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
```
If you are inside this directory, then you can spin up a LangServe instance directly by:
```shell
langchain serve
```
This will start the FastAPI app with a server running locally at
[http://localhost:8000](http://localhost:8000)
We can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)
We can access the playground at [http://127.0.0.1:8000/pirate-speak/playground](http://127.0.0.1:8000/pirate-speak/playground)
We can access the template from code with:
```python
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/pirate-speak")
```
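For example (here the `text` input key is an assumption — check the prompt defined in the package for the actual variable name):
```python
from langserve.client import RemoteRunnable

runnable = RemoteRunnable("http://localhost:8000/pirate-speak")

# "text" is assumed as the prompt's input variable; adjust if the chain uses a different key.
runnable.invoke({"text": "Hello, how are you today?"})
```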

@ -1,5 +1,5 @@
[tool.poetry]
name = "pirate_speak"
name = "pirate-speak"
version = "0.0.1"
description = ""
authors = []

@ -1 +1,68 @@
# plate-chain
This template enables parsing of data from laboratory plates.
In the context of biochemistry or molecular biology, laboratory plates are commonly used tools to hold samples in a grid-like format.
This template can parse the resulting data into a standardized format (e.g., JSON) for further processing.
## Environment Setup
Set the `OPENAI_API_KEY` environment variable to access the OpenAI models.
## Usage
To utilize plate-chain, you must have the LangChain CLI installed:
```shell
pip install -U "langchain-cli[serve]"
```
Creating a new LangChain project and installing plate-chain as the only package can be done with:
```shell
langchain app new my-app --package plate-chain
```
If you wish to add this to an existing project, simply run:
```shell
langchain app add plate-chain
```
Then add the following code to your `server.py` file:
```python
from plate_chain import chain as plate_chain_chain
add_routes(app, plate_chain_chain, path="/plate-chain")
```
(Optional) For configuring LangSmith, which helps trace, monitor and debug LangChain applications, use the following code:
```shell
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
```
If you're in this directory, you can start a LangServe instance directly by:
```shell
langchain serve
```
This starts the FastAPI app with a server running locally at
[http://localhost:8000](http://localhost:8000)
All templates can be viewed at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)
Access the playground at [http://127.0.0.1:8000/plate-chain/playground](http://127.0.0.1:8000/plate-chain/playground)
You can access the template from code with:
```python
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/plate-chain")
```

@ -1,5 +1,5 @@
[tool.poetry]
name = "plate_chain"
name = "plate-chain"
version = "0.0.1"
description = ""
authors = []
@ -17,7 +17,7 @@ fastapi = "^0.104.0"
sse-starlette = "^1.6.5"
[tool.langserve]
export_module = "plate_chain.__init__"
export_module = "plate_chain"
export_attr = "chain"
[build-system]

@ -1,27 +1,78 @@
# RAG AWS Bedrock
AWS Bedrock is a managed service that offers a set of foundation models.
# rag-aws-bedrock
Here we will use `Anthropic Claude` for text generation and `Amazon Titan` for text embedding.
This template is designed to connect with AWS Bedrock, a managed service that offers a set of foundation models.
We will use FAISS as our vectorstore.
It primarily uses the `Anthropic Claude` for text generation and `Amazon Titan` for text embedding, and utilizes FAISS as the vectorstore.
(See [this notebook](https://github.com/aws-samples/amazon-bedrock-workshop/blob/main/03_QuestionAnswering/01_qa_w_rag_claude.ipynb) for additional context on the RAG pipeline.)
For additional context on the RAG pipeline, refer to [this notebook](https://github.com/aws-samples/amazon-bedrock-workshop/blob/main/03_QuestionAnswering/01_qa_w_rag_claude.ipynb).
Code here uses the `boto3` library to connect with the Bedrock service. See [this page](https://boto3.amazonaws.com/v1/documentation/api/latest/guide/quickstart.html#configuration) for setting up and configuring boto3 to work with an AWS account.
## Environment Setup
## FAISS
Before you can use this package, ensure that you have configured `boto3` to work with your AWS account.
You need to install the `faiss-cpu` package to work with the FAISS vector store.
For details on how to set up and configure `boto3`, visit [this page](https://boto3.amazonaws.com/v1/documentation/api/latest/guide/quickstart.html#configuration).
In addition, you need to install the `faiss-cpu` package to work with the FAISS vector store:
```bash
pip install faiss-cpu
```
## LLM and Embeddings
The code assumes that you are working with the `default` AWS profile and `us-east-1` region. If not, specify these environment variables to reflect the correct region and AWS profile.
You should also set the following environment variables to reflect your AWS profile and region (if you're not using the `default` AWS profile and `us-east-1` region):
* `AWS_DEFAULT_REGION`
* `AWS_PROFILE`
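For example (the values shown are placeholders for your own region and profile):
```shell
export AWS_DEFAULT_REGION=us-east-1
export AWS_PROFILE=default
```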
## Usage
First, install the LangChain CLI:
```shell
pip install -U "langchain-cli[serve]"
```
To create a new LangChain project and install this as the only package:
```shell
langchain app new my-app --package rag-aws-bedrock
```
To add this package to an existing project:
```shell
langchain app add rag-aws-bedrock
```
Then add the following code to your `server.py` file:
```python
from rag_aws_bedrock import chain as rag_aws_bedrock_chain
add_routes(app, rag_aws_bedrock_chain, path="/rag-aws-bedrock")
```
(Optional) If you have access to LangSmith, you can configure it to trace, monitor, and debug LangChain applications. If you don't have access, you can skip this section.
```shell
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
```
If you are inside this directory, you can spin up a LangServe instance directly by:
```shell
langchain serve
```
This will start the FastAPI app with a server running locally at [http://localhost:8000](http://localhost:8000)
You can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs) and access the playground at [http://127.0.0.1:8000/rag-aws-bedrock/playground](http://127.0.0.1:8000/rag-aws-bedrock/playground).
You can access the template from code with:
```python
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/rag-aws-bedrock")
```

@ -1,21 +1,86 @@
# rag-aws-kendra

This template is an application that utilizes Amazon Kendra, a machine learning powered search service, and Anthropic Claude for text generation. The application retrieves documents using a Retrieval chain to answer questions from your documents.

It uses the `boto3` library to connect with the Bedrock service.

For more context on building RAG applications with Amazon Kendra, check [this page](https://aws.amazon.com/blogs/machine-learning/quickly-build-high-accuracy-generative-ai-applications-on-enterprise-data-using-amazon-kendra-langchain-and-large-language-models/).

## Environment Setup

Please ensure that you have set up and configured `boto3` to work with your AWS account.

You can follow the guide [here](https://boto3.amazonaws.com/v1/documentation/api/latest/guide/quickstart.html#configuration).

You should also have a Kendra Index set up before using this template.

You can use [this CloudFormation template](https://github.com/aws-samples/amazon-kendra-langchain-extensions/blob/main/kendra_retriever_samples/kendra-docs-index.yaml) to create a sample index.

This includes sample data containing AWS online documentation for Amazon Kendra, Amazon Lex, and Amazon SageMaker. Alternatively, you can use your own Amazon Kendra index if you have indexed your own dataset. Launching the stack takes about 30 minutes, followed by about 15 minutes to synchronize and ingest the data, so wait about 45 minutes after launching it. Note the Index ID and AWS Region on the stack's Outputs tab.

The following environment variables need to be set:

* `AWS_DEFAULT_REGION` - This should reflect the correct AWS region. Default is `us-east-1`.
* `AWS_PROFILE` - This should reflect your AWS profile. Default is `default`.
* `KENDRA_INDEX_ID` - This should have the Index ID of the Kendra index. Note that the Index ID is a 36 character alphanumeric value that can be found in the index detail page.
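For example (the values below are placeholders; use your own region, profile, and index ID):

```shell
export AWS_DEFAULT_REGION=us-east-1
export AWS_PROFILE=default
export KENDRA_INDEX_ID=<your-kendra-index-id>
```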
## Usage
To use this package, you should first have the LangChain CLI installed:
```shell
pip install -U "langchain-cli[serve]"
```
To create a new LangChain project and install this as the only package, you can do:
```shell
langchain app new my-app --package rag-aws-kendra
```
If you want to add this to an existing project, you can just run:
```shell
langchain app add rag-aws-kendra
```
And add the following code to your `server.py` file:
```python
from rag_aws_kendra.chain import chain as rag_aws_kendra_chain
add_routes(app, rag_aws_kendra_chain, path="/rag-aws-kendra")
```
(Optional) Let's now configure LangSmith.
LangSmith will help us trace, monitor and debug LangChain applications.
LangSmith is currently in private beta, you can sign up [here](https://smith.langchain.com/).
If you don't have access, you can skip this section
```shell
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
```
If you are inside this directory, then you can spin up a LangServe instance directly by:
```shell
langchain serve
```
This will start the FastAPI app with a server running locally at
[http://localhost:8000](http://localhost:8000)
We can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)
We can access the playground at [http://127.0.0.1:8000/rag-aws-kendra/playground](http://127.0.0.1:8000/rag-aws-kendra/playground)
We can access the template from code with:
```python
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/rag-aws-kendra")
```

@ -1,5 +1,5 @@
[tool.poetry]
name = "rag_aws_kendra"
name = "rag-aws-kendra"
version = "0.0.1"
description = ""
authors = []

@ -1,29 +1,79 @@
# rag-chroma-private

This template performs private RAG with no reliance on external APIs.

It utilizes Ollama for the LLM, GPT4All for embeddings, and Chroma for the vectorstore.

The vectorstore is created in `chain.py` and by default indexes a [popular blog post on Agents](https://lilianweng.github.io/posts/2023-06-23-agent/) for question-answering.

## Environment Setup

To set up the environment, you need to download Ollama.

Follow the instructions [here](https://python.langchain.com/docs/integrations/chat/ollama).

You can choose the desired LLM with Ollama.

This template uses `llama2:7b-chat`, which can be accessed using `ollama pull llama2:7b-chat`.
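For example, from a terminal:

```shell
ollama pull llama2:7b-chat
```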
There are many other options available [here](https://ollama.ai/library).

This package also uses [GPT4All](https://python.langchain.com/docs/integrations/text_embedding/gpt4all) embeddings.

## Usage
To use this package, you should first have the LangChain CLI installed:
```shell
pip install -U "langchain-cli[serve]"
```
To create a new LangChain project and install this as the only package, you can do:
```shell
langchain app new my-app --package rag-chroma-private
```
If you want to add this to an existing project, you can just run:
```shell
langchain app add rag-chroma-private
```
And add the following code to your `server.py` file:
```python
from rag_chroma_private import chain as rag_chroma_private_chain
add_routes(app, rag_chroma_private_chain, path="/rag-chroma-private")
```
(Optional) Let's now configure LangSmith. LangSmith will help us trace, monitor and debug LangChain applications. LangSmith is currently in private beta, you can sign up [here](https://smith.langchain.com/). If you don't have access, you can skip this section
```shell
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
```
If you are inside this directory, then you can spin up a LangServe instance directly by:
```shell
langchain serve
```
This will start the FastAPI app with a server running locally at
[http://localhost:8000](http://localhost:8000)
We can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)
We can access the playground at [http://127.0.0.1:8000/rag-chroma-private/playground](http://127.0.0.1:8000/rag-chroma-private/playground)
We can access the template from code with:
```python
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/rag-chroma-private")
```
The package will create and add documents to the vector database in `chain.py`. By default, it will load a popular blog post on agents. However, you can choose from a large number of document loaders [here](https://python.langchain.com/docs/integrations/document_loaders).

@ -1,15 +1,68 @@
# rag-chroma

This template performs RAG using Chroma and OpenAI.

[Chroma](https://python.langchain.com/docs/integrations/vectorstores/chroma) is an open-source vector database.

The vectorstore is created in `chain.py` and by default indexes a [popular blog post on Agents](https://lilianweng.github.io/posts/2023-06-23-agent/) for question-answering. These documents can also be loaded from [many other sources](https://python.langchain.com/docs/integrations/document_loaders).

## Environment Setup

Set the `OPENAI_API_KEY` environment variable to access the OpenAI models.
## Usage
To use this package, you should first have the LangChain CLI installed:
```shell
pip install -U "langchain-cli[serve]"
```
To create a new LangChain project and install this as the only package, you can do:
```shell
langchain app new my-app --package rag-chroma
```
If you want to add this to an existing project, you can just run:
```shell
langchain app add rag-chroma
```
And add the following code to your `server.py` file:
```python
from rag_chroma import chain as rag_chroma_chain
add_routes(app, rag_chroma_chain, path="/rag-chroma")
```
(Optional) Let's now configure LangSmith.
LangSmith will help us trace, monitor and debug LangChain applications.
LangSmith is currently in private beta, you can sign up [here](https://smith.langchain.com/).
If you don't have access, you can skip this section
```shell
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
```
If you are inside this directory, then you can spin up a LangServe instance directly by:
```shell
langchain serve
```
This will start the FastAPI app with a server running locally at
[http://localhost:8000](http://localhost:8000)

We can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)
We can access the playground at [http://127.0.0.1:8000/rag-chroma/playground](http://127.0.0.1:8000/rag-chroma/playground)

We can access the template from code with:

```python
from langserve.client import RemoteRunnable

runnable = RemoteRunnable("http://localhost:8000/rag-chroma")
```
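A quick sanity check, reusing the question from the template's example notebook:

```python
runnable.invoke("Where did Harrison work")
```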

@ -7,21 +7,10 @@
"source": [
"## Run Template\n",
"\n",
"As shown in the README, add template and start server:\n",
"In `server.py`, set -\n",
"```\n",
"langchain app add rag-chroma\n",
"langchain serve\n",
"```\n",
"\n",
"We can now look at the endpoints:\n",
"\n",
"http://127.0.0.1:8000/docs#\n",
"\n",
"And specifically at our loaded template:\n",
"\n",
"http://127.0.0.1:8000/docs#/default/invoke_rag_chroma_invoke_post\n",
" \n",
"We can also use remote runnable to call it:"
"add_routes(app, chain_rag_conv, path=\"/rag-chroma\")\n",
"```"
]
},
{
@ -33,7 +22,7 @@
"source": [
"from langserve.client import RemoteRunnable\n",
"\n",
"rag_app = RemoteRunnable(\"http://localhost:8000/rag-chroma\")\n",
"rag_app = RemoteRunnable(\"http://localhost:8001/rag-chroma\")\n",
"rag_app.invoke(\"Where id Harrison work\")"
]
}

@ -1,15 +1,70 @@
# rag-codellama-fireworks

This template performs RAG on a codebase.

It uses codellama-34b hosted by Fireworks' [LLM inference API](https://blog.fireworks.ai/accelerating-code-completion-with-fireworks-fast-llm-inference-f4e8b5ec534a). See also [this post](https://blog.fireworks.ai/simplifying-code-infilling-with-code-llama-and-fireworks-ai-92c9bb06e29c) on code infilling with Code Llama for additional context.

## Environment Setup

Set the `FIREWORKS_API_KEY` environment variable to access the Fireworks models.

You can obtain it from [here](https://app.fireworks.ai/login?callbackURL=https://app.fireworks.ai).
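For example:

```shell
export FIREWORKS_API_KEY=<your-fireworks-api-key>
```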
## Usage
To use this package, you should first have the LangChain CLI installed:
```shell
pip install -U "langchain-cli[serve]"
```
To create a new LangChain project and install this as the only package, you can do:
```shell
langchain app new my-app --package rag-codellama-fireworks
```
If you want to add this to an existing project, you can just run:
```shell
langchain app add rag-codellama-fireworks
```
And add the following code to your `server.py` file:
```python
from rag_codellama_fireworks import chain as rag_codellama_fireworks_chain
add_routes(app, rag_codellama_fireworks_chain, path="/rag-codellama-fireworks")
```
(Optional) Let's now configure LangSmith.
LangSmith will help us trace, monitor and debug LangChain applications.
LangSmith is currently in private beta, you can sign up [here](https://smith.langchain.com/).
If you don't have access, you can skip this section
```shell
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
```
If you are inside this directory, then you can spin up a LangServe instance directly by:
```shell
langchain serve
```
This will start the FastAPI app with a server running locally at
[http://localhost:8000](http://localhost:8000)
We can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)
We can access the playground at [http://127.0.0.1:8000/rag-codellama-fireworks/playground](http://127.0.0.1:8000/rag-codellama-fireworks/playground)
We can access the template from code with:
```python
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/rag-codellama-fireworks")
```

@ -1,13 +1,70 @@
# rag-conversation

This template is used for [conversational](https://python.langchain.com/docs/expression_language/cookbook/retrieval#conversational-retrieval-chain) [retrieval](https://python.langchain.com/docs/use_cases/question_answering/), which is one of the most popular LLM use-cases.

It passes both a conversation history and retrieved documents into an LLM for synthesis.

## Environment Setup

This template uses Pinecone as a vectorstore and requires that `PINECONE_API_KEY`, `PINECONE_ENVIRONMENT`, and `PINECONE_INDEX` are set.

Set the `OPENAI_API_KEY` environment variable to access the OpenAI models.
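For example (placeholder values shown):

```shell
export PINECONE_API_KEY=<your-pinecone-api-key>
export PINECONE_ENVIRONMENT=<your-pinecone-environment>
export PINECONE_INDEX=<your-pinecone-index>
export OPENAI_API_KEY=<your-openai-api-key>
```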
## Usage
To use this package, you should first have the LangChain CLI installed:
```shell
pip install -U "langchain-cli[serve]"
```
To create a new LangChain project and install this as the only package, you can do:
```shell
langchain app new my-app --package rag-conversation
```
If you want to add this to an existing project, you can just run:
```shell
langchain app add rag-conversation
```
And add the following code to your `server.py` file:
```python
from rag_conversation import chain as rag_conversation_chain
add_routes(app, rag_conversation_chain, path="/rag-conversation")
```
(Optional) Let's now configure LangSmith.
LangSmith will help us trace, monitor and debug LangChain applications.
LangSmith is currently in private beta, you can sign up [here](https://smith.langchain.com/).
If you don't have access, you can skip this section
```shell
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
```
If you are inside this directory, then you can spin up a LangServe instance directly by:
```shell
langchain serve
```
This will start the FastAPI app with a server running locally at
[http://localhost:8000](http://localhost:8000)
We can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)
We can access the playground at [http://127.0.0.1:8000/rag-conversation/playground](http://127.0.0.1:8000/rag-conversation/playground)
We can access the template from code with:

```python
from langserve.client import RemoteRunnable

runnable = RemoteRunnable("http://localhost:8000/rag-conversation")
```

@ -1,59 +1,89 @@
# rag-elasticsearch

This template performs RAG using Elasticsearch.

It relies on the sentence transformer `MiniLM-L6-v2` for embedding passages and questions.

## Environment Setup

Set the `OPENAI_API_KEY` environment variable to access the OpenAI models.

There are a number of ways to run Elasticsearch. One option is to create a free trial deployment on [Elastic Cloud](https://cloud.elastic.co/registration?utm_source=langchain&utm_content=langserve).

To connect to your Elasticsearch instance, use the following environment variables:
```bash
export ELASTIC_CLOUD_ID=<CLOUD_ID>
export ELASTIC_USERNAME=<CLOUD_USERNAME>
export ELASTIC_PASSWORD=<CLOUD_PASSWORD>
```
For local development with Docker, use:
```bash
export ES_URL="http://localhost:9200"
```
To run Elasticsearch locally with Docker, you can use:

```bash
docker run -p 9200:9200 \
 -e "discovery.type=single-node" \
 -e "xpack.security.enabled=false" \
 -e "xpack.security.http.ssl.enabled=false" \
 -e "xpack.license.self_generated.type=trial" \
 docker.elastic.co/elasticsearch/elasticsearch:8.10.0
```

This will run Elasticsearch on port 9200. You can then check that it is running by visiting [http://localhost:9200](http://localhost:9200).

## Usage

To use this package, you should first have the LangChain CLI installed:

```shell
pip install -U "langchain-cli[serve]"
```

To create a new LangChain project and install this as the only package, you can do:

```shell
langchain app new my-app --package rag-elasticsearch
```

If you want to add this to an existing project, you can just run:

```shell
langchain app add rag-elasticsearch
```

And add the following code to your `server.py` file:

```python
from rag_elasticsearch import chain as rag_elasticsearch_chain

add_routes(app, rag_elasticsearch_chain, path="/rag-elasticsearch")
```
(Optional) Let's now configure LangSmith.
LangSmith will help us trace, monitor and debug LangChain applications.
LangSmith is currently in private beta, you can sign up [here](https://smith.langchain.com/).
If you don't have access, you can skip this section
```shell
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
```
If you are inside this directory, then you can spin up a LangServe instance directly by:
```shell
langchain serve
```
This will start the FastAPI app with a server running locally at
[http://localhost:8000](http://localhost:8000)

We can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)
We can access the playground at [http://127.0.0.1:8000/rag-elasticsearch/playground](http://127.0.0.1:8000/rag-elasticsearch/playground)

We can access the template from code with:

```python
from langserve.client import RemoteRunnable

runnable = RemoteRunnable("http://localhost:8000/rag-elasticsearch")
```

For loading the fictional workplace documents, run the following command from the root of this repository:

```bash
python ./data/load_documents.py
```

However, you can choose from a large number of document loaders [here](https://python.langchain.com/docs/integrations/document_loaders).

@ -15,7 +15,7 @@ jq = "^1.6.0"
tiktoken = "^0.5.1"
[tool.langserve]
export_module = "rag-elasticsearch"
export_module = "rag_elasticsearch"
export_attr = "chain"

@ -1,5 +1,69 @@
# rag-fusion

> RAG-Fusion, a search methodology that aims to bridge the gap between traditional search paradigms and the multifaceted dimensions of human queries. Inspired by the capabilities of Retrieval Augmented Generation (RAG), this project goes a step further by employing multiple query generation and Reciprocal Rank Fusion to re-rank search results.

This template enables RAG fusion using a re-implementation of the project found [here](https://github.com/Raudaschl/rag-fusion), with all credit to the original author.

It performs multiple query generation and Reciprocal Rank Fusion to re-rank search results.
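For intuition, here is a minimal sketch of Reciprocal Rank Fusion (not the template's exact implementation; `k=60` and the document IDs are illustrative):

```python
from collections import defaultdict

def reciprocal_rank_fusion(ranked_lists, k=60):
    """Fuse several ranked lists of document IDs into one ranking.

    Each document earns 1 / (k + rank) per list it appears in; the
    constant k (60 by convention) damps the impact of very top ranks.
    """
    scores = defaultdict(float)
    for docs in ranked_lists:
        for rank, doc_id in enumerate(docs, start=1):
            scores[doc_id] += 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

# Hypothetical results for three generated queries.
fused = reciprocal_rank_fusion([
    ["doc1", "doc2", "doc3"],
    ["doc2", "doc1", "doc4"],
    ["doc3", "doc2", "doc5"],
])
print(fused)  # "doc2" comes first because it ranks highly in every list
```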
## Environment Setup
Set the `OPENAI_API_KEY` environment variable to access the OpenAI models.
## Usage
To use this package, you should first have the LangChain CLI installed:
```shell
pip install -U "langchain-cli[serve]"
```
To create a new LangChain project and install this as the only package, you can do:
```shell
langchain app new my-app --package rag-fusion
```
If you want to add this to an existing project, you can just run:
```shell
langchain app add rag-fusion
```
And add the following code to your `server.py` file:
```python
from rag_fusion import chain as rag_fusion_chain
add_routes(app, rag_fusion_chain, path="/rag-fusion")
```
(Optional) Let's now configure LangSmith.
LangSmith will help us trace, monitor and debug LangChain applications.
LangSmith is currently in private beta, you can sign up [here](https://smith.langchain.com/).
If you don't have access, you can skip this section
```shell
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
```
If you are inside this directory, then you can spin up a LangServe instance directly by:
```shell
langchain serve
```
This will start the FastAPI app with a server running locally at
[http://localhost:8000](http://localhost:8000)
We can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)
We can access the playground at [http://127.0.0.1:8000/rag-fusion/playground](http://127.0.0.1:8000/rag-fusion/playground)
We can access the template from code with:
```python
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/rag-fusion")
```

@ -1,5 +1,5 @@
[tool.poetry]
name = "rag_fusion"
name = "rag-fusion"
version = "0.0.1"
description = ""
authors = []

@ -1,17 +1,74 @@
# rag-mongo

This template performs RAG using MongoDB and OpenAI.

It connects to MongoDB Atlas Vector Search.

## Environment Setup

Set the `MONGO_URI` environment variable for connecting to MongoDB Atlas Vector Search.

Set the `OPENAI_API_KEY` environment variable to access the OpenAI models.
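For example (placeholder values shown):

```shell
export MONGO_URI=<your-mongodb-atlas-connection-string>
export OPENAI_API_KEY=<your-openai-api-key>
```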
## Usage
To use this package, you should first have the LangChain CLI installed:
```shell
pip install -U "langchain-cli[serve]"
```
To create a new LangChain project and install this as the only package, you can do:
```shell
langchain app new my-app --package rag-mongo
```
If you want to add this to an existing project, you can just run:
```shell
langchain app add rag-mongo
```
And add the following code to your `server.py` file:
```python
from rag_mongo import chain as rag_mongo_chain
add_routes(app, rag_mongo_chain, path="/rag-mongo")
```
(Optional) Let's now configure LangSmith.
LangSmith will help us trace, monitor and debug LangChain applications.
LangSmith is currently in private beta, you can sign up [here](https://smith.langchain.com/).
If you don't have access, you can skip this section
```shell
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
```
If you are inside this directory, then you can spin up a LangServe instance directly by:
```shell
langchain serve
```
This will start the FastAPI app with a server running locally at
[http://localhost:8000](http://localhost:8000)
We can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)
We can access the playground at [http://127.0.0.1:8000/rag-mongo/playground](http://127.0.0.1:8000/rag-mongo/playground)

We can access the template from code with:

```python
from langserve.client import RemoteRunnable

runnable = RemoteRunnable("http://localhost:8000/rag-mongo")
```

For additional context, please refer to [this notebook](https://colab.research.google.com/drive/1cr2HBAHyBmwKUerJq2if0JaNhy-hIq7I#scrollTo=TZp7_CBfxTOB).

@ -1,51 +1,69 @@
# rag-pinecone-multi-query

This template performs RAG using Pinecone and OpenAI with a [multi-query retriever](https://python.langchain.com/docs/modules/data_connection/retrievers/MultiQueryRetriever).

It uses an LLM to generate multiple queries from different perspectives based on the user's input query.

For each query, it retrieves a set of relevant documents and takes the unique union across all queries for answer synthesis.

## Environment Setup

This template uses Pinecone as a vectorstore and requires that `PINECONE_API_KEY`, `PINECONE_ENVIRONMENT`, and `PINECONE_INDEX` are set.

Set the `OPENAI_API_KEY` environment variable to access the OpenAI models.
## Usage
To use this package, you should first install the LangChain CLI:

```shell
pip install -U "langchain-cli[serve]"
```

To create a new LangChain project and install this package, do:

```shell
langchain app new my-app --package rag-pinecone-multi-query
```

To add this package to an existing project, run:

```shell
langchain app add rag-pinecone-multi-query
```

And add the following code to your `server.py` file:

```python
from rag_pinecone_multi_query import chain as rag_pinecone_multi_query_chain

add_routes(app, rag_pinecone_multi_query_chain, path="/rag-pinecone-multi-query")
```
(Optional) Now, let's configure LangSmith. LangSmith will help us trace, monitor, and debug LangChain applications. LangSmith is currently in private beta, you can sign up [here](https://smith.langchain.com/). If you don't have access, you can skip this section
```shell
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
```
See `rag_pinecone_multi_query.ipynb` for example usage -
If you are inside this directory, then you can spin up a LangServe instance directly by:
```shell
langchain serve
```
This will start the FastAPI app with a server running locally at [http://localhost:8000](http://localhost:8000)
You can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)
You can access the playground at [http://127.0.0.1:8000/rag-pinecone-multi-query/playground](http://127.0.0.1:8000/rag-pinecone-multi-query/playground)
To access the template from code, use:
```python
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/rag-pinecone-multi-query")
```
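For instance, reusing the example question from `rag_pinecone_multi_query.ipynb`:

```python
runnable.invoke("What are the different types of agent memory")
```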

@ -1,23 +1,73 @@
# rag-pinecone-rerank

This template performs RAG using Pinecone and OpenAI along with [Cohere to perform re-ranking](https://txt.cohere.com/rerank/) on returned documents.

Re-ranking provides a way to rank retrieved documents using specified filters or criteria.

## Environment Setup

This template uses Pinecone as a vectorstore and requires that `PINECONE_API_KEY`, `PINECONE_ENVIRONMENT`, and `PINECONE_INDEX` are set.

Set the `OPENAI_API_KEY` environment variable to access the OpenAI models.

Set the `COHERE_API_KEY` environment variable to access the Cohere ReRank endpoint.

## Usage

To use this package, you should first have the LangChain CLI installed:
```shell
pip install -U "langchain-cli[serve]"
```
To create a new LangChain project and install this as the only package, you can do:
```shell
langchain app new my-app --package rag-pinecone-rerank
```
If you want to add this to an existing project, you can just run:
```shell
langchain app add rag-pinecone-rerank
```
And add the following code to your `server.py` file:
```python
from rag_pinecone_rerank import chain as rag_pinecone_rerank_chain
add_routes(app, rag_pinecone_rerank_chain, path="/rag-pinecone-rerank")
```
(Optional) Let's now configure LangSmith.
LangSmith will help us trace, monitor and debug LangChain applications.
LangSmith is currently in private beta, you can sign up [here](https://smith.langchain.com/).
If you don't have access, you can skip this section
```shell
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
```
If you are inside this directory, then you can spin up a LangServe instance directly by:
```shell
langchain serve
```
This will start the FastAPI app with a server running locally at
[http://localhost:8000](http://localhost:8000)
We can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)
We can access the playground at [http://127.0.0.1:8000/rag-pinecone-rerank/playground](http://127.0.0.1:8000/rag-pinecone-rerank/playground)
We can access the template from code with:
```python
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/rag-pinecone-rerank")
```

@ -1,17 +1,69 @@
# rag-pinecone

This template performs RAG using Pinecone and OpenAI.

It connects to a hosted Pinecone vectorstore.

## Environment Setup

This template uses Pinecone as a vectorstore and requires that `PINECONE_API_KEY`, `PINECONE_ENVIRONMENT`, and `PINECONE_INDEX` are set.

Set the `OPENAI_API_KEY` environment variable to access the OpenAI models.
## Usage
To use this package, you should first have the LangChain CLI installed:
```shell
pip install -U "langchain-cli[serve]"
```
To create a new LangChain project and install this as the only package, you can do:
```shell
langchain app new my-app --package rag-pinecone
```
If you want to add this to an existing project, you can just run:
```shell
langchain app add rag-pinecone
```
And add the following code to your `server.py` file:
```python
from rag_pinecone import chain as rag_pinecone_chain
add_routes(app, rag_pinecone_chain, path="/rag-pinecone")
```
(Optional) Let's now configure LangSmith.
LangSmith will help us trace, monitor and debug LangChain applications.
LangSmith is currently in private beta, you can sign up [here](https://smith.langchain.com/).
If you don't have access, you can skip this section
```shell
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
```
If you are inside this directory, then you can spin up a LangServe instance directly by:
```shell
langchain serve
```
This will start the FastAPI app with a server running locally at
[http://localhost:8000](http://localhost:8000)

We can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)
We can access the playground at [http://127.0.0.1:8000/rag-pinecone/playground](http://127.0.0.1:8000/rag-pinecone/playground)

We can access the template from code with:

```python
from langserve.client import RemoteRunnable

runnable = RemoteRunnable("http://localhost:8000/rag-pinecone")
```

@ -1,18 +1,15 @@
# rag-redis

This template performs RAG using Redis and OpenAI on financial 10k filings docs (for Nike).

It relies on the sentence transformer `all-MiniLM-L6-v2` for embedding chunks of the pdf and user questions.

## Environment Setup

Set the `OPENAI_API_KEY` environment variable to access the OpenAI models.

To run Redis, you can create a free database on [Redis Cloud](https://redis.com/try-free) (no credit card information is required); once it is created, find the connection credentials under the database's "Connect" button. Alternatively, you can run Redis locally (see below).

The following Redis environment variables need to be set:
```bash
export REDIS_HOST=<YOUR REDIS HOST>
@ -21,29 +18,6 @@ export REDIS_USER = <YOUR REDIS USER NAME>
export REDIS_PASSWORD=<YOUR REDIS PASSWORD>
```
For larger use cases (greater than 30 MB of data), you can create a Fixed or Flexible billing subscription, which can scale with your dataset size.

For local development, you can also run Redis Stack with Docker:

```bash
docker run -p 6379:6379 -p 8001:8001 redis/redis-stack:latest
```

This will run Redis on port 6379. You can then check that it is running by visiting the RedisInsight GUI at [http://localhost:8001](http://localhost:8001). This is the connection that the application will try to use by default -- local dockerized Redis.

To load the financial 10k pdf (for Nike) into the vectorstore, run the following command from the root of this repository:

```bash
poetry shell
python ingest.py
```
## Supported Settings
We use a variety of environment variables to configure this application
@ -57,21 +31,61 @@ We use a variety of environment variables to configure this application
| `REDIS_URL` | Full URL for connecting to Redis | `None`, Constructed from user, password, host, and port if not provided |
| `INDEX_NAME` | Name of the vector index | "rag-redis" |
## Usage

To use this package, you should first have the LangChain CLI installed:

```shell
pip install -U "langchain-cli[serve]"
```

To create a new LangChain project and install this as the only package, you can do:

```shell
langchain app new my-app --package rag-redis
```

If you want to add this to an existing project, you can just run:

```shell
langchain app add rag-redis
```

And add the following code to your `server.py` file:

```python
from rag_redis.chain import chain as rag_redis_chain

add_routes(app, rag_redis_chain, path="/rag-redis")
```
(Optional) Let's now configure LangSmith.
LangSmith will help us trace, monitor and debug LangChain applications.
LangSmith is currently in private beta, you can sign up [here](https://smith.langchain.com/).
If you don't have access, you can skip this section
```shell
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
```
If you are inside this directory, then you can spin up a LangServe instance directly by:
```shell
langchain serve
```
This will start the FastAPI app with a server running locally at
[http://localhost:8000](http://localhost:8000)
We can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)
We can access the playground at [http://127.0.0.1:8000/rag-redis/playground](http://127.0.0.1:8000/rag-redis/playground)
We can access the template from code with:
```python
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/rag-redis")
```

@ -1,47 +1,75 @@
# rag-semi-structured

This template performs RAG on semi-structured data, such as a PDF with text and tables.

It uses [Chroma](https://python.langchain.com/docs/integrations/vectorstores/chroma), an open-source vector database, and will create and add documents to the vectorstore in `chain.py`.

## Environment Setup

Set the `OPENAI_API_KEY` environment variable to access the OpenAI models.

This template uses [Unstructured](https://unstructured-io.github.io/unstructured/) for PDF parsing; in particular, [partition_pdf](https://unstructured-io.github.io/unstructured/bricks/partition.html#partition-pdf) is used to extract both table and text elements. This requires some system-level package installations.

On Mac, you can install the necessary packages with the following:

```shell
brew install tesseract poppler
```
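For reference, a minimal sketch of how table and text elements can be separated with Unstructured (the file name is a placeholder; the template's own loading logic lives in `chain.py`):

```python
from unstructured.partition.pdf import partition_pdf

# Partition a PDF into elements, keeping table structure where possible.
elements = partition_pdf(filename="example.pdf", infer_table_structure=True)

# Split table elements from narrative text for separate downstream handling.
tables = [el for el in elements if el.category == "Table"]
texts = [el for el in elements if el.category != "Table"]
```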
## Usage

To use this package, you should first have the LangChain CLI installed:

```shell
pip install -U "langchain-cli[serve]"
```

To create a new LangChain project and install this as the only package, you can do:

```shell
langchain app new my-app --package rag-semi-structured
```

If you want to add this to an existing project, you can just run:

```shell
langchain app add rag-semi-structured
```

And add the following code to your `server.py` file:

```python
from rag_semi_structured import chain as rag_semi_structured_chain

add_routes(app, rag_semi_structured_chain, path="/rag-semi-structured")
```
(Optional) Let's now configure LangSmith.
LangSmith will help us trace, monitor and debug LangChain applications.
LangSmith is currently in private beta, you can sign up [here](https://smith.langchain.com/).
If you don't have access, you can skip this section
```shell
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
```
If you are inside this directory, then you can spin up a LangServe instance directly by:
```shell
langchain serve
```
This will start the FastAPI app with a server running locally at
[http://localhost:8000](http://localhost:8000)
We can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)
We can access the playground at [http://127.0.0.1:8000/rag-semi-structured/playground](http://127.0.0.1:8000/rag-semi-structured/playground)
We can access the template from code with:
```python
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/rag-semi-structured")
```
For more details on how to connect to the template, refer to the Jupyter notebook `rag_semi_structured`.

@ -7,21 +7,10 @@
"source": [
"## Run Template\n",
"\n",
"As shown in the README, add template and start server:\n",
"In `server.py`, set -\n",
"```\n",
"langchain app add rag-semi-structured\n",
"langchain serve\n",
"```\n",
"\n",
"We can now look at the endpoints:\n",
"\n",
"http://127.0.0.1:8000/docs#\n",
"\n",
"And specifically at our loaded template:\n",
"\n",
"http://127.0.0.1:8000/docs#/default/invoke_rag_chroma_private_invoke_post\n",
" \n",
"We can also use remote runnable to call it:"
"add_routes(app, chain_rag_conv, path=\"/rag-semi-structured\")\n",
"```"
]
},
{
@ -33,7 +22,7 @@
"source": [
"from langserve.client import RemoteRunnable\n",
"\n",
"rag_app = RemoteRunnable(\"http://localhost:8000/rag-chroma-private\")\n",
"rag_app = RemoteRunnable(\"http://localhost:8001/rag-semi-structured\")\n",
"rag_app.invoke(\"How does agent memory work?\")"
]
}

@ -1,15 +1,25 @@
# rag_supabase

This template performs RAG with Supabase.

[Supabase](https://supabase.com/docs) is an open-source Firebase alternative. It is built on top of [PostgreSQL](https://en.wikipedia.org/wiki/PostgreSQL), a free and open-source relational database management system (RDBMS), and uses [pgvector](https://github.com/pgvector/pgvector) to store embeddings within your tables.
## Environment Setup
Set the `OPENAI_API_KEY` environment variable to access the OpenAI models.
To get your `OPENAI_API_KEY`, navigate to [API keys](https://platform.openai.com/account/api-keys) on your OpenAI account and create a new secret key.
To find your `SUPABASE_URL` and `SUPABASE_SERVICE_KEY`, head to your Supabase project's [API settings](https://supabase.com/dashboard/project/_/settings/api).
- `SUPABASE_URL` corresponds to the Project URL
- `SUPABASE_SERVICE_KEY` corresponds to the `service_role` API key
```shell
export SUPABASE_URL=
export SUPABASE_SERVICE_KEY=
export OPENAI_API_KEY=
```
## Setup Supabase Database
@ -61,43 +71,63 @@ Use these steps to setup your Supabase database if you haven't already.
## Usage

First, install the LangChain CLI:

```shell
pip install -U "langchain-cli[serve]"
```

To create a new LangChain project and install this as the only package, you can do:

```shell
langchain app new my-app --package rag_supabase
```

If you want to add this to an existing project, you can just run:

```shell
langchain app add rag_supabase
```
Install [`python-dotenv`](https://github.com/theskumar/python-dotenv), which we will use to load the environment variables into the app:

```shell
poetry add python-dotenv
```

And add the following code to your `server.py` file so that the environment variables are loaded:

```python
from dotenv import load_dotenv

load_dotenv()
```

(Optional) Let's now configure LangSmith.
LangSmith will help us trace, monitor and debug LangChain applications.
LangSmith is currently in private beta, you can sign up [here](https://smith.langchain.com/).
If you don't have access, you can skip this section

```shell
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
```

If you are inside this directory, then you can spin up a LangServe instance directly by:

```shell
langchain serve
```

This will start the FastAPI app with a server running locally at
[http://localhost:8000](http://localhost:8000)

We can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)
We can access the playground at [http://127.0.0.1:8000/rag_supabase/playground](http://127.0.0.1:8000/rag_supabase/playground)

We can access the template from code with:

```python
from langserve.client import RemoteRunnable

runnable = RemoteRunnable("http://localhost:8000/rag_supabase")
```
TODO: Add details about setting up the Supabase database

@ -1,5 +1,5 @@
[tool.poetry]
name = "rag_supabase"
name = "rag-supabase"
version = "0.1.0"
description = ""
authors = ["Greg Richardson <greg@supabase.io>"]

@ -1,16 +1,71 @@
# rag-weaviate

This template performs RAG with Weaviate and OpenAI. It connects to a hosted Weaviate vectorstore.

## Environment Setup

Set the `OPENAI_API_KEY` environment variable to access the OpenAI models.

Also, ensure the following environment variables are set:
* `WEAVIATE_ENVIRONMENT`
* `WEAVIATE_API_KEY`

## Usage
To use this package, you should first have the LangChain CLI installed:
```shell
pip install -U "langchain-cli[serve]"
```
To create a new LangChain project and install this as the only package, you can do:
```shell
langchain app new my-app --package rag-weaviate
```
If you want to add this to an existing project, you can just run:
```shell
langchain app add rag-weaviate
```
And add the following code to your `server.py` file:
```python
from rag_weaviate import chain as rag_weaviate_chain
add_routes(app, rag_weaviate_chain, path="/rag-weaviate")
```
(Optional) Let's now configure LangSmith.
LangSmith will help us trace, monitor and debug LangChain applications.
LangSmith is currently in private beta, you can sign up [here](https://smith.langchain.com/).
If you don't have access, you can skip this section
```shell
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
```
If you are inside this directory, then you can spin up a LangServe instance directly by:
```shell
langchain serve
```
This will start the FastAPI app with a server running locally at
[http://localhost:8000](http://localhost:8000)
We can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)
We can access the playground at [http://127.0.0.1:8000/rag-weaviate/playground](http://127.0.0.1:8000/rag-weaviate/playground)
We can access the template from code with:
```python
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/rag-weaviate")
```

@ -1,5 +1,5 @@
[tool.poetry]
name = "rag_weaviate"
name = "rag-weaviate"
version = "0.1.0"
description = ""
authors = ["Erika Cardenas <erika@weaviate.io"]

@ -1,7 +1,66 @@
# rewrite_retrieve_read

This template implements a method for query transformation (re-writing) from the paper [Query Rewriting for Retrieval-Augmented Large Language Models](https://arxiv.org/pdf/2305.14283.pdf) to optimize for RAG.

> Because the original query can not be always optimal to retrieve for the LLM, especially in the real world... we first prompt an LLM to rewrite the queries, then conduct retrieval-augmented reading

It shows how you can easily do this with LangChain Expression Language.
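For intuition, a minimal sketch of the rewrite step in LCEL (the prompt wording and model choice here are illustrative, not the template's exact code):

```python
from langchain.chat_models import ChatOpenAI
from langchain.prompts import ChatPromptTemplate
from langchain.schema.output_parser import StrOutputParser

# Ask the LLM to rewrite the user's question into a better search query.
rewrite_prompt = ChatPromptTemplate.from_template(
    "Provide a better search query for a web search engine to answer the "
    "given question. Question: {question}"
)

rewriter = rewrite_prompt | ChatOpenAI(temperature=0) | StrOutputParser()

better_query = rewriter.invoke({"question": "what is langchain?"})
# The rewritten query is then passed to a retriever, and the retrieved
# context plus the original question go to the LLM for the final answer.
```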
## Environment Setup
Set the `OPENAI_API_KEY` environment variable to access the OpenAI models.
## Usage
To use this package, you should first have the LangChain CLI installed:
```shell
pip install -U "langchain-cli[serve]"
```
To create a new LangChain project and install this as the only package, you can do:
```shell
langchain app new my-app --package rewrite_retrieve_read
```
If you want to add this to an existing project, you can just run:
```shell
langchain app add rewrite_retrieve_read
```
And add the following code to your `server.py` file:
```python
from rewrite_retrieve_read.chain import chain as rewrite_retrieve_read_chain
add_routes(app, rewrite_retrieve_read_chain, path="/rewrite-retrieve-read")
```
(Optional) Let's now configure LangSmith.
LangSmith will help us trace, monitor and debug LangChain applications.
LangSmith is currently in private beta, you can sign up [here](https://smith.langchain.com/).
If you don't have access, you can skip this section
```shell
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
```
If you are inside this directory, then you can spin up a LangServe instance directly by:
```shell
langchain serve
```
This will start the FastAPI app with a server running locally at
[http://localhost:8000](http://localhost:8000)

We can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)
We can access the playground at [http://127.0.0.1:8000/rewrite-retrieve-read/playground](http://127.0.0.1:8000/rewrite-retrieve-read/playground)
We can access the template from code with:
```python
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/rewrite-retrieve-read")
```

@ -1,5 +1,5 @@
[tool.poetry]
name = "rewrite_retrieve_read"
name = "rewrite-retrieve-read"
version = "0.0.1"
description = ""
authors = []

@ -1,15 +1,28 @@
# self-query-supabase

This template allows natural language structured querying of Supabase. You'll be able to use natural language to generate a structured query against the database via [self-querying](https://python.langchain.com/docs/modules/data_connection/retrievers/self_query).

[Supabase](https://supabase.com/docs) is an open-source alternative to Firebase, built on top of [PostgreSQL](https://en.wikipedia.org/wiki/PostgreSQL).

It uses [pgvector](https://github.com/pgvector/pgvector) to store embeddings within your tables.
## Environment Setup
Set the `OPENAI_API_KEY` environment variable to access the OpenAI models.
To get your `OPENAI_API_KEY`, navigate to [API keys](https://platform.openai.com/account/api-keys) on your OpenAI account and create a new secret key.
To find your `SUPABASE_URL` and `SUPABASE_SERVICE_KEY`, head to your Supabase project's [API settings](https://supabase.com/dashboard/project/_/settings/api).
- `SUPABASE_URL` corresponds to the Project URL
- `SUPABASE_SERVICE_KEY` corresponds to the `service_role` API key
```shell
export SUPABASE_URL=
export SUPABASE_SERVICE_KEY=
export OPENAI_API_KEY=
```
## Setup Supabase Database
@ -57,47 +70,59 @@ Use these steps to setup your Supabase database if you haven't already.
$$;
```
## Usage

To use this package, install the LangChain CLI first:

```shell
pip install -U "langchain-cli[serve]"
```

Create a new LangChain project and install this package as the only one:

```shell
langchain app new my-app --package self-query-supabase
```

To add this to an existing project, run:

```shell
langchain app add self-query-supabase
```

Add the following code to your `server.py` file:

```python
from self_query_supabase import chain as self_query_supabase_chain

add_routes(app, self_query_supabase_chain, path="/self-query-supabase")
```
_.gitignore_
(Optional) If you have access to LangSmith, configure it to help trace, monitor and debug LangChain applications. If you don't have access, skip this section.
```
.env
```shell
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
```
Install [`python-dotenv`](https://github.com/theskumar/python-dotenv) which we will use to load the environment variables into the app:
If you are inside this directory, then you can spin up a LangServe instance directly by:
```shell
poetry add python-dotenv
langchain serve
```
Finally, call `load_dotenv()` in `server.py`.
This will start the FastAPI app with a server running locally at
[http://localhost:8000](http://localhost:8000)
_app/server.py_
You can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)
Access the playground at [http://127.0.0.1:8000/self-query-supabase/playground](http://127.0.0.1:8000/self-query-supabase/playground)
Access the template from code with:
```python
from dotenv import load_dotenv
from langserve.client import RemoteRunnable
load_dotenv()
runnable = RemoteRunnable("http://localhost:8000/self-query-supabase")
```
TODO: Instructions to set up the Supabase database and install the package.

@ -1,5 +1,5 @@
[tool.poetry]
name = "self_query_supabase"
name = "self-query-supabase"
version = "0.1.0"
description = ""
authors = ["Greg Richardson <greg@supabase.io>"]

@ -1,21 +1,73 @@
# sql-llama2
This template enables a user to interact with a SQL database using natural language.
It uses LLaMA2-13b hosted by [Replicate](https://python.langchain.com/docs/integrations/llms/replicate), but can be adapted to any API that supports LLaMA2, including [Fireworks](https://python.langchain.com/docs/integrations/chat/fireworks).
The template includes an example database of 2023 NBA rosters.
For more information on how to build this database, see [here](https://github.com/facebookresearch/llama-recipes/blob/main/demo_apps/StructuredLlama.ipynb).
## Environment Setup
Ensure the `REPLICATE_API_TOKEN` is set in your environment.
## Usage
To use this package, you should first have the LangChain CLI installed:
```shell
pip install -U "langchain-cli[serve]"
```
To create a new LangChain project and install this as the only package, you can do:
```shell
langchain app new my-app --package sql-llama2
```
If you want to add this to an existing project, you can just run:
```shell
langchain app add sql-llama2
```
And add the following code to your `server.py` file:
```python
from sql_llama2 import chain as sql_llama2_chain
add_routes(app, sql_llama2_chain, path="/sql-llama2")
```
(Optional) Let's now configure LangSmith.
LangSmith will help us trace, monitor and debug LangChain applications.
LangSmith is currently in private beta; you can sign up [here](https://smith.langchain.com/).
If you don't have access, you can skip this section.
```shell
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
```
If you are inside this directory, then you can spin up a LangServe instance directly by:
```shell
langchain serve
```
This will start the FastAPI app with a server running locally at
[http://localhost:8000](http://localhost:8000)
We can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)
We can access the playground at [http://127.0.0.1:8000/sql-llama2/playground](http://127.0.0.1:8000/sql-llama2/playground)
We can access the template from code with:
```python
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/sql-llama2")
```
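Once the server is up, you can ask roster questions through the runnable. The `question` input key below is an assumption about the chain's schema, so check `chain.py` if it differs:
```python
# Hypothetical input key; verify against the template's chain.py.
print(runnable.invoke({"question": "How many players are on the Los Angeles Lakers roster?"}))
```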

@ -1,14 +1,15 @@
# sql-llamacpp
This template enables a user to interact with a SQL database using natural language.
It uses [Mistral-7b](https://mistral.ai/news/announcing-mistral-7b/) via [llama.cpp](https://github.com/ggerganov/llama.cpp) to run inference locally on a Mac laptop.
## Environment Setup
To set up the environment, use the following steps:
```shell
```shell
wget https://github.com/conda-forge/miniforge/releases/latest/download/Miniforge3-MacOSX-arm64.sh
bash Miniforge3-MacOSX-arm64.sh
conda create -n llama python=3.9.16
@ -16,14 +17,61 @@
conda activate /Users/rlm/miniforge3/envs/llama
CMAKE_ARGS="-DLLAMA_METAL=on" FORCE_CMAKE=1 pip install -U llama-cpp-python --no-cache-dir
```
## Usage
To use this package, you should first have the LangChain CLI installed:
```shell
pip install -U "langchain-cli[serve]"
```
To create a new LangChain project and install this as the only package, you can do:
```shell
langchain app new my-app --package sql-llamacpp
```
If you want to add this to an existing project, you can just run:
```shell
langchain app add sql-llamacpp
```
And add the following code to your `server.py` file:
```python
from sql_llamacpp import chain as sql_llamacpp_chain
add_routes(app, sql_llamacpp_chain, path="/sql-llamacpp")
```
The package will download the Mistral-7b model from [here](https://huggingface.co/TheBloke/Mistral-7B-Instruct-v0.1-GGUF). You can select other files and specify their download path (browse [here](https://huggingface.co/TheBloke)).
This package includes an example DB of 2023 NBA rosters. You can see instructions to build this DB [here](https://github.com/facebookresearch/llama-recipes/blob/main/demo_apps/StructuredLlama.ipynb).
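If you want to point the template at a different GGUF file instead of the default download, the llama.cpp wrapper can be constructed along these lines. The path and settings below are placeholder assumptions, not what `chain.py` ships with:
```python
from langchain.llms import LlamaCpp

# Placeholder path to a locally downloaded GGUF file (e.g. from TheBloke on Hugging Face).
llm = LlamaCpp(
    model_path="/path/to/mistral-7b-instruct-v0.1.Q4_K_M.gguf",
    n_gpu_layers=1,  # offload layers to Metal on Apple Silicon
    n_batch=512,
    n_ctx=2048,  # context window for the prompt plus the SQL schema
    verbose=False,
)

print(llm.invoke("Write a SQL query that counts the rows in a table named nba_roster."))
```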
(Optional) Configure LangSmith for tracing, monitoring and debugging LangChain applications. LangSmith is currently in private beta; you can sign up [here](https://smith.langchain.com/). If you don't have access, you can skip this section.
```shell
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
```
If you are inside this directory, then you can spin up a LangServe instance directly by:
```shell
langchain serve
```
This will start the FastAPI app with a server running locally at
[http://localhost:8000](http://localhost:8000)
You can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)
You can access the playground at [http://127.0.0.1:8000/sql-llamacpp/playground](http://127.0.0.1:8000/sql-llamacpp/playground)
You can access the template from code with:
```python
from langserve.client import RemoteRunnable

runnable = RemoteRunnable("http://localhost:8000/sql-llamacpp")
```

@ -1,18 +1,78 @@
# sql-ollama
This template enables a user to interact with a SQL database using natural language.
It uses [Zephyr-7b](https://huggingface.co/HuggingFaceH4/zephyr-7b-alpha) via [Ollama](https://ollama.ai/library/zephyr) to run inference locally on a Mac laptop.
## Environment Setup
Before using this template, you need to set up Ollama and the SQL database (a quick Ollama sanity-check sketch follows this list):
1. Follow instructions [here](https://python.langchain.com/docs/integrations/chat/ollama) to download Ollama.
2. Download your LLM of interest:
    * This package uses `zephyr`: `ollama pull zephyr`
    * You can choose from many LLMs [here](https://ollama.ai/library)
3. This package includes an example DB of 2023 NBA rosters. You can see instructions to build this DB [here](https://github.com/facebookresearch/llama-recipes/blob/main/demo_apps/StructuredLlama.ipynb).
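Once Ollama is running and the model is pulled, a quick sanity check (a minimal sketch, separate from the template itself) looks like this:
```python
from langchain.chat_models import ChatOllama

# Confirms the local Ollama server can reach the pulled `zephyr` model.
llm = ChatOllama(model="zephyr")
print(llm.invoke("Reply with the single word 'ready'.").content)
```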
## Usage
To use this package, you should first have the LangChain CLI installed:
```shell
pip install -U "langchain-cli[serve]"
```
To create a new LangChain project and install this as the only package, you can do:
```shell
langchain app new my-app --package sql-ollama
```
If you want to add this to an existing project, you can just run:
```shell
langchain app add sql-ollama
```
And add the following code to your `server.py` file:
```python
from sql_ollama import chain as sql_ollama_chain
add_routes(app, sql_ollama_chain, path="/sql-ollama")
```
(Optional) Let's now configure LangSmith.
LangSmith will help us trace, monitor and debug LangChain applications.
LangSmith is currently in private beta; you can sign up [here](https://smith.langchain.com/).
If you don't have access, you can skip this section.
```shell
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
```
If you are inside this directory, then you can spin up a LangServe instance directly by:
```shell
langchain serve
```
This will start the FastAPI app with a server running locally at
[http://localhost:8000](http://localhost:8000)
We can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)
We can access the playground at [http://127.0.0.1:8000/sql-ollama/playground](http://127.0.0.1:8000/sql-ollama/playground)
We can access the template from code with:
```python
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/sql-ollama")
```

@ -9,7 +9,7 @@ from langchain.schema.runnable import RunnableLambda, RunnablePassthrough
from langchain.utilities import SQLDatabase
# Add the LLM downloaded from Ollama
ollama_llm = "llama2:13b-chat"
ollama_llm = "zephyr"
llm = ChatOllama(model=ollama_llm)

@ -1,9 +1,71 @@
# stepback-qa-prompting
This template replicates the "Step-Back" prompting technique that improves performance on complex questions by first asking a "step back" question.
This technique can be combined with regular question-answering applications by doing retrieval on both the original and step-back question.
Read more about this in the paper [here](https://arxiv.org/abs/2310.06117) and in an excellent blog post by Cobus Greyling [here](https://cobusgreyling.medium.com/a-new-prompt-engineering-technique-has-been-introduced-called-step-back-prompting-b00e8954cacb).
We will modify the prompts slightly to work better with chat models in this template.
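To make the idea concrete, here is a minimal sketch of the pattern (not the template's own prompts, which are few-shot and tuned for chat models):
```python
from langchain.chat_models import ChatOpenAI
from langchain.prompts import ChatPromptTemplate
from langchain.schema.output_parser import StrOutputParser

llm = ChatOpenAI(temperature=0)

# Hypothetical zero-shot stand-in for the template's few-shot step-back prompt.
step_back_prompt = ChatPromptTemplate.from_template(
    "Rephrase the following question as a more generic 'step back' question "
    "that is easier to answer from background knowledge:\n{question}"
)
step_back_chain = step_back_prompt | llm | StrOutputParser()

question = "Was the band Steely Dan formed before the moon landing?"
step_back_question = step_back_chain.invoke({"question": question})

# Retrieval would then run on both `question` and `step_back_question`,
# and the final answer prompt would see both sets of results.
print(step_back_question)
```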
## Environment Setup
Set the `OPENAI_API_KEY` environment variable to access the OpenAI models.
## Usage
To use this package, you should first have the LangChain CLI installed:
```shell
pip install -U "langchain-cli[serve]"
```
To create a new LangChain project and install this as the only package, you can do:
```shell
langchain app new my-app --package stepback-qa-prompting
```
If you want to add this to an existing project, you can just run:
```shell
langchain app add stepback-qa-prompting
```
And add the following code to your `server.py` file:
```python
from stepback_qa_prompting import chain as stepback_qa_prompting_chain
add_routes(app, stepback_qa_prompting_chain, path="/stepback-qa-prompting")
```
(Optional) Let's now configure LangSmith.
LangSmith will help us trace, monitor and debug LangChain applications.
LangSmith is currently in private beta; you can sign up [here](https://smith.langchain.com/).
If you don't have access, you can skip this section.
```shell
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
```
If you are inside this directory, then you can spin up a LangServe instance directly by:
```shell
langchain serve
```
This will start the FastAPI app with a server running locally at
[http://localhost:8000](http://localhost:8000)
We can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)
We can access the playground at [http://127.0.0.1:8000/stepback-qa-prompting/playground](http://127.0.0.1:8000/stepback-qa-prompting/playground)
We can access the template from code with:
```python
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/stepback-qa-prompting")
```

@ -1,5 +1,5 @@
[tool.poetry]
name = "stepback_qa_prompting"
name = "stepback-qa-prompting"
version = "0.0.1"
description = ""
authors = []

@ -1,16 +1,70 @@
# summarize-anthropic
This template uses Anthropic's `Claude2` to summarize long documents.
It leverages a large context window of 100k tokens, allowing for summarization of documents over 100 pages.
You can see the summarization prompt in `chain.py`.
## Environment Setup
Set the `ANTHROPIC_API_KEY` environment variable to access the Anthropic models.
## Usage
To use this package, you should first have the LangChain CLI installed:
```shell
pip install -U "langchain-cli[serve]"
```
To create a new LangChain project and install this as the only package, you can do:
```shell
langchain app new my-app --package summarize-anthropic
```
If you want to add this to an existing project, you can just run:
```shell
langchain app add summarize-anthropic
```
And add the following code to your `server.py` file:
```python
from summarize_anthropic import chain as summarize_anthropic_chain
add_routes(app, summarize_anthropic_chain, path="/summarize-anthropic")
```
(Optional) Let's now configure LangSmith.
LangSmith will help us trace, monitor and debug LangChain applications.
LangSmith is currently in private beta; you can sign up [here](https://smith.langchain.com/).
If you don't have access, you can skip this section.
```shell
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
```
If you are inside this directory, then you can spin up a LangServe instance directly by:
```shell
langchain serve
```
This will start the FastAPI app with a server running locally at
[http://localhost:8000](http://localhost:8000)
We can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)
We can access the playground at [http://127.0.0.1:8000/summarize-anthropic/playground](http://127.0.0.1:8000/summarize-anthropic/playground)
We can access the template from code with:
```python
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/summarize-anthropic")
```
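For reference, a standalone chain of this kind can be sketched as follows. This is not the template's `chain.py`; the Hub prompt and the input key are assumptions, so adapt them to the prompt you actually use:
```python
from langchain import hub
from langchain.chat_models import ChatAnthropic
from langchain.schema.output_parser import StrOutputParser

# Pull a public summarization prompt from the LangChain Hub (requires the `langchainhub` package).
prompt = hub.pull("hwchase17/anthropic-paper-qa")

# Claude2's 100k-token context window lets a long document go into a single call.
model = ChatAnthropic(model="claude-2", max_tokens_to_sample=1024)
chain = prompt | model | StrOutputParser()

# Hypothetical input key; check the prompt's input variables before invoking.
print(chain.invoke({"text": "<full document text here>"}))
```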

@ -12,25 +12,10 @@
},
{
"cell_type": "code",
"execution_count": 10,
"id": "a65a2603",
"execution_count": null,
"id": "f4162356-c370-43d7-b34a-4e6af7a1e4c9",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Collecting pdf2image\n",
" Using cached pdf2image-1.16.3-py3-none-any.whl (11 kB)\n",
"Requirement already satisfied: pillow in /Users/rlm/miniforge3/envs/llama/lib/python3.9/site-packages (from pdf2image) (8.4.0)\n",
"Installing collected packages: pdf2image\n",
"Successfully installed pdf2image-1.16.3\n",
"\n",
"\u001b[1m[\u001b[0m\u001b[34;49mnotice\u001b[0m\u001b[1;39;49m]\u001b[0m\u001b[39;49m A new release of pip is available: \u001b[0m\u001b[31;49m23.1.2\u001b[0m\u001b[39;49m -> \u001b[0m\u001b[32;49m23.3.1\u001b[0m\n",
"\u001b[1m[\u001b[0m\u001b[34;49mnotice\u001b[0m\u001b[1;39;49m]\u001b[0m\u001b[39;49m To update, run: \u001b[0m\u001b[32;49mpip install --upgrade pip\u001b[0m\n"
]
}
],
"outputs": [],
"source": [
"! pip install pdf2image"
]
@ -40,7 +25,7 @@
"id": "6ff363da",
"metadata": {},
"source": [
"Academic papers."
"Load academic papers -"
]
},
{
@ -66,7 +51,7 @@
"id": "db964a34",
"metadata": {},
"source": [
"Also try loading blog posts."
"Or try loading blog posts -"
]
},
{
@ -87,51 +72,12 @@
"id": "361fcf5c",
"metadata": {},
"source": [
"## Connect to template\n",
"\n",
"`Context`\n",
" \n",
"* LangServe apps gives you access to templates.\n",
"* Templates LLM pipeline (runnables or chains) end-points accessible via FastAPI.\n",
"* The environment for these templates is managed by Poetry.\n",
"\n",
"`Create app`\n",
"\n",
"* Install LangServe and create an app.\n",
"* This will create a new Poetry environment /\n",
"```\n",
"pip install < to add > \n",
"langchain app new my-app\n",
"cd my-app\n",
"```\n",
"\n",
"`Add templates`\n",
"## Run template\n",
"\n",
"* When we add a template, we update the Poetry config file with the necessary dependencies.\n",
"* It also automatically installed these template dependencies in your Poetry environment\n",
"In `server.py`, set -\n",
"```\n",
"langchain app add summarize-anthropic\n",
"```\n",
"\n",
"`Start FastAPI server`\n",
"\n",
"```\n",
"langchain serve\n",
"```\n",
"\n",
"Note, we can now look at the endpoints:\n",
"\n",
"http://127.0.0.1:8000/docs#\n",
"\n",
"And look specifically at our loaded template:\n",
"\n",
"http://127.0.0.1:8000/docs#/default/invoke_summarize_anthropic_invoke_post\n",
" \n",
"We can also use remote runnable to call it.\n",
"\n",
"## Summarization\n",
"\n",
"We will use [this](https://smith.langchain.com/hub/hwchase17/anthropic-paper-qa) prompt."
"add_routes(app, chain_rag_conv, path=\"/summarize-anthropic\")\n",
"```"
]
},
{

@ -1,17 +1,69 @@
# xml-agent
This package creates an agent that uses XML syntax to communicate its decisions of what actions to take. It uses Anthropic's Claude models for writing XML syntax and can optionally look up things on the internet using DuckDuckGo.
## Environment Setup
The following environment variable needs to be set:
- `ANTHROPIC_API_KEY`: Required for using Anthropic
## Usage
To use this package, you should first have the LangChain CLI installed:
```shell
pip install -U "langchain-cli[serve]"
```
To create a new LangChain project and install this as the only package, you can do:
```shell
langchain app new my-app --package xml-agent
```
If you want to add this to an existing project, you can just run:
```shell
langchain app add xml-agent
```
And add the following code to your `server.py` file:
```python
from xml_agent import chain as xml_agent_chain
add_routes(app, xml_agent_chain, path="/xml-agent")
```
(Optional) Let's now configure LangSmith.
LangSmith will help us trace, monitor and debug LangChain applications.
LangSmith is currently in private beta; you can sign up [here](https://smith.langchain.com/).
If you don't have access, you can skip this section.
```shell
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
```
If you are inside this directory, then you can spin up a LangServe instance directly by:
```shell
langchain serve
```
This will start the FastAPI app with a server running locally at
[http://localhost:8000](http://localhost:8000)
We can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)
We can access the playground at [http://127.0.0.1:8000/xml-agent/playground](http://127.0.0.1:8000/xml-agent/playground)
We can access the template from code with:
```python
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/xml-agent")
```
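From there, you can send the agent a question. The input keys below are assumptions about the agent's schema (many agent templates expect a question plus chat history), so check the template source if they differ:
```python
# Hypothetical input keys; verify against the template's input schema.
result = runnable.invoke({"question": "What is LangServe used for?", "chat_history": []})
print(result)
```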
