Compare commits


1 Commits

| Author | SHA1 | Message | Date |
| --- | --- | --- | --- |
| Harrison Chase | 844151605c | WIP logging to disk | 2022-11-27 15:20:44 -08:00 |
683 changed files with 4260 additions and 68673 deletions

View File

@ -1,144 +0,0 @@
.vscode/
.idea/
# Byte-compiled / optimized / DLL files
__pycache__/
*.py[cod]
*$py.class
# C extensions
*.so
# Distribution / packaging
.Python
build/
develop-eggs/
dist/
downloads/
eggs/
.eggs/
lib/
lib64/
parts/
sdist/
var/
wheels/
pip-wheel-metadata/
share/python-wheels/
*.egg-info/
.installed.cfg
*.egg
MANIFEST
# PyInstaller
# Usually these files are written by a python script from a template
# before PyInstaller builds the exe, so as to inject date/other infos into it.
*.manifest
*.spec
# Installer logs
pip-log.txt
pip-delete-this-directory.txt
# Unit test / coverage reports
htmlcov/
.tox/
.nox/
.coverage
.coverage.*
.cache
nosetests.xml
coverage.xml
*.cover
*.py,cover
.hypothesis/
.pytest_cache/
# Translations
*.mo
*.pot
# Django stuff:
*.log
local_settings.py
db.sqlite3
db.sqlite3-journal
# Flask stuff:
instance/
.webassets-cache
# Scrapy stuff:
.scrapy
# Sphinx documentation
docs/_build/
# PyBuilder
target/
# Jupyter Notebook
.ipynb_checkpoints
notebooks/
# IPython
profile_default/
ipython_config.py
# pyenv
.python-version
# pipenv
# According to pypa/pipenv#598, it is recommended to include Pipfile.lock in version control.
# However, in case of collaboration, if having platform-specific dependencies or dependencies
# having no cross-platform support, pipenv may install dependencies that don't work, or not
# install all needed dependencies.
#Pipfile.lock
# PEP 582; used by e.g. github.com/David-OConnor/pyflow
__pypackages__/
# Celery stuff
celerybeat-schedule
celerybeat.pid
# SageMath parsed files
*.sage.py
# Environments
.env
.venv
.venvs
env/
venv/
ENV/
env.bak/
venv.bak/
# Spyder project settings
.spyderproject
.spyproject
# Rope project settings
.ropeproject
# mkdocs documentation
/site
# mypy
.mypy_cache/
.dmypy.json
dmypy.json
# Pyre type checker
.pyre/
# macOS display setting files
.DS_Store
# docker
docker/
!docker/assets/
.dockerignore
docker.build

View File

@ -1,6 +1,5 @@
[flake8]
exclude =
venv
.venv
__pycache__
notebooks

View File

@ -1,36 +0,0 @@
name: linkcheck
on:
push:
branches: [master]
pull_request:
env:
POETRY_VERSION: "1.3.1"
jobs:
build:
runs-on: ubuntu-latest
strategy:
matrix:
python-version:
- "3.11"
steps:
- uses: actions/checkout@v3
- name: Install poetry
run: |
pipx install poetry==$POETRY_VERSION
- name: Set up Python ${{ matrix.python-version }}
uses: actions/setup-python@v4
with:
python-version: ${{ matrix.python-version }}
cache: poetry
- name: Install dependencies
run: |
poetry install --with docs
- name: Build the docs
run: |
make docs_build
- name: Analyzing the docs with linkcheck
run: |
make docs_linkcheck

View File

@ -1,36 +1,23 @@
name: lint
on:
push:
branches: [master]
pull_request:
env:
POETRY_VERSION: "1.3.1"
on: [push, pull_request]
jobs:
build:
runs-on: ubuntu-latest
strategy:
matrix:
python-version:
- "3.8"
- "3.9"
- "3.10"
- "3.11"
python-version: ["3.7"]
steps:
- uses: actions/checkout@v3
- name: Install poetry
run: |
pipx install poetry==$POETRY_VERSION
- name: Set up Python ${{ matrix.python-version }}
uses: actions/setup-python@v4
with:
python-version: ${{ matrix.python-version }}
cache: poetry
- name: Install dependencies
run: |
poetry install
- name: Analysing the code with our lint
run: |
make lint
- uses: actions/checkout@v3
- name: Set up Python ${{ matrix.python-version }}
uses: actions/setup-python@v3
with:
python-version: ${{ matrix.python-version }}
- name: Install dependencies
run: |
python -m pip install --upgrade pip
pip install -r test_requirements.txt
- name: Analysing the code with our lint
run: |
make lint

View File

@ -1,49 +0,0 @@
name: release
on:
pull_request:
types:
- closed
branches:
- master
paths:
- 'pyproject.toml'
env:
POETRY_VERSION: "1.3.1"
jobs:
if_release:
if: |
${{ github.event.pull_request.merged == true }}
&& ${{ contains(github.event.pull_request.labels.*.name, 'release') }}
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- name: Install poetry
run: pipx install poetry==$POETRY_VERSION
- name: Set up Python 3.10
uses: actions/setup-python@v4
with:
python-version: "3.10"
cache: "poetry"
- name: Build project for distribution
run: poetry build
- name: Check Version
id: check-version
run: |
echo version=$(poetry version --short) >> $GITHUB_OUTPUT
- name: Create Release
uses: ncipollo/release-action@v1
with:
artifacts: "dist/*"
token: ${{ secrets.GITHUB_TOKEN }}
draft: false
generateReleaseNotes: true
tag: v${{ steps.check-version.outputs.version }}
commit: master
- name: Publish to PyPI
env:
POETRY_PYPI_TOKEN_PYPI: ${{ secrets.PYPI_API_TOKEN }}
run: |
poetry publish

View File

@ -1,34 +1,23 @@
name: test
on:
push:
branches: [master]
pull_request:
env:
POETRY_VERSION: "1.3.1"
on: [push, pull_request]
jobs:
build:
runs-on: ubuntu-latest
strategy:
matrix:
python-version:
- "3.8"
- "3.9"
- "3.10"
- "3.11"
python-version: ["3.7"]
steps:
- uses: actions/checkout@v3
- name: Install poetry
run: pipx install poetry==$POETRY_VERSION
- name: Set up Python ${{ matrix.python-version }}
uses: actions/setup-python@v4
with:
python-version: ${{ matrix.python-version }}
cache: "poetry"
- name: Install dependencies
run: poetry install
- name: Run unit tests
run: |
make test
- uses: actions/checkout@v3
- name: Set up Python ${{ matrix.python-version }}
uses: actions/setup-python@v3
with:
python-version: ${{ matrix.python-version }}
- name: Install dependencies
run: |
python -m pip install --upgrade pip
pip install -r test_requirements.txt
- name: Run unit tests
run: |
make tests

7
.gitignore vendored
View File

@ -1,5 +1,4 @@
.vscode/
.idea/
# Byte-compiled / optimized / DLL files
__pycache__/
*.py[cod]
@ -106,9 +105,7 @@ celerybeat.pid
# Environments
.env
!docker/.env
.venv
.venvs
env/
venv/
ENV/
@ -132,7 +129,3 @@ dmypy.json
# Pyre type checker
.pyre/
# macOS display setting files
.DS_Store
docker.build

View File

@ -1,8 +0,0 @@
cff-version: 1.2.0
message: "If you use this software, please cite it as below."
authors:
- family-names: "Chase"
given-names: "Harrison"
title: "LangChain"
date-released: 2022-10-17
url: "https://github.com/hwchase17/langchain"

View File

@ -1,186 +0,0 @@
# Contributing to LangChain
Hi there! Thank you for even being interested in contributing to LangChain.
As an open source project in a rapidly developing field, we are extremely open
to contributions, whether it be in the form of a new feature, improved infra, or better documentation.
To contribute to this project, please follow a ["fork and pull request"](https://docs.github.com/en/get-started/quickstart/contributing-to-projects) workflow.
Please do not try to push directly to this repo unless you are a maintainer.
## 🗺Contributing Guidelines
### 🚩GitHub Issues
Our [issues](https://github.com/hwchase17/langchain/issues) page is kept up to date
with bugs, improvements, and feature requests. There is a taxonomy of labels to help
with sorting and discovery of issues of interest. These include:
- prompts: related to prompt tooling/infra.
- llms: related to LLM wrappers/tooling/infra.
- chains
- utilities: related to different types of utilities to integrate with (Python, SQL, etc.).
- agents
- memory
- applications: related to example applications to build
If you start working on an issue, please assign it to yourself.
If you are adding an issue, please try to keep it focused on a single modular bug/improvement/feature.
If two issues are related, or blocking, please link them rather than combining them into one.
We will try to keep these issues as up to date as possible, though
with the rapid rate of development in this field some may get out of date.
If you notice this happening, please just let us know.
### 🙋Getting Help
Although we try to have a developer setup to make it as easy as possible for others to contribute (see below)
it is possible that some pain points may arise around environment setup, linting, documentation, or other areas.
Should that occur, please contact a maintainer! Not only do we want to help get you unblocked,
but we also want to make sure that the process is smooth for future contributors.
In a similar vein, we do enforce certain linting, formatting, and documentation standards in the codebase.
If you are finding these difficult (or even just annoying) to work with,
feel free to contact a maintainer for help - we do not want these to get in the way of getting
good code into the codebase.
### 🏭Release process
As of now, LangChain has an ad hoc release process: releases are cut with high frequency by
a developer and published to [PyPI](https://pypi.org/project/langchain/).
LangChain follows the [semver](https://semver.org/) versioning standard. However, as pre-1.0 software,
even patch releases may contain [non-backwards-compatible changes](https://semver.org/#spec-item-4).
If your contribution has made its way into a release, we will want to give you credit on Twitter (only if you want though)!
If you have a Twitter account you would like us to mention, please let us know in the PR or in another manner.
## 🚀Quick Start
This project uses [Poetry](https://python-poetry.org/) as a dependency manager. Check out Poetry's [documentation on how to install it](https://python-poetry.org/docs/#installation) on your system before proceeding.
❗Note: If you use `Conda` or `Pyenv` as your environment / package manager, avoid dependency conflicts by doing the following first:
1. *Before installing Poetry*, create and activate a new Conda env (e.g. `conda create -n langchain python=3.9`)
2. Install Poetry (see above)
3. Tell Poetry to use the virtualenv python environment (`poetry config virtualenvs.prefer-active-python true`)
4. Continue with the following steps.
To install requirements:
```bash
poetry install -E all
```
This will install all requirements for running the package, examples, linting, formatting, tests, and coverage. Note the `-E all` flag will install all optional dependencies necessary for integration testing.
Now, you should be able to run the common tasks in the following section.
## ✅Common Tasks
Type `make` for a list of common tasks.
### Code Formatting
Formatting for this project is done via a combination of [Black](https://black.readthedocs.io/en/stable/) and [isort](https://pycqa.github.io/isort/).
To run formatting for this project:
```bash
make format
```
### Linting
Linting for this project is done via a combination of [Black](https://black.readthedocs.io/en/stable/), [isort](https://pycqa.github.io/isort/), [flake8](https://flake8.pycqa.org/en/latest/), and [mypy](http://mypy-lang.org/).
To run linting for this project:
```bash
make lint
```
We recognize linting can be annoying - if you do not want to do it, please contact a project maintainer, and they can help you with it. We do not want this to be a blocker for good code getting contributed.
### Coverage
Code coverage (i.e. the amount of code that is covered by unit tests) helps identify areas of the code that are potentially more or less brittle.
To get a report of current coverage, run the following:
```bash
make coverage
```
### Testing
Unit tests cover modular logic that does not require calls to outside APIs.
To run unit tests:
```bash
make test
```
If you add new logic, please add a unit test.
Integration tests cover logic that requires making calls to outside APIs (often integration with other services).
To run integration tests:
```bash
make integration_tests
```
If you add support for a new external API, please add a new integration test.
### Adding a Jupyter Notebook
If you are adding a Jupyter notebook example, you'll want to install the optional `dev` dependencies.
To install dev dependencies:
```bash
poetry install --with dev
```
Launch a notebook:
```bash
poetry run jupyter notebook
```
When you run `poetry install`, the `langchain` package is installed as editable in the virtualenv, so your new logic can be imported into the notebook.
## Using Docker
Refer to [DOCKER.md](docker/DOCKER.md) for more information.
## Documentation
### Contribute Documentation
Docs are largely autogenerated by [sphinx](https://www.sphinx-doc.org/en/master/) from the code.
For that reason, we ask that you add good documentation to all classes and methods.
Similar to linting, we recognize documentation can be annoying. If you do not want to do it, please contact a project maintainer, and they can help you with it. We do not want this to be a blocker for good code getting contributed.
### Build Documentation Locally
Before building the documentation, it is always a good idea to clean the build directory:
```bash
make docs_clean
```
Next, you can run the linkchecker to make sure all links are valid:
```bash
make docs_linkcheck
```
Finally, you can build the documentation as outlined below:
```bash
make docs_build
```

3
MANIFEST.in Normal file
View File

@ -0,0 +1,3 @@
include langchain/py.typed
include langchain/VERSION
include LICENSE

View File

@ -1,73 +1,17 @@
.PHONY: all clean format lint test tests test_watch integration_tests help
GIT_HASH ?= $(shell git rev-parse --short HEAD)
LANGCHAIN_VERSION := $(shell grep '^version' pyproject.toml | cut -d '=' -f2 | tr -d '"')
all: help
coverage:
poetry run pytest --cov \
--cov-config=.coveragerc \
--cov-report xml \
--cov-report term-missing:skip-covered
clean: docs_clean
docs_build:
cd docs && poetry run make html
docs_clean:
cd docs && poetry run make clean
docs_linkcheck:
poetry run linkchecker docs/_build/html/index.html
.PHONY: format lint tests integration_tests
format:
poetry run black .
poetry run ruff --select I --fix .
black .
isort .
lint:
poetry run mypy .
poetry run black . --check
poetry run ruff .
mypy .
black . --check
isort . --check
flake8 .
test:
poetry run pytest tests/unit_tests
tests: test
test_watch:
poetry run ptw --now . -- tests/unit_tests
tests:
pytest tests/unit_tests
integration_tests:
poetry run pytest tests/integration_tests
help:
@echo '----'
@echo 'coverage - run unit tests and generate coverage report'
@echo 'docs_build - build the documentation'
@echo 'docs_clean - clean the documentation build artifacts'
@echo 'docs_linkcheck - run linkchecker on the documentation'
ifneq ($(shell command -v docker 2> /dev/null),)
@echo 'docker - build and run the docker dev image'
@echo 'docker.run - run the docker dev image'
@echo 'docker.jupyter - start a jupyter notebook inside container'
@echo 'docker.build - build the docker dev image'
@echo 'docker.force_build - force a rebuild'
@echo 'docker.test - run the unit tests in docker'
@echo 'docker.lint - run the linters in docker'
@echo 'docker.clean - remove the docker dev image'
endif
@echo 'format - run code formatters'
@echo 'lint - run linters'
@echo 'test - run unit tests'
@echo 'test_watch - run unit tests in watch mode'
@echo 'integration_tests - run integration tests'
# include the following makefile if the docker executable is available
ifeq ($(shell command -v docker 2> /dev/null),)
$(info Docker not found, skipping docker-related targets)
else
include docker/Makefile
endif
pytest tests/integration_tests

133
README.md
View File

@ -1,15 +1,8 @@
# 🦜️🔗 LangChain - Docker
# 🦜️🔗 LangChain
WIP: This is a fork of langchain focused on implementing a docker wrapper and
toolchain. The goal is to make it easy to use LLM chains running inside a
container, build custom docker based tools and let agents run arbitrary
untrusted code inside.
⚡ Building applications with LLMs through composability ⚡
Currently exploring the following:
- Docker wrapper for LLMs and chains
- Creating a toolchain for building docker based LLM tools.
- Building agents that can run arbitrary untrusted code inside a container.
[![lint](https://github.com/hwchase17/langchain/actions/workflows/lint.yml/badge.svg)](https://github.com/hwchase17/langchain/actions/workflows/lint.yml) [![test](https://github.com/hwchase17/langchain/actions/workflows/test.yml/badge.svg)](https://github.com/hwchase17/langchain/actions/workflows/test.yml) [![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT) [![Twitter](https://img.shields.io/twitter/url/https/twitter.com/langchainai.svg?style=social&label=Follow%20%40LangChainAI)](https://twitter.com/langchainai) [![](https://dcbadge.vercel.app/api/server/6adMQxSpJS?compact=true&style=flat)](https://discord.gg/6adMQxSpJS)
## Quick Install
@ -20,67 +13,113 @@ Currently exploring the following:
Large language models (LLMs) are emerging as a transformative technology, enabling
developers to build applications that they previously could not.
But using these LLMs in isolation is often not enough to
create a truly powerful app - the real power comes when you can combine them with other sources of computation or knowledge.
create a truly powerful app - the real power comes when you are able to
combine them with other sources of computation or knowledge.
This library is aimed at assisting in the development of those types of applications. Common examples of these types of applications include:
**❓ Question Answering over specific documents**
- [Documentation](https://langchain.readthedocs.io/en/latest/use_cases/question_answering.html)
- End-to-end Example: [Question Answering over Notion Database](https://github.com/hwchase17/notion-qa)
**💬 Chatbots**
- [Documentation](https://langchain.readthedocs.io/en/latest/use_cases/chatbots.html)
- End-to-end Example: [Chat-LangChain](https://github.com/hwchase17/chat-langchain)
**🤖 Agents**
- [Documentation](https://langchain.readthedocs.io/en/latest/use_cases/agents.html)
- End-to-end Example: [GPT+WolframAlpha](https://huggingface.co/spaces/JavaFXpert/Chat-GPT-LangChain)
This library is aimed at assisting in the development of those types of applications.
## 📖 Documentation
Please see [here](https://langchain.readthedocs.io/en/latest/?) for full documentation on:
- Getting started (installation, setting up the environment, simple examples)
- Getting started (installation, setting up environment, simple examples)
- How-To examples (demos, integrations, helper functions)
- Reference (full API docs)
- Resources (high-level explanation of core concepts)
- Resources (high level explanation of core concepts)
## 🚀 What can this help with?
There are six main areas that LangChain is designed to help with.
There are three main areas (with a fourth coming soon) that LangChain is designed to help with.
These are, in increasing order of complexity:
1. LLM and Prompts
2. Chains
3. Agents
4. Memory
**📃 LLMs and Prompts:**
Let's go through these categories and for each one identify key concepts (to clarify terminology) as well as the problems in this area that LangChain helps solve.
This includes prompt management, prompt optimization, generic interface for all LLMs, and common utilities for working with LLMs.
### LLMs and Prompts
Calling out to an LLM once is pretty easy, with most of them being behind well documented APIs.
However, there are still some challenges going from that to an application running in production that LangChain attempts to address.
**🔗 Chains:**
**Key Concepts**
- LLM: A large language model, in particular a text-to-text model.
- Prompt: The input to a language model. Typically this is not simply a hardcoded string but rather a combination of a template, some examples, and user input.
- Prompt Template: An object responsible for constructing the final prompt to pass to an LLM (a short sketch follows below).
- Examples: Datapoints that can be included in the prompt in order to give the model more context about what to do.
- Few Shot Prompt Template: A subclass of the PromptTemplate class that uses examples.
- Example Selector: A class responsible for selecting examples to use dynamically (depending on user input) in a few shot prompt.
Chains go beyond just a single LLM call, and are sequences of calls (whether to an LLM or a different utility). LangChain provides a standard interface for chains, lots of integrations with other tools, and end-to-end chains for common applications.
**Problems Solved**
- Switching costs: by exposing a standard interface for all the top LLM providers, LangChain makes it easy to switch from one provider to another, whether it be for production use cases or just for testing stuff out.
- Prompt management: managing your prompts is easy when you only have one simple one, but can get tricky when you have a bunch or when they start to get more complex. LangChain provides a standard way for storing, constructing, and referencing prompts.
- Prompt optimization: despite the underlying models getting better and better, there is still currently a need for carefully constructing prompts.
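To make the prompt concepts above concrete, here is a minimal sketch (assuming the `langchain` package is installed as described in this README):
```python
from langchain.prompts import PromptTemplate

# A prompt template combines a fixed template string with user-supplied variables.
prompt = PromptTemplate(
    input_variables=["product"],
    template="What is a good name for a company that makes {product}?",
)

# format() fills in the variables and returns the final prompt string for an LLM.
print(prompt.format(product="colorful socks"))
```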
**📚 Data Augmented Generation:**
### Chains
Using an LLM in isolation is fine for some simple applications, but many more complex ones require chaining LLMs - either with each other or with other experts.
LangChain provides several parts to help with that.
Data Augmented Generation involves specific types of chains that first interact with an external datasource to fetch data to use in the generation step. Examples of this include summarization of long pieces of text and question/answering over specific data sources.
**Key Concepts**
- Tools: APIs designed for assisting with a particular use case (search, databases, Python REPL, etc). Prompt templates, LLMs, and chains can also be considered tools.
- Chains: A combination of multiple tools in a deterministic manner.
**🤖 Agents:**
**Problems Solved**
- Standard interface for working with Chains
- Easy way to construct chains of LLMs
- Lots of integrations with other tools that you may want to use in conjunction with LLMs
- End-to-end chains for common workflows (database question/answer, recursive summarization, etc)
Agents involve an LLM making decisions about which Actions to take, taking that Action, seeing an Observation, and repeating that until done. LangChain provides a standard interface for agents, a selection of agents to choose from, and examples of end to end agents.
### Agents
Some applications will require not just a predetermined chain of calls to LLMs/other tools, but potentially an unknown chain that depends on the user input.
In these types of chains, there is an "agent" which has access to a suite of tools.
Depending on the user input, the agent can then decide which, if any, of these tools to call (a minimal sketch follows at the end of this section).
**🧠 Memory:**
**Key Concepts**
- Tools: same as above.
- Agent: An LLM-powered class responsible for determining which tools to use and in what order.
Memory is the concept of persisting state between calls of a chain/agent. LangChain provides a standard interface for memory, a collection of memory implementations, and examples of chains/agents that use memory.
**🧐 Evaluation:**
**Problems Solved**
- Standard agent interfaces
- A selection of powerful agents to choose from
- Common chains that can be used as tools
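As a rough sketch of that agent loop (assuming `OPENAI_API_KEY` and `SERPAPI_API_KEY` are set; the question is just an example):
```python
from langchain.agents import initialize_agent, load_tools
from langchain.llms import OpenAI

# The LLM that decides which tool to call at each step.
llm = OpenAI(temperature=0)

# Load a search tool; assumes SERPAPI_API_KEY is set in the environment.
tools = load_tools(["serpapi"], llm=llm)

# The agent picks an action, observes the result, and repeats until it can answer.
agent = initialize_agent(tools, llm, agent="zero-shot-react-description", verbose=True)
agent.run("What year was the Eiffel Tower completed?")
```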
[BETA] Generative models are notoriously hard to evaluate with traditional metrics. One new way of evaluating them is using language models themselves to do the evaluation. LangChain provides some prompts/chains for assisting in this.
### Memory
By default, Chains and Agents are stateless, meaning that they treat each incoming query independently.
In some applications (chatbots being a GREAT example) it is highly important to remember previous interactions,
both at a short-term and at a long-term level. The concept of "Memory" exists to do exactly that.
For more information on these concepts, please see our [full documentation](https://langchain.readthedocs.io/en/latest/?).
**Key Concepts**
- Memory: A class that can be added to an Agent or Chain to (1) pull in memory variables before calling that chain/agent, and (2) create new memories after the chain/agent finishes.
- Memory Variables: Variables returned from a Memory class, to be passed into the chain/agent along with the user input.
## 💁 Contributing
**Problems Solved**
- Standard memory interfaces
- A collection of common memory implementations to choose from
- Common chains/agents that use memory (e.g. chatbots)
As an open source project in a rapidly developing field, we are extremely open to contributions, whether it be in the form of a new feature, improved infra, or better documentation.
## 🤖 Developer Guide
For detailed information on how to contribute, see [here](CONTRIBUTING.md).
To begin developing on this project, first clone the repo locally.
To install requirements, run `pip install -r requirements.txt`.
This will install all requirements for running the package, examples, linting, formatting, and tests.
Formatting for this project is a combination of [Black](https://black.readthedocs.io/en/stable/) and [isort](https://pycqa.github.io/isort/).
To run formatting for this project, run `make format`.
Linting for this project is a combination of [Black](https://black.readthedocs.io/en/stable/), [isort](https://pycqa.github.io/isort/), [flake8](https://flake8.pycqa.org/en/latest/), and [mypy](http://mypy-lang.org/).
To run linting for this project, run `make lint`.
We recognize linting can be annoying - if you do not want to do it, please contact a project maintainer and they can help you with it. We do not want this to be a blocker for good code getting contributed.
Unit tests cover modular logic that does not require calls to outside APIs.
To run unit tests, run `make tests`.
If you add new logic, please add a unit test.
Integration tests cover logic that requires making calls to outside APIs (often integration with other services).
To run integration tests, run `make integration_tests`.
If you add support for a new external API, please add a new integration test.
If you are adding a Jupyter notebook example, you can run `pip install -e .` to build the langchain package from your local changes, so your new logic can be imported into the notebook.
Docs are largely autogenerated by [sphinx](https://www.sphinx-doc.org/en/master/) from the code.
For that reason, we ask that you add good documentation to all classes and methods.
Similar to linting, we recognize documentation can be annoying - if you do not want to do it, please contact a project maintainer and they can help you with it. We do not want this to be a blocker for good code getting contributed.

View File

@ -1,13 +0,0 @@
# python env
PYTHON_VERSION=3.10
# -E flag is required
# comment the following line to only install dev dependencies
POETRY_EXTRA_PACKAGES="-E all"
# at least one group needed
POETRY_DEPENDENCIES="dev,test,lint,typing"
# langchain env. warning: these variables will be baked into the docker image !
OPENAI_API_KEY=${OPENAI_API_KEY:-}
SERPAPI_API_KEY=${SERPAPI_API_KEY:-}

View File

@ -1,53 +0,0 @@
# Using Docker
To quickly get started, run the command `make docker`.
If docker is installed, the Makefile will export extra targets in the format `docker.*` to build and run the docker image. Type `make` for a list of available tasks.
There is a basic `docker-compose.yml` in the docker directory.
## Building the development image
Using `make docker` will build the dev image if it does not exist, then drop
you inside the container with the langchain environment available in the shell.
### Customizing the image and installed dependencies
The image is built with a default python version and all extras and dev
dependencies. It can be customized by changing the variables in the [.env](/docker/.env)
file.
If you don't need all the `extra` dependencies, a slimmer image can be obtained by
commenting out `POETRY_EXTRA_PACKAGES` in the [.env](docker/.env) file.
### Image caching
The Dockerfile is optimized to cache the poetry install step. A rebuild is triggered when there is a change to the source code.
## Example Usage
All commands from langchain's python environment are available by default in the container.
A few examples:
```bash
# run jupyter notebook
docker run --rm -it IMG jupyter notebook
# run ipython
docker run --rm -it IMG ipython
# start web server
docker run --rm -p 8888:8888 IMG python -m http.server 8888
```
## Testing / Linting
Tests and lints are run using your local source directory that is mounted on the volume /src.
Run unit tests in the container with `make docker.test`.
Run the linting and formatting checks with `make docker.lint`.
Note: this task can run in parallel using `make -j4 docker.lint`.

View File

@ -1,104 +0,0 @@
# vim: ft=dockerfile
#
# see also: https://github.com/python-poetry/poetry/discussions/1879
# - with https://github.com/bneijt/poetry-lock-docker
# see https://github.com/thehale/docker-python-poetry
# see https://github.com/max-pfeiffer/uvicorn-poetry
# use by default the slim version of python
ARG PYTHON_IMAGE_TAG=slim
ARG PYTHON_VERSION=${PYTHON_VERSION:-3.11.2}
####################
# Base Environment
####################
FROM python:$PYTHON_VERSION-$PYTHON_IMAGE_TAG AS lchain-base
ARG UID=1000
ARG USERNAME=lchain
ENV USERNAME=$USERNAME
RUN groupadd -g ${UID} $USERNAME
RUN useradd -l -m -u ${UID} -g ${UID} $USERNAME
# used for mounting source code
RUN mkdir /src
VOLUME /src
#######################
## Poetry Builder Image
#######################
FROM lchain-base AS lchain-base-builder
ARG POETRY_EXTRA_PACKAGES=$POETRY_EXTRA_PACKAGES
ARG POETRY_DEPENDENCIES=$POETRY_DEPENDENCIES
ENV HOME=/root
ENV POETRY_HOME=/root/.poetry
ENV POETRY_VIRTUALENVS_IN_PROJECT=false
ENV POETRY_NO_INTERACTION=1
ENV CACHE_DIR=$HOME/.cache
ENV POETRY_CACHE_DIR=$CACHE_DIR/pypoetry
ENV PATH="$POETRY_HOME/bin:$PATH"
WORKDIR /root
RUN apt-get update && \
apt-get install -y \
build-essential \
git \
curl
SHELL ["/bin/bash", "-o", "pipefail", "-c"]
RUN mkdir -p $CACHE_DIR
## setup poetry
RUN curl -sSL -o $CACHE_DIR/pypoetry-installer.py https://install.python-poetry.org/
RUN python3 $CACHE_DIR/pypoetry-installer.py
# # Copy poetry files
COPY poetry.* pyproject.toml ./
RUN mkdir /pip-prefix
RUN poetry export $POETRY_EXTRA_PACKAGES --with $POETRY_DEPENDENCIES -f requirements.txt --output requirements.txt --without-hashes && \
pip install --no-cache-dir --disable-pip-version-check --prefix /pip-prefix -r requirements.txt
# add custom motd message
COPY docker/assets/etc/motd /tmp/motd
RUN cat /tmp/motd > /etc/motd
RUN printf "\n%s\n%s\n" "$(poetry version)" "$(python --version)" >> /etc/motd
###################
## Runtime Image
###################
FROM lchain-base AS lchain
#jupyter port
EXPOSE 8888
COPY docker/assets/entry.sh /entry
RUN chmod +x /entry
COPY --from=lchain-base-builder /etc/motd /etc/motd
COPY --from=lchain-base-builder /usr/bin/git /usr/bin/git
USER ${USERNAME:-lchain}
ENV HOME /home/$USERNAME
WORKDIR /home/$USERNAME
COPY --chown=lchain:lchain --from=lchain-base-builder /pip-prefix $HOME/.local/
COPY . .
SHELL ["/bin/bash", "-o", "pipefail", "-c"]
RUN pip install --no-deps --disable-pip-version-check --no-cache-dir -e .
ENTRYPOINT ["/entry"]

View File

@ -1,84 +0,0 @@
# Do not call this Makefile directly; it is included by the main Makefile.
.PHONY: docker docker.jupyter docker.run docker.force_build docker.clean \
docker.test docker.lint docker.lint.mypy docker.lint.black \
docker.lint.isort docker.lint.flake
# read python version from .env file ignoring comments
PYTHON_VERSION := $(shell grep PYTHON_VERSION docker/.env | cut -d '=' -f2)
POETRY_EXTRA_PACKAGES := $(shell grep '^[^#]*POETRY_EXTRA_PACKAGES' docker/.env | cut -d '=' -f2)
POETRY_DEPENDENCIES := $(shell grep 'POETRY_DEPENDENCIES' docker/.env | cut -d '=' -f2)
DOCKER_SRC := $(shell find docker -type f)
DOCKER_IMAGE_NAME = langchain/dev
# SRC is all files matched by the git ls-files command
SRC := $(shell git ls-files -- '*' ':!:docker/*')
# set DOCKER_BUILD_PROGRESS=plain to see detailed build progress
DOCKER_BUILD_PROGRESS ?= auto
# extra message to show when entering the docker container
DOCKER_MOTD := docker/assets/etc/motd
ROOTDIR := $(shell git rev-parse --show-toplevel)
DOCKER_LINT_CMD = docker run --rm -i -u lchain -v $(ROOTDIR):/src $(DOCKER_IMAGE_NAME):$(GIT_HASH)
docker: docker.run
docker.run: docker.build
@echo "Docker image: $(DOCKER_IMAGE_NAME):$(GIT_HASH)"
docker run --rm -it -u lchain -v $(ROOTDIR):/src $(DOCKER_IMAGE_NAME):$(GIT_HASH)
docker.jupyter: docker.build
docker run --rm -it -v $(ROOTDIR):/src $(DOCKER_IMAGE_NAME):$(GIT_HASH) jupyter notebook
docker.build: $(SRC) $(DOCKER_SRC) $(DOCKER_MOTD)
ifdef $(DOCKER_BUILDKIT)
docker buildx build --build-arg PYTHON_VERSION=$(PYTHON_VERSION) \
--build-arg POETRY_EXTRA_PACKAGES=$(POETRY_EXTRA_PACKAGES) \
--build-arg POETRY_DEPENDENCIES=$(POETRY_DEPENDENCIES) \
--progress=$(DOCKER_BUILD_PROGRESS) \
$(BUILD_FLAGS) -f docker/Dockerfile -t $(DOCKER_IMAGE_NAME):$(GIT_HASH) .
else
docker build --build-arg PYTHON_VERSION=$(PYTHON_VERSION) \
--build-arg POETRY_EXTRA_PACKAGES=$(POETRY_EXTRA_PACKAGES) \
--build-arg POETRY_DEPENDENCIES=$(POETRY_DEPENDENCIES) \
$(BUILD_FLAGS) -f docker/Dockerfile -t $(DOCKER_IMAGE_NAME):$(GIT_HASH) .
endif
docker tag $(DOCKER_IMAGE_NAME):$(GIT_HASH) $(DOCKER_IMAGE_NAME):latest
@touch $@ # this prevents docker from rebuilding dependencies that have not
@ # changed. Remove the file `docker/docker.build` to force a rebuild.
docker.force_build: $(DOCKER_SRC)
@rm -f docker.build
@$(MAKE) docker.build BUILD_FLAGS=--no-cache
docker.clean:
docker rmi $(DOCKER_IMAGE_NAME):$(GIT_HASH) $(DOCKER_IMAGE_NAME):latest
docker.test: docker.build
docker run --rm -it -u lchain -v $(ROOTDIR):/src $(DOCKER_IMAGE_NAME):$(GIT_HASH) \
pytest /src/tests/unit_tests
# this assumes that the docker image has been built
docker.lint: docker.lint.mypy docker.lint.black docker.lint.isort \
docker.lint.flake
# these can run in parallel with -j[njobs]
docker.lint.mypy:
@$(DOCKER_LINT_CMD) mypy /src
@printf "\t%s\n" "mypy ... "
docker.lint.black:
@$(DOCKER_LINT_CMD) black /src --check
@printf "\t%s\n" "black ... "
docker.lint.isort:
@$(DOCKER_LINT_CMD) isort /src --check
@printf "\t%s\n" "isort ... "
docker.lint.flake:
@$(DOCKER_LINT_CMD) flake8 /src
@printf "\t%s\n" "flake8 ... "

View File

@ -1,10 +0,0 @@
#!/usr/bin/env bash
export PATH=$HOME/.local/bin:$PATH
if [ -z "$1" ]; then
cat /etc/motd
exec /bin/bash
fi
exec "$@"

View File

@ -1,8 +0,0 @@
All dependencies have been installed in the current shell. There is no
virtualenv or a need for `poetry` inside the container.
Running the command `make docker.run` at the root directory of the project will
build the container the first time. On the next runs it will use the cached
image. A rebuild will happen when changes are made to the source code.
Your local source directory has been mounted to the /src directory.

View File

@ -1,17 +0,0 @@
version: "3.7"
services:
langchain:
hostname: langchain
image: langchain/dev:latest
build:
context: ../
dockerfile: docker/Dockerfile
args:
PYTHON_VERSION: ${PYTHON_VERSION}
POETRY_EXTRA_PACKAGES: ${POETRY_EXTRA_PACKAGES}
POETRY_DEPENDENCIES: ${POETRY_DEPENDENCIES}
restart: unless-stopped
ports:
- 127.0.0.1:8888:8888

View File

@ -3,7 +3,7 @@
# You can set these variables from the command line, and also
# from the environment for the first two.
SPHINXOPTS ?=
SPHINXOPTS ?=
SPHINXBUILD ?= sphinx-build
SPHINXAUTOBUILD ?= sphinx-autobuild
SOURCEDIR = .

Binary file not shown (removed image, 235 KiB)

Binary file not shown (removed image, 148 KiB)

View File

@ -1,13 +0,0 @@
pre {
white-space: break-spaces;
}
@media (min-width: 1200px) {
.container,
.container-lg,
.container-md,
.container-sm,
.container-xl {
max-width: 2560px !important;
}
}

View File

@ -15,21 +15,16 @@
# import sys
# sys.path.insert(0, os.path.abspath('.'))
import toml
with open("../pyproject.toml") as f:
data = toml.load(f)
import langchain
# -- Project information -----------------------------------------------------
project = "🦜🔗 LangChain"
project = "LangChain"
copyright = "2022, Harrison Chase"
author = "Harrison Chase"
version = data["tool"]["poetry"]["version"]
release = version
html_title = project + " " + version
version = langchain.__version__
release = langchain.__version__
# -- General configuration ---------------------------------------------------
@ -44,11 +39,11 @@ extensions = [
"sphinx.ext.napoleon",
"sphinx.ext.viewcode",
"sphinxcontrib.autodoc_pydantic",
"myst_nb",
"myst_parser",
"nbsphinx",
"sphinx_panels",
"IPython.sphinxext.ipython_console_highlighting",
]
source_suffix = [".ipynb", ".html", ".md", ".rst"]
autodoc_pydantic_model_show_json = False
autodoc_pydantic_field_list_validators = False
@ -75,13 +70,8 @@ exclude_patterns = ["_build", "Thumbs.db", ".DS_Store"]
# The theme to use for HTML and HTML Help pages. See the documentation for
# a list of builtin themes.
#
html_theme = "sphinx_book_theme"
html_theme_options = {
"path_to_docs": "docs",
"repository_url": "https://github.com/hwchase17/langchain",
"use_repository_button": True,
}
html_theme = "sphinx_rtd_theme"
# html_theme = "sphinx_typlog_theme"
html_context = {
"display_github": True, # Integrate GitHub
@ -94,12 +84,4 @@ html_context = {
# Add any paths that contain custom static files (such as style sheets) here,
# relative to this directory. They are copied after the builtin static files,
# so a file named "default.css" will overwrite the builtin "default.css".
html_static_path = ["_static"]
# These paths are either relative to html_static_path
# or fully qualified paths (eg. https://...)
html_css_files = [
"css/custom.css",
]
nb_execution_mode = "off"
myst_enable_extensions = ["colon_fence"]
html_static_path: list = []

View File

@ -1,39 +0,0 @@
# Deployments
So you've made a really cool chain - now what? How do you deploy it and make it easily sharable with the world?
This section covers several options for that.
Note that these are meant as quick deployment options for prototypes and demos, and not for production systems.
If you are looking for help with deployment of a production system, please contact us directly.
What follows is a list of template GitHub repositories that are intended to be
very easy to fork and modify to use your chain.
This is far from an exhaustive list of options, and we are EXTREMELY open to contributions here.
## [Streamlit](https://github.com/hwchase17/langchain-streamlit-template)
This repo serves as a template for how to deploy a LangChain with Streamlit.
It implements a chatbot interface.
It also contains instructions for how to deploy this app on the Streamlit platform.
## [Gradio (on Hugging Face)](https://github.com/hwchase17/langchain-gradio-template)
This repo serves as a template for how to deploy a LangChain with Gradio.
It implements a chatbot interface, with a "Bring-Your-Own-Token" approach (nice for not racking up big bills).
It also contains instructions for how to deploy this app on the Hugging Face platform.
This is heavily influenced by James Weaver's [excellent examples](https://huggingface.co/JavaFXpert).
## [Beam](https://github.com/slai-labs/get-beam/tree/main/examples/langchain-question-answering)
This repo serves as a template for how to deploy a LangChain with [Beam](https://beam.cloud).
It implements a Question Answering app and contains instructions for deploying the app as a serverless REST API.
## [Vercel](https://github.com/homanp/vercel-langchain)
A minimal example on how to run LangChain on Vercel using Flask.
## [Steamship](https://github.com/steamship-core/steamship-langchain/)
This repository contains LangChain adapters for Steamship, enabling LangChain developers to rapidly deploy their apps on Steamship.
This includes: production-ready endpoints, horizontal scaling across dependencies, persistent storage of app state, multi-tenancy support, etc.

View File

@ -1,10 +0,0 @@
LangChain Ecosystem
===================
Guides for how other companies/products can be used with LangChain
.. toctree::
:maxdepth: 1
:glob:
ecosystem/*

View File

@ -1,16 +0,0 @@
# AI21 Labs
This page covers how to use the AI21 ecosystem within LangChain.
It is broken into two parts: installation and setup, and then references to specific AI21 wrappers.
## Installation and Setup
- Get an AI21 api key and set it as an environment variable (`AI21_API_KEY`)
## Wrappers
### LLM
There exists an AI21 LLM wrapper, which you can access with
```python
from langchain.llms import AI21
```
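A minimal usage sketch, assuming `AI21_API_KEY` is set as described above:
```python
from langchain.llms import AI21

# The wrapper reads AI21_API_KEY from the environment by default.
llm = AI21()
print(llm("Write a tagline for an ice cream shop."))
```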

View File

@ -1,25 +0,0 @@
# AtlasDB
This page covers how to use Nomic's Atlas ecosystem within LangChain.
It is broken into two parts: installation and setup, and then references to specific Atlas wrappers.
## Installation and Setup
- Install the Python package with `pip install nomic`
- Nomic is also included in LangChain's Poetry extras: `poetry install -E all`
## Wrappers
### VectorStore
There exists a wrapper around the Atlas neural database, allowing you to use it as a vectorstore.
This vectorstore also gives you full access to the underlying AtlasProject object, which will allow you to use the full range of Atlas map interactions, such as bulk tagging and automatic topic modeling.
Please see [the Nomic docs](https://docs.nomic.ai/atlas_api.html) for more detailed information.
To import this vectorstore:
```python
from langchain.vectorstores import AtlasDB
```
For a more detailed walkthrough of the AtlasDB wrapper, see [this notebook](../modules/indexes/examples/vectorstores.ipynb)

View File

@ -1,79 +0,0 @@
# Banana
This page covers how to use the Banana ecosystem within LangChain.
It is broken into two parts: installation and setup, and then references to specific Banana wrappers.
## Installation and Setup
- Install with `pip3 install banana-dev`
- Get a Banana api key and set it as an environment variable (`BANANA_API_KEY`)
## Define your Banana Template
If you want to use an available language model template you can find one [here](https://app.banana.dev/templates/conceptofmind/serverless-template-palmyra-base).
This template uses the Palmyra-Base model by [Writer](https://writer.com/product/api/).
You can check out an example Banana repository [here](https://github.com/conceptofmind/serverless-template-palmyra-base).
## Build the Banana app
Banana Apps must include the "output" key in the return json.
There is a rigid response structure.
```python
# Return the results as a dictionary
result = {'output': result}
```
An example inference function would be:
```python
def inference(model_inputs:dict) -> dict:
global model
global tokenizer
# Parse out your arguments
prompt = model_inputs.get('prompt', None)
if prompt is None:
return {'message': "No prompt provided"}
# Run the model
input_ids = tokenizer.encode(prompt, return_tensors='pt').cuda()
output = model.generate(
input_ids,
max_length=100,
do_sample=True,
top_k=50,
top_p=0.95,
num_return_sequences=1,
temperature=0.9,
early_stopping=True,
no_repeat_ngram_size=3,
num_beams=5,
length_penalty=1.5,
repetition_penalty=1.5,
bad_words_ids=[[tokenizer.encode(' ', add_prefix_space=True)[0]]]
)
result = tokenizer.decode(output[0], skip_special_tokens=True)
# Return the results as a dictionary
result = {'output': result}
return result
```
You can find a full example of a Banana app [here](https://github.com/conceptofmind/serverless-template-palmyra-base/blob/main/app.py).
## Wrappers
### LLM
There exists a Banana LLM wrapper, which you can access with
```python
from langchain.llms import Banana
```
You need to provide a model key located in the dashboard:
```python
llm = Banana(model_key="YOUR_MODEL_KEY")
```

View File

@ -1,17 +0,0 @@
# CerebriumAI
This page covers how to use the CerebriumAI ecosystem within LangChain.
It is broken into two parts: installation and setup, and then references to specific CerebriumAI wrappers.
## Installation and Setup
- Install with `pip install cerebrium`
- Get a CerebriumAI api key and set it as an environment variable (`CEREBRIUMAI_API_KEY`)
## Wrappers
### LLM
There exists a CerebriumAI LLM wrapper, which you can access with
```python
from langchain.llms import CerebriumAI
```

View File

@ -1,20 +0,0 @@
# Chroma
This page covers how to use the Chroma ecosystem within LangChain.
It is broken into two parts: installation and setup, and then references to specific Chroma wrappers.
## Installation and Setup
- Install the Python package with `pip install chromadb`
## Wrappers
### VectorStore
There exists a wrapper around Chroma vector databases, allowing you to use it as a vectorstore,
whether for semantic search or example selection.
To import this vectorstore:
```python
from langchain.vectorstores import Chroma
```
For a more detailed walkthrough of the Chroma wrapper, see [this notebook](../modules/indexes/examples/vectorstores.ipynb)
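A minimal usage sketch, assuming OpenAI embeddings are available via `OPENAI_API_KEY` (any embeddings class could be substituted):
```python
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import Chroma

texts = [
    "LangChain helps compose LLM calls into chains.",
    "Chroma is an open-source embedding database.",
]

# Build an in-memory Chroma collection from raw texts and run a similarity search.
db = Chroma.from_texts(texts, OpenAIEmbeddings())
docs = db.similarity_search("What is Chroma?", k=1)
print(docs[0].page_content)
```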

View File

@ -1,25 +0,0 @@
# Cohere
This page covers how to use the Cohere ecosystem within LangChain.
It is broken into two parts: installation and setup, and then references to specific Cohere wrappers.
## Installation and Setup
- Install the Python SDK with `pip install cohere`
- Get a Cohere api key and set it as an environment variable (`COHERE_API_KEY`)
## Wrappers
### LLM
There exists a Cohere LLM wrapper, which you can access with
```python
from langchain.llms import Cohere
```
### Embeddings
There exists a Cohere Embeddings wrapper, which you can access with
```python
from langchain.embeddings import CohereEmbeddings
```
For a more detailed walkthrough of this, see [this notebook](../modules/indexes/examples/embeddings.ipynb)
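A minimal usage sketch of both wrappers, assuming `COHERE_API_KEY` is set:
```python
from langchain.llms import Cohere
from langchain.embeddings import CohereEmbeddings

# LLM wrapper: generate a completion with a single call.
llm = Cohere()
print(llm("Say hello in French."))

# Embeddings wrapper: embed a query string into a vector.
embeddings = CohereEmbeddings()
vector = embeddings.embed_query("Hello, world!")
print(len(vector))
```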

View File

@ -1,17 +0,0 @@
# DeepInfra
This page covers how to use the DeepInfra ecosystem within LangChain.
It is broken into two parts: installation and setup, and then references to specific DeepInfra wrappers.
## Installation and Setup
- Get your DeepInfra api key from this link [here](https://deepinfra.com/).
- Get a DeepInfra api key and set it as an environment variable (`DEEPINFRA_API_TOKEN`)
## Wrappers
### LLM
There exists a DeepInfra LLM wrapper, which you can access with
```python
from langchain.llms import DeepInfra
```

View File

@ -1,25 +0,0 @@
# Deep Lake
This page covers how to use the Deep Lake ecosystem within LangChain.
It is broken into two parts: installation and setup, and then references to specific Deep Lake wrappers. For more information:
1. Here are the [whitepaper](https://www.deeplake.ai/whitepaper) and [academic paper](https://arxiv.org/pdf/2209.10785.pdf) for Deep Lake
2. Here is a set of additional resources available for review: [Deep Lake](https://github.com/activeloopai/deeplake), [Getting Started](https://docs.activeloop.ai/getting-started) and [Tutorials](https://docs.activeloop.ai/hub-tutorials)
## Installation and Setup
- Install the Python package with `pip install deeplake`
## Wrappers
### VectorStore
There exists a wrapper around Deep Lake, a data lake for Deep Learning applications, allowing you to use it as a vectorstore (for now), whether for semantic search or example selection.
To import this vectorstore:
```python
from langchain.vectorstores import DeepLake
```
For a more detailed walkthrough of the Deep Lake wrapper, see [this notebook](../modules/indexes/vectorstore_examples/deeplake.ipynb)

View File

@ -1,16 +0,0 @@
# ForefrontAI
This page covers how to use the ForefrontAI ecosystem within LangChain.
It is broken into two parts: installation and setup, and then references to specific ForefrontAI wrappers.
## Installation and Setup
- Get a ForefrontAI api key and set it as an environment variable (`FOREFRONTAI_API_KEY`)
## Wrappers
### LLM
There exists a ForefrontAI LLM wrapper, which you can access with
```python
from langchain.llms import ForefrontAI
```
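A minimal usage sketch; the endpoint URL below is a placeholder for the model endpoint from your ForefrontAI dashboard, and `FOREFRONTAI_API_KEY` is assumed to be set:
```python
from langchain.llms import ForefrontAI

# endpoint_url is a placeholder; use the model endpoint from your ForefrontAI dashboard.
llm = ForefrontAI(endpoint_url="YOUR_ENDPOINT_URL")
print(llm("Tell me a short joke."))
```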

View File

@ -1,32 +0,0 @@
# Google Search Wrapper
This page covers how to use the Google Search API within LangChain.
It is broken into two parts: installation and setup, and then references to the specific Google Search wrapper.
## Installation and Setup
- Install requirements with `pip install google-api-python-client`
- Set up a Custom Search Engine, following [these instructions](https://stackoverflow.com/questions/37083058/programmatically-searching-google-in-python-using-custom-search)
- Get an API Key and Custom Search Engine ID from the previous step, and set them as environment variables `GOOGLE_API_KEY` and `GOOGLE_CSE_ID` respectively
## Wrappers
### Utility
There exists a GoogleSearchAPIWrapper utility which wraps this API. To import this utility:
```python
from langchain.utilities import GoogleSearchAPIWrapper
```
For a more detailed walkthrough of this wrapper, see [this notebook](../modules/utils/examples/google_search.ipynb).
### Tool
You can also easily load this wrapper as a Tool (to use with an Agent).
You can do this with:
```python
from langchain.agents import load_tools
tools = load_tools(["google-search"])
```
For more information on this, see [this page](../modules/agents/tools.md)
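A minimal usage sketch of the utility, assuming `GOOGLE_API_KEY` and `GOOGLE_CSE_ID` are set as described above:
```python
from langchain.utilities import GoogleSearchAPIWrapper

# Reads GOOGLE_API_KEY and GOOGLE_CSE_ID from the environment.
search = GoogleSearchAPIWrapper()
print(search.run("What is LangChain?"))
```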

View File

@ -1,71 +0,0 @@
# Google Serper Wrapper
This page covers how to use the [Serper](https://serper.dev) Google Search API within LangChain. Serper is a low-cost Google Search API that can be used to add answer box, knowledge graph, and organic results data from Google Search.
It is broken into two parts: setup, and then references to the specific Google Serper wrapper.
## Setup
- Go to [serper.dev](https://serper.dev) to sign up for a free account
- Get the api key and set it as an environment variable (`SERPER_API_KEY`)
## Wrappers
### Utility
There exists a GoogleSerperAPIWrapper utility which wraps this API. To import this utility:
```python
from langchain.utilities import GoogleSerperAPIWrapper
```
You can use it as part of a Self Ask chain:
```python
from langchain.utilities import GoogleSerperAPIWrapper
from langchain.llms.openai import OpenAI
from langchain.agents import initialize_agent, Tool
import os
os.environ["SERPER_API_KEY"] = ""
os.environ['OPENAI_API_KEY'] = ""
llm = OpenAI(temperature=0)
search = GoogleSerperAPIWrapper()
tools = [
Tool(
name="Intermediate Answer",
func=search.run
)
]
self_ask_with_search = initialize_agent(tools, llm, agent="self-ask-with-search", verbose=True)
self_ask_with_search.run("What is the hometown of the reigning men's U.S. Open champion?")
```
#### Output
```
Entering new AgentExecutor chain...
Yes.
Follow up: Who is the reigning men's U.S. Open champion?
Intermediate answer: Current champions Carlos Alcaraz, 2022 men's singles champion.
Follow up: Where is Carlos Alcaraz from?
Intermediate answer: El Palmar, Spain
So the final answer is: El Palmar, Spain
> Finished chain.
'El Palmar, Spain'
```
For a more detailed walkthrough of this wrapper, see [this notebook](../modules/utils/examples/google_serper.ipynb).
### Tool
You can also easily load this wrapper as a Tool (to use with an Agent).
You can do this with:
```python
from langchain.agents import load_tools
tools = load_tools(["google-serper"])
```
For more information on this, see [this page](../modules/agents/tools.md)

View File

@ -1,23 +0,0 @@
# GooseAI
This page covers how to use the GooseAI ecosystem within LangChain.
It is broken into two parts: installation and setup, and then references to specific GooseAI wrappers.
## Installation and Setup
- Install the Python SDK with `pip install openai`
- Get your GooseAI api key from this link [here](https://goose.ai/).
- Set the environment variable (`GOOSEAI_API_KEY`).
```python
import os
os.environ["GOOSEAI_API_KEY"] = "YOUR_API_KEY"
```
## Wrappers
### LLM
There exists a GooseAI LLM wrapper, which you can access with:
```python
from langchain.llms import GooseAI
```
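A minimal usage sketch, assuming `GOOSEAI_API_KEY` is set as shown above:
```python
from langchain.llms import GooseAI

# The wrapper reads GOOSEAI_API_KEY from the environment.
llm = GooseAI()
print(llm("Write a one-line product description for a reusable water bottle."))
```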

View File

@ -1,38 +0,0 @@
# Graphsignal
This page covers how to use Graphsignal to trace and monitor LangChain.
## Installation and Setup
- Install the Python library with `pip install graphsignal`
- Create a free Graphsignal account [here](https://graphsignal.com)
- Get an API key and set it as an environment variable (`GRAPHSIGNAL_API_KEY`)
## Tracing and Monitoring
Graphsignal automatically instruments and starts tracing and monitoring chains. Traces, metrics and errors are then available in your [Graphsignal dashboard](https://app.graphsignal.com/). No prompts or other sensitive data are sent to Graphsignal cloud, only statistics and metadata.
Initialize the tracer by providing a deployment name:
```python
import graphsignal
graphsignal.configure(deployment='my-langchain-app-prod')
```
In order to trace full runs and see a breakdown by chains and tools, you can wrap the calling routine or use a decorator:
```python
with graphsignal.start_trace('my-chain'):
chain.run("some initial text")
```
Optionally, enable profiling to record function-level statistics for each trace.
```python
with graphsignal.start_trace(
'my-chain', options=graphsignal.TraceOptions(enable_profiling=True)):
chain.run("some initial text")
```
See the [Quick Start](https://graphsignal.com/docs/guides/quick-start/) guide for complete setup instructions.

View File

@ -1,19 +0,0 @@
# Hazy Research
This page covers how to use the Hazy Research ecosystem within LangChain.
It is broken into two parts: installation and setup, and then references to specific Hazy Research wrappers.
## Installation and Setup
- To use the `manifest`, install it with `pip install manifest-ml`
## Wrappers
### LLM
There exists an LLM wrapper around Hazy Research's `manifest` library.
`manifest` is a python library which is itself a wrapper around many model providers, and adds in caching, history, and more.
To use this wrapper:
```python
from langchain.llms.manifest import ManifestWrapper
```

View File

@ -1,53 +0,0 @@
# Helicone
This page covers how to use the [Helicone](https://helicone.ai) within LangChain.
## What is Helicone?
Helicone is an [open source](https://github.com/Helicone/helicone) observability platform that proxies your OpenAI traffic and provides you with key insights into your spend, latency, and usage.
![Helicone](../_static/HeliconeDashboard.png)
## Quick start
With your LangChain environment you can just add the following parameter.
```bash
export OPENAI_API_BASE="https://oai.hconeai.com/v1"
```
Now head over to [helicone.ai](https://helicone.ai/onboarding?step=2) to create your account, and add your OpenAI API key within our dashboard to view your logs.
![Helicone](../_static/HeliconeKeys.png)
## How to enable Helicone caching
```python
from langchain.llms import OpenAI
import openai
openai.api_base = "https://oai.hconeai.com/v1"
llm = OpenAI(temperature=0.9, headers={"Helicone-Cache-Enabled": "true"})
text = "What is a helicone?"
print(llm(text))
```
[Helicone caching docs](https://docs.helicone.ai/advanced-usage/caching)
## How to use Helicone custom properties
```python
from langchain.llms import OpenAI
import openai
openai.api_base = "https://oai.hconeai.com/v1"
llm = OpenAI(temperature=0.9, headers={
"Helicone-Property-Session": "24",
"Helicone-Property-Conversation": "support_issue_2",
"Helicone-Property-App": "mobile",
})
text = "What is a helicone?"
print(llm(text))
```
[Helicone property docs](https://docs.helicone.ai/advanced-usage/custom-properties)

View File

@ -1,69 +0,0 @@
# Hugging Face
This page covers how to use the Hugging Face ecosystem (including the [Hugging Face Hub](https://huggingface.co)) within LangChain.
It is broken into two parts: installation and setup, and then references to specific Hugging Face wrappers.
## Installation and Setup
If you want to work with the Hugging Face Hub:
- Install the Hub client library with `pip install huggingface_hub`
- Create a Hugging Face account (it's free!)
- Create an [access token](https://huggingface.co/docs/hub/security-tokens) and set it as an environment variable (`HUGGINGFACEHUB_API_TOKEN`)
If you want to work with the Hugging Face Python libraries:
- Install `pip install transformers` for working with models and tokenizers
- Install `pip install datasets` for working with datasets
## Wrappers
### LLM
There exist two Hugging Face LLM wrappers, one for a local pipeline and one for a model hosted on Hugging Face Hub.
Note that these wrappers only work for models that support the following tasks: [`text2text-generation`](https://huggingface.co/models?library=transformers&pipeline_tag=text2text-generation&sort=downloads), [`text-generation`](https://huggingface.co/models?library=transformers&pipeline_tag=text-classification&sort=downloads)
To use the local pipeline wrapper:
```python
from langchain.llms import HuggingFacePipeline
```
To use the wrapper for a model hosted on Hugging Face Hub:
```python
from langchain.llms import HuggingFaceHub
```
For a more detailed walkthrough of the Hugging Face Hub wrapper, see [this notebook](../modules/llms/integrations/huggingface_hub.ipynb)
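A minimal usage sketch of the Hub wrapper; the repo id is just an example, and `HUGGINGFACEHUB_API_TOKEN` is assumed to be set:
```python
from langchain.llms import HuggingFaceHub

# repo_id selects a hosted model; flan-t5 is used here purely as an example.
llm = HuggingFaceHub(
    repo_id="google/flan-t5-xl",
    model_kwargs={"temperature": 0.5, "max_length": 64},
)
print(llm("Translate English to German: How old are you?"))
```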
### Embeddings
There exist two Hugging Face Embeddings wrappers, one for a local model and one for a model hosted on Hugging Face Hub.
Note that these wrappers only work for [`sentence-transformers` models](https://huggingface.co/models?library=sentence-transformers&sort=downloads).
To use the local pipeline wrapper:
```python
from langchain.embeddings import HuggingFaceEmbeddings
```
To use the wrapper for a model hosted on Hugging Face Hub:
```python
from langchain.embeddings import HuggingFaceHubEmbeddings
```
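For example, a minimal sketch with the local wrapper (the default `sentence-transformers` model is downloaded on first use; the texts are illustrative):
```python
from langchain.embeddings import HuggingFaceEmbeddings

embeddings = HuggingFaceEmbeddings()

# Embed a single query and a small batch of documents.
query_vector = embeddings.embed_query("What did the president say about Ketanji Brown Jackson?")
doc_vectors = embeddings.embed_documents(["This is a document.", "This is another document."])
```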
For a more detailed walkthrough of this, see [this notebook](../modules/indexes/examples/embeddings.ipynb)
### Tokenizer
There are several places you can use tokenizers available through the `transformers` package.
By default, it is used to count tokens for all LLMs.
You can also use it to count tokens when splitting documents with
```python
from langchain.text_splitter import CharacterTextSplitter
CharacterTextSplitter.from_huggingface_tokenizer(...)
```
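For example, a minimal sketch (mirroring the tokenizer notebook linked below) that measures chunk sizes with a GPT-2 tokenizer; the sample text and chunk sizes are illustrative:
```python
from transformers import GPT2TokenizerFast
from langchain.text_splitter import CharacterTextSplitter

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")

# Chunk sizes are counted in GPT-2 tokens rather than characters.
text_splitter = CharacterTextSplitter.from_huggingface_tokenizer(tokenizer, chunk_size=1000, chunk_overlap=0)
texts = text_splitter.split_text("LangChain makes it easy to compose LLM applications. " * 200)
```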
For a more detailed walkthrough of this, see [this notebook](../modules/indexes/examples/textsplitter.ipynb)
### Datasets
The Hugging Face Hub has lots of great [datasets](https://huggingface.co/datasets) that can be used to evaluate your LLM chains.
For a detailed walkthrough of how to use them to do so, see [this notebook](../use_cases/evaluation/huggingface_datasets.ipynb)

View File

@ -1,66 +0,0 @@
# Modal
This page covers how to use the Modal ecosystem within LangChain.
It is broken into two parts: installation and setup, and then references to specific Modal wrappers.
## Installation and Setup
- Install with `pip install modal-client`
- Run `modal token new`
## Define your Modal Functions and Webhooks
You must include a prompt in the request, and the response must follow a rigid structure:
```python
class Item(BaseModel):
prompt: str
@stub.webhook(method="POST")
def my_webhook(item: Item):
return {"prompt": my_function.call(item.prompt)}
```
An example with GPT2:
```python
from pydantic import BaseModel
import modal
stub = modal.Stub("example-get-started")
volume = modal.SharedVolume().persist("gpt2_model_vol")
CACHE_PATH = "/root/model_cache"
@stub.function(
gpu="any",
image=modal.Image.debian_slim().pip_install(
"tokenizers", "transformers", "torch", "accelerate"
),
shared_volumes={CACHE_PATH: volume},
retries=3,
)
def run_gpt2(text: str):
from transformers import GPT2Tokenizer, GPT2LMHeadModel
tokenizer = GPT2Tokenizer.from_pretrained('gpt2')
model = GPT2LMHeadModel.from_pretrained('gpt2')
encoded_input = tokenizer(text, return_tensors='pt').input_ids
output = model.generate(encoded_input, max_length=50, do_sample=True)
return tokenizer.decode(output[0], skip_special_tokens=True)
class Item(BaseModel):
prompt: str
@stub.webhook(method="POST")
def get_text(item: Item):
return {"prompt": run_gpt2.call(item.prompt)}
```
## Wrappers
### LLM
There exists a Modal LLM wrapper, which you can access with
```python
from langchain.llms import Modal
```
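A minimal, hedged sketch of using it, assuming you have deployed a webhook like the one above; the URL is a placeholder for your own deployment, and `endpoint_url` is the parameter this wrapper is assumed to take:
```python
from langchain.llms import Modal

# Placeholder URL: point this at the webhook you deployed above.
llm = Modal(endpoint_url="https://YOUR-MODAL-WEBHOOK-URL")
print(llm("What is the capital of France?"))
```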

View File

@ -1,17 +0,0 @@
# NLPCloud
This page covers how to use the NLPCloud ecosystem within LangChain.
It is broken into two parts: installation and setup, and then references to specific NLPCloud wrappers.
## Installation and Setup
- Install the Python SDK with `pip install nlpcloud`
- Get an NLPCloud api key and set it as an environment variable (`NLPCLOUD_API_KEY`)
## Wrappers
### LLM
There exists an NLPCloud LLM wrapper, which you can access with
```python
from langchain.llms import NLPCloud
```
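A minimal sketch of using it, assuming `NLPCLOUD_API_KEY` is set (the wrapper falls back to its default model unless you pass `model_name`):
```python
from langchain.llms import NLPCloud

llm = NLPCloud()  # uses the wrapper's default model
print(llm("Tell me a joke about llamas."))
```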

View File

@ -1,55 +0,0 @@
# OpenAI
This page covers how to use the OpenAI ecosystem within LangChain.
It is broken into two parts: installation and setup, and then references to specific OpenAI wrappers.
## Installation and Setup
- Install the Python SDK with `pip install openai`
- Get an OpenAI api key and set it as an environment variable (`OPENAI_API_KEY`)
- If you want to use OpenAI's tokenizer (only available for Python 3.9+), install it with `pip install tiktoken`
## Wrappers
### LLM
There exists an OpenAI LLM wrapper, which you can access with
```python
from langchain.llms import OpenAI
```
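For example, a minimal sketch assuming `OPENAI_API_KEY` is set in your environment:
```python
from langchain.llms import OpenAI

llm = OpenAI(temperature=0.9)
print(llm("What would be a good company name for a company that makes colorful socks?"))
```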
If you are using a model hosted on Azure, you should use a different wrapper for that:
```python
from langchain.llms import AzureOpenAI
```
For a more detailed walkthrough of the Azure wrapper, see [this notebook](../modules/llms/integrations/azure_openai_example.ipynb)
### Embeddings
There exists an OpenAI Embeddings wrapper, which you can access with
```python
from langchain.embeddings import OpenAIEmbeddings
```
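For example, a minimal sketch (again assuming `OPENAI_API_KEY` is set):
```python
from langchain.embeddings import OpenAIEmbeddings

embeddings = OpenAIEmbeddings()
query_vector = embeddings.embed_query("Hello world")
doc_vectors = embeddings.embed_documents(["Hello world", "Goodbye world"])
```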
For a more detailed walkthrough of this, see [this notebook](../modules/indexes/examples/embeddings.ipynb)
### Tokenizer
There are several places you can use the `tiktoken` tokenizer. By default, it is used to count tokens
for OpenAI LLMs.
You can also use it to count tokens when splitting documents with
```python
from langchain.text_splitter import CharacterTextSplitter
CharacterTextSplitter.from_tiktoken_encoder(...)
```
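For example, a minimal sketch where the chunk sizes below are illustrative and measured in `tiktoken` tokens:
```python
from langchain.text_splitter import CharacterTextSplitter

text_splitter = CharacterTextSplitter.from_tiktoken_encoder(chunk_size=100, chunk_overlap=0)
texts = text_splitter.split_text("LangChain makes it easy to compose LLM applications. " * 50)
```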
For a more detailed walkthrough of this, see [this notebook](../modules/indexes/examples/textsplitter.ipynb)
### Moderation
You can also access the OpenAI content moderation endpoint with
```python
from langchain.chains import OpenAIModerationChain
```
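A minimal sketch, assuming `OPENAI_API_KEY` is set; by default the chain replaces flagged text with a warning message:
```python
from langchain.chains import OpenAIModerationChain

moderation_chain = OpenAIModerationChain()
print(moderation_chain.run("This is perfectly innocuous text."))
```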
For a more detailed walkthrough of this, see [this notebook](../modules/chains/examples/moderation.ipynb)

View File

@ -1,21 +0,0 @@
# OpenSearch
This page covers how to use the OpenSearch ecosystem within LangChain.
It is broken into two parts: installation and setup, and then references to specific OpenSearch wrappers.
## Installation and Setup
- Install the Python package with `pip install opensearch-py`
## Wrappers
### VectorStore
There exists a wrapper around OpenSearch vector databases, allowing you to use it as a vectorstore
for semantic search, either using approximate vector search powered by the Lucene, NMSLIB, and Faiss engines,
or using Painless scripting and script scoring functions for brute-force vector search.
To import this vectorstore:
```python
from langchain.vectorstores import OpenSearchVectorSearch
```
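A minimal, hedged sketch, assuming a local OpenSearch instance at `http://localhost:9200` and an `OPENAI_API_KEY` for the embeddings; the sample texts are illustrative:
```python
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import OpenSearchVectorSearch

texts = ["foo", "bar", "baz"]
docsearch = OpenSearchVectorSearch.from_texts(texts, OpenAIEmbeddings(), opensearch_url="http://localhost:9200")
docs = docsearch.similarity_search("foo")
```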
For a more detailed walkthrough of the OpenSearch wrapper, see [this notebook](../modules/indexes/vectorstore_examples/opensearch.ipynb)

View File

@ -1,17 +0,0 @@
# Petals
This page covers how to use the Petals ecosystem within LangChain.
It is broken into two parts: installation and setup, and then references to specific Petals wrappers.
## Installation and Setup
- Install with `pip install petals`
- Get a Hugging Face api key and set it as an environment variable (`HUGGINGFACE_API_KEY`)
## Wrappers
### LLM
There exists a Petals LLM wrapper, which you can access with
```python
from langchain.llms import Petals
```
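A minimal, hedged sketch, assuming `HUGGINGFACE_API_KEY` is set and the public Petals swarm is reachable; `model_name` below is illustrative:
```python
from langchain.llms import Petals

llm = Petals(model_name="bigscience/bloom-petals")
print(llm("Tell me a joke about llamas."))
```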

View File

@ -1,20 +0,0 @@
# Pinecone
This page covers how to use the Pinecone ecosystem within LangChain.
It is broken into two parts: installation and setup, and then references to specific Pinecone wrappers.
## Installation and Setup
- Install the Python SDK with `pip install pinecone-client`
## Wrappers
### VectorStore
There exists a wrapper around Pinecone indexes, allowing you to use it as a vectorstore,
whether for semantic search or example selection.
To import this vectorstore:
```python
from langchain.vectorstores import Pinecone
```
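A minimal, hedged sketch; the API key, environment, and index name are placeholders for your own Pinecone setup, and the embeddings assume `OPENAI_API_KEY` is set:
```python
import pinecone
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import Pinecone

pinecone.init(api_key="YOUR_PINECONE_API_KEY", environment="YOUR_PINECONE_ENVIRONMENT")

docsearch = Pinecone.from_texts(["foo", "bar", "baz"], OpenAIEmbeddings(), index_name="langchain-demo")
docs = docsearch.similarity_search("foo")
```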
For a more detailed walkthrough of the Pinecone wrapper, see [this notebook](../modules/indexes/examples/vectorstores.ipynb)

View File

@ -1,31 +0,0 @@
# PromptLayer
This page covers how to use [PromptLayer](https://www.promptlayer.com) within LangChain.
It is broken into two parts: installation and setup, and then references to specific PromptLayer wrappers.
## Installation and Setup
If you want to work with PromptLayer:
- Install the promptlayer python library `pip install promptlayer`
- Create a PromptLayer account
- Create an api token and set it as an environment variable (`PROMPTLAYER_API_KEY`)
## Wrappers
### LLM
There exists a PromptLayer OpenAI LLM wrapper, which you can access with
```python
from langchain.llms import PromptLayerOpenAI
```
To tag your requests, use the argument `pl_tags` when instantiating the LLM
```python
from langchain.llms import PromptLayerOpenAI
llm = PromptLayerOpenAI(pl_tags=["langchain-requests", "chatbot"])
```
This LLM is identical to the [OpenAI LLM](./openai), except that
- all your requests will be logged to your PromptLayer account
- you can add `pl_tags` when instantiating it to tag your requests on PromptLayer

View File

@ -1,31 +0,0 @@
# Runhouse
This page covers how to use the [Runhouse](https://github.com/run-house/runhouse) ecosystem within LangChain.
It is broken into three parts: installation and setup, LLMs, and Embeddings.
## Installation and Setup
- Install the Python SDK with `pip install runhouse`
- If you'd like to use an on-demand cluster, check your cloud credentials with `sky check`
## Self-hosted LLMs
For a basic self-hosted LLM, you can use the `SelfHostedHuggingFaceLLM` class. For more
custom LLMs, you can use the `SelfHostedPipeline` parent class.
```python
from langchain.llms import SelfHostedPipeline, SelfHostedHuggingFaceLLM
```
For a more detailed walkthrough of the Self-hosted LLMs, see [this notebook](../modules/llms/integrations/self_hosted_examples.ipynb)
## Self-hosted Embeddings
There are several ways to use self-hosted embeddings with LangChain via Runhouse.
For a basic self-hosted embedding from a Hugging Face Transformers model, you can use
the `SelfHostedEmbedding` class.
```python
from langchain.llms import SelfHostedPipeline, SelfHostedHuggingFaceLLM
```
For a more detailed walkthrough of the Self-hosted Embeddings, see [this notebook](../modules/indexes/examples/embeddings.ipynb)

View File

@ -1,35 +0,0 @@
# SearxNG Search API
This page covers how to use the SearxNG search API within LangChain.
It is broken into two parts: installation and setup, and then references to the specific SearxNG API wrapper.
## Installation and Setup
- You can find a list of public SearxNG instances [here](https://searx.space/).
- It is recommended to use a self-hosted instance to avoid abuse of the public instances. Also note that public instances often have a limit on the number of requests.
- To run a self-hosted instance see [this page](https://searxng.github.io/searxng/admin/installation.html) for more information.
- To use the tool, you need to provide the SearxNG host URL by either:
1. passing the named parameter `searx_host` when creating the instance.
2. exporting the environment variable `SEARXNG_HOST`.
## Wrappers
### Utility
You can use the wrapper to get results from a SearxNG instance.
```python
from langchain.utilities import SearxSearchWrapper
```
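For example, a minimal sketch where the host URL is a placeholder for your own (or a public) SearxNG instance:
```python
from langchain.utilities import SearxSearchWrapper

search = SearxSearchWrapper(searx_host="http://127.0.0.1:8888")
print(search.run("What is the capital of France?"))
```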
### Tool
You can also easily load this wrapper as a Tool (to use with an Agent).
You can do this with:
```python
from langchain.agents import load_tools
tools = load_tools(["searx-search"], searx_host="https://searx.example.com")
```
For more information on this, see [this page](../modules/agents/tools.md)

View File

@ -1,31 +0,0 @@
# SerpAPI
This page covers how to use the SerpAPI search APIs within LangChain.
It is broken into two parts: installation and setup, and then references to the specific SerpAPI wrapper.
## Installation and Setup
- Install requirements with `pip install google-search-results`
- Get a SerpAPI api key and set it as an environment variable (`SERPAPI_API_KEY`)
## Wrappers
### Utility
There exists a SerpAPI utility which wraps this API. To import this utility:
```python
from langchain.utilities import SerpAPIWrapper
```
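For example, a minimal sketch assuming `SERPAPI_API_KEY` is set:
```python
from langchain.utilities import SerpAPIWrapper

search = SerpAPIWrapper()
print(search.run("How many people live in Canada?"))
```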
For a more detailed walkthrough of this wrapper, see [this notebook](../modules/utils/examples/serpapi.ipynb).
### Tool
You can also easily load this wrapper as a Tool (to use with an Agent).
You can do this with:
```python
from langchain.agents import load_tools
tools = load_tools(["serpapi"])
```
For more information on this, see [this page](../modules/agents/tools.md)

View File

@ -1,17 +0,0 @@
# StochasticAI
This page covers how to use the StochasticAI ecosystem within LangChain.
It is broken into two parts: installation and setup, and then references to specific StochasticAI wrappers.
## Installation and Setup
- Install with `pip install stochasticx`
- Get a StochasticAI api key and set it as an environment variable (`STOCHASTICAI_API_KEY`)
## Wrappers
### LLM
There exists a StochasticAI LLM wrapper, which you can access with
```python
from langchain.llms import StochasticAI
```

View File

@ -1,41 +0,0 @@
# Unstructured
This page covers how to use the [`unstructured`](https://github.com/Unstructured-IO/unstructured)
ecosystem within LangChain. The `unstructured` package from
[Unstructured.IO](https://www.unstructured.io/) extracts clean text from raw source documents like
PDFs and Word documents.
This page is broken into two parts: installation and setup, and then references to specific
`unstructured` wrappers.
## Installation and Setup
- Install the Python SDK with `pip install "unstructured[local-inference]"`
- Install the following system dependencies if they are not already available on your system.
Depending on what document types you're parsing, you may not need all of these.
- `libmagic-dev`
- `poppler-utils`
- `tesseract-ocr`
- `libreoffice`
- If you are parsing PDFs, run the following to install the `detectron2` model, which
`unstructured` uses for layout detection:
- `pip install "detectron2@git+https://github.com/facebookresearch/detectron2.git@v0.6#egg=detectron2"`
## Wrappers
### Data Loaders
The primary `unstructured` wrappers within `langchain` are data loaders. The following
shows how to use the most basic unstructured data loader. There are other file-specific
data loaders available in the `langchain.document_loaders` module.
```python
from langchain.document_loaders import UnstructuredFileLoader
loader = UnstructuredFileLoader("state_of_the_union.txt")
loader.load()
```
If you instantiate the loader with `UnstructuredFileLoader(mode="elements")`, the loader
will track additional metadata like the page number and text type (e.g. title, narrative text)
when that information is available.
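A minimal sketch of that mode, using the same sample document as above:
```python
from langchain.document_loaders import UnstructuredFileLoader

loader = UnstructuredFileLoader("state_of_the_union.txt", mode="elements")
docs = loader.load()
print(docs[0].metadata)  # e.g. page number and element category, when available
```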

View File

@ -1,33 +0,0 @@
# Weaviate
This page covers how to use the Weaviate ecosystem within LangChain.
## What is Weaviate?
**Weaviate in a nutshell:**
- Weaviate is an open-source vector search engine and database.
- Weaviate allows you to store JSON documents in a class property-like fashion while attaching machine learning vectors to these documents to represent them in vector space.
- Weaviate can be used stand-alone (aka bring your own vectors) or with a variety of modules that can do the vectorization for you and extend the core capabilities.
- Weaviate has a GraphQL-API to access your data easily.
- Weaviate aims to bring your vector search setup to production so you can query in mere milliseconds (check the [open source benchmarks](https://weaviate.io/developers/weaviate/current/benchmarks/) to see if Weaviate fits your use case).
- Get to know Weaviate in the [basics getting started guide](https://weaviate.io/developers/weaviate/current/core-knowledge/basics.html) in under five minutes.
**Weaviate in detail:**
Weaviate is a low-latency vector search engine with out-of-the-box support for different media types (text, images, etc.). It offers Semantic Search, Question-Answer Extraction, Classification, Customizable Models (PyTorch/TensorFlow/Keras), etc. Built from scratch in Go, Weaviate stores both objects and vectors, allowing for combining vector search with structured filtering and the fault tolerance of a cloud-native database. It is all accessible through GraphQL, REST, and various client-side programming languages.
## Installation and Setup
- Install the Python SDK with `pip install weaviate-client`
## Wrappers
### VectorStore
There exists a wrapper around Weaviate indexes, allowing you to use it as a vectorstore,
whether for semantic search or example selection.
To import this vectorstore:
```python
from langchain.vectorstores import Weaviate
```
For a more detailed walkthrough of the Weaviate wrapper, see [this notebook](../modules/indexes/examples/vectorstores.ipynb)

View File

@ -1,34 +0,0 @@
# Wolfram Alpha Wrapper
This page covers how to use the Wolfram Alpha API within LangChain.
It is broken into two parts: installation and setup, and then references to specific Wolfram Alpha wrappers.
## Installation and Setup
- Install requirements with `pip install wolframalpha`
- Go to Wolfram Alpha and sign up for a developer account [here](https://developer.wolframalpha.com/)
- Create an app and get your APP ID
- Set your APP ID as an environment variable `WOLFRAM_ALPHA_APPID`
## Wrappers
### Utility
There exists a WolframAlphaAPIWrapper utility which wraps this API. To import this utility:
```python
from langchain.utilities.wolfram_alpha import WolframAlphaAPIWrapper
```
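For example, a minimal sketch assuming `WOLFRAM_ALPHA_APPID` is set:
```python
from langchain.utilities.wolfram_alpha import WolframAlphaAPIWrapper

wolfram = WolframAlphaAPIWrapper()
print(wolfram.run("What is 2x + 5 = -3x + 7?"))
```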
For a more detailed walkthrough of this wrapper, see [this notebook](../modules/utils/examples/wolfram_alpha.ipynb).
### Tool
You can also easily load this wrapper as a Tool (to use with an Agent).
You can do this with:
```python
from langchain.agents import load_tools
tools = load_tools(["wolfram-alpha"])
```
For more information on this, see [this page](../modules/agents/tools.md)

View File

@ -1,16 +0,0 @@
# Writer
This page covers how to use the Writer ecosystem within LangChain.
It is broken into two parts: installation and setup, and then references to specific Writer wrappers.
## Installation and Setup
- Get a Writer api key and set it as an environment variable (`WRITER_API_KEY`)
## Wrappers
### LLM
There exists a Writer LLM wrapper, which you can access with
```python
from langchain.llms import Writer
```

11
docs/examples/agents.rst Normal file
View File

@ -0,0 +1,11 @@
Agents
======
The examples here are all end-to-end agents for specific applications.
.. toctree::
:maxdepth: 1
:glob:
:caption: Agents
agents/*

View File

@ -28,7 +28,7 @@
"\n",
"The first way to create a custom agent is to use an existing Agent class, but use a custom LLMChain. This is the simplest way to create a custom Agent. It is highly reccomended that you work with the `ZeroShotAgent`, as at the moment that is by far the most generalizable one. \n",
"\n",
"Most of the work in creating the custom LLMChain comes down to the prompt. Because we are using an existing agent class to parse the output, it is very important that the prompt say to produce text in that format. Additionally, we currently require an `agent_scratchpad` input variable to put notes on previous actions and observations. This should almost always be the final part of the prompt. However, besides those instructions, you can customize the prompt as you wish.\n",
"Most of the work in creating the custom LLMChain comes down to the prompt. Because we are using an existing agent class to parse the output, it is very important that the prompt say to produce text in that format. However, besides those instructions, you can customize the prompt as you wish.\n",
"\n",
"To ensure that the prompt contains the appropriate instructions, we will utilize a helper method on that class. The helper method for the `ZeroShotAgent` takes the following arguments:\n",
"\n",
@ -42,23 +42,23 @@
},
{
"cell_type": "code",
"execution_count": 23,
"execution_count": 1,
"id": "9af9734e",
"metadata": {},
"outputs": [],
"source": [
"from langchain.agents import ZeroShotAgent, Tool, AgentExecutor\n",
"from langchain import OpenAI, SerpAPIWrapper, LLMChain"
"from langchain.agents import ZeroShotAgent, Tool\n",
"from langchain import OpenAI, SerpAPIChain, LLMChain"
]
},
{
"cell_type": "code",
"execution_count": 24,
"execution_count": 2,
"id": "becda2a1",
"metadata": {},
"outputs": [],
"source": [
"search = SerpAPIWrapper()\n",
"search = SerpAPIChain()\n",
"tools = [\n",
" Tool(\n",
" name = \"Search\",\n",
@ -70,7 +70,7 @@
},
{
"cell_type": "code",
"execution_count": 25,
"execution_count": 3,
"id": "339b1bb8",
"metadata": {},
"outputs": [],
@ -78,14 +78,13 @@
"prefix = \"\"\"Answer the following questions as best you can, but speaking as a pirate might speak. You have access to the following tools:\"\"\"\n",
"suffix = \"\"\"Begin! Remember to speak as a pirate when giving your final answer. Use lots of \"Args\"\n",
"\n",
"Question: {input}\n",
"{agent_scratchpad}\"\"\"\n",
"Question: {input}\"\"\"\n",
"\n",
"prompt = ZeroShotAgent.create_prompt(\n",
" tools, \n",
" prefix=prefix, \n",
" suffix=suffix, \n",
" input_variables=[\"input\", \"agent_scratchpad\"]\n",
" input_variables=[\"input\"]\n",
")"
]
},
@ -99,7 +98,7 @@
},
{
"cell_type": "code",
"execution_count": 26,
"execution_count": 4,
"id": "e21d2098",
"metadata": {},
"outputs": [
@ -124,8 +123,7 @@
"\n",
"Begin! Remember to speak as a pirate when giving your final answer. Use lots of \"Args\"\n",
"\n",
"Question: {input}\n",
"{agent_scratchpad}\n"
"Question: {input}\n"
]
}
],
@ -133,19 +131,9 @@
"print(prompt.template)"
]
},
{
"cell_type": "markdown",
"id": "5e028e6d",
"metadata": {},
"source": [
"Note that we are able to feed agents a self-defined prompt template, i.e. not restricted to the prompt generated by the `create_prompt` function, assuming it meets the agent's requirements. \n",
"\n",
"For example, for `ZeroShotAgent`, we will need to ensure that it meets the following requirements. There should a string starting with \"Action:\" and a following string starting with \"Action Input:\", and both should be separated by a newline.\n"
]
},
{
"cell_type": "code",
"execution_count": 27,
"execution_count": 5,
"id": "9b1cc2a2",
"metadata": {},
"outputs": [],
@ -155,28 +143,17 @@
},
{
"cell_type": "code",
"execution_count": 28,
"execution_count": 6,
"id": "e4f5092f",
"metadata": {},
"outputs": [],
"source": [
"tool_names = [tool.name for tool in tools]\n",
"agent = ZeroShotAgent(llm_chain=llm_chain, allowed_tools=tool_names)"
"agent = ZeroShotAgent(llm_chain=llm_chain, tools=tools, verbose=True)"
]
},
{
"cell_type": "code",
"execution_count": 29,
"id": "490604e9",
"metadata": {},
"outputs": [],
"source": [
"agent_executor = AgentExecutor.from_agent_and_tools(agent=agent, tools=tools, verbose=True)"
]
},
{
"cell_type": "code",
"execution_count": 31,
"execution_count": 7,
"id": "653b1617",
"metadata": {},
"outputs": [
@ -186,128 +163,30 @@
"text": [
"\n",
"\n",
"\u001b[1m> Entering new AgentExecutor chain...\u001b[0m\n",
"\u001b[32;1m\u001b[1;3mThought: I need to find out the population of Canada\n",
"\u001b[1m> Entering new chain...\u001b[0m\n",
"How many people live in canada?\n",
"Thought:\u001b[32;1m\u001b[1;3m I should look this up\n",
"Action: Search\n",
"Action Input: Population of Canada 2023\u001b[0m\n",
"Observation: \u001b[36;1m\u001b[1;3mThe current population of Canada is 38,610,447 as of Saturday, February 18, 2023, based on Worldometer elaboration of the latest United Nations data. Canada 2020 population is estimated at 37,742,154 people at mid year according to UN data.\u001b[0m\n",
"Action Input: How many people live in canada\u001b[0m\n",
"Observation: \u001b[36;1m\u001b[1;3mThe current population of Canada is 38,533,678 as of Friday, November 25, 2022, based on Worldometer elaboration of the latest United Nations data. · Canada 2020 ...\u001b[0m\n",
"Thought:\u001b[32;1m\u001b[1;3m I now know the final answer\n",
"Final Answer: Arrr, Canada be havin' 38,610,447 scallywags livin' there as of 2023!\u001b[0m\n",
"\n",
"Final Answer: Arrr, there be 38,533,678 people in Canada\u001b[0m\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
]
},
{
"data": {
"text/plain": [
"\"Arrr, Canada be havin' 38,610,447 scallywags livin' there as of 2023!\""
"'Arrr, there be 38,533,678 people in Canada'"
]
},
"execution_count": 31,
"execution_count": 7,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"agent_executor.run(\"How many people live in canada as of 2023?\")"
]
},
{
"cell_type": "markdown",
"id": "040eb343",
"metadata": {},
"source": [
"### Multiple inputs\n",
"Agents can also work with prompts that require multiple inputs."
]
},
{
"cell_type": "code",
"execution_count": 32,
"id": "43dbfa2f",
"metadata": {},
"outputs": [],
"source": [
"prefix = \"\"\"Answer the following questions as best you can. You have access to the following tools:\"\"\"\n",
"suffix = \"\"\"When answering, you MUST speak in the following language: {language}.\n",
"\n",
"Question: {input}\n",
"{agent_scratchpad}\"\"\"\n",
"\n",
"prompt = ZeroShotAgent.create_prompt(\n",
" tools, \n",
" prefix=prefix, \n",
" suffix=suffix, \n",
" input_variables=[\"input\", \"language\", \"agent_scratchpad\"]\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 33,
"id": "0f087313",
"metadata": {},
"outputs": [],
"source": [
"llm_chain = LLMChain(llm=OpenAI(temperature=0), prompt=prompt)"
]
},
{
"cell_type": "code",
"execution_count": 34,
"id": "92c75a10",
"metadata": {},
"outputs": [],
"source": [
"agent = ZeroShotAgent(llm_chain=llm_chain, tools=tools)"
]
},
{
"cell_type": "code",
"execution_count": 35,
"id": "ac5b83bf",
"metadata": {},
"outputs": [],
"source": [
"agent_executor = AgentExecutor.from_agent_and_tools(agent=agent, tools=tools, verbose=True)"
]
},
{
"cell_type": "code",
"execution_count": 36,
"id": "c960e4ff",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"\u001b[1m> Entering new AgentExecutor chain...\u001b[0m\n",
"\u001b[32;1m\u001b[1;3mThought: I need to find out the population of Canada in 2023.\n",
"Action: Search\n",
"Action Input: Population of Canada in 2023\u001b[0m\n",
"Observation: \u001b[36;1m\u001b[1;3mThe current population of Canada is 38,610,447 as of Saturday, February 18, 2023, based on Worldometer elaboration of the latest United Nations data. Canada 2020 population is estimated at 37,742,154 people at mid year according to UN data.\u001b[0m\n",
"Thought:\u001b[32;1m\u001b[1;3m I now know the final answer.\n",
"Final Answer: La popolazione del Canada nel 2023 è stimata in 38.610.447 persone.\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
]
},
{
"data": {
"text/plain": [
"'La popolazione del Canada nel 2023 è stimata in 38.610.447 persone.'"
]
},
"execution_count": 36,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"agent_executor.run(input=\"How many people live in canada as of 2023?\", language=\"italian\")"
"agent.run(\"How many people live in canada?\")"
]
},
{
@ -345,12 +224,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.1"
},
"vscode": {
"interpreter": {
"hash": "18784188d7ecd866c0586ac068b02361a6896dc3a29b64f5cc957f09c590acef"
}
"version": "3.7.6"
}
},
"nbformat": 4,

View File

@ -26,7 +26,7 @@
"metadata": {},
"outputs": [],
"source": [
"from langchain import LLMMathChain, OpenAI, SerpAPIWrapper, SQLDatabase, SQLDatabaseChain\n",
"from langchain import LLMMathChain, OpenAI, SerpAPIChain, SQLDatabase, SQLDatabaseChain\n",
"from langchain.agents import initialize_agent, Tool"
]
},
@ -38,15 +38,15 @@
"outputs": [],
"source": [
"llm = OpenAI(temperature=0)\n",
"search = SerpAPIWrapper()\n",
"search = SerpAPIChain()\n",
"llm_math_chain = LLMMathChain(llm=llm, verbose=True)\n",
"db = SQLDatabase.from_uri(\"sqlite:///../../../../notebooks/Chinook.db\")\n",
"db = SQLDatabase.from_uri(\"sqlite:///../../../notebooks/Chinook.db\")\n",
"db_chain = SQLDatabaseChain(llm=llm, database=db, verbose=True)\n",
"tools = [\n",
" Tool(\n",
" name = \"Search\",\n",
" func=search.run,\n",
" description=\"useful for when you need to answer questions about current events. You should ask targeted questions\"\n",
" description=\"useful for when you need to answer questions about current events\"\n",
" ),\n",
" Tool(\n",
" name=\"Calculator\",\n",
@ -56,7 +56,7 @@
" Tool(\n",
" name=\"FooBar DB\",\n",
" func=db_chain.run,\n",
" description=\"useful for when you need to answer questions about FooBar. Input should be in the form of a question containing full context\"\n",
" description=\"useful for when you need to answer questions about FooBar. Input should be in the form of a question\"\n",
" )\n",
"]"
]
@ -81,44 +81,40 @@
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"\u001b[1m> Entering new AgentExecutor chain...\u001b[0m\n",
"\u001b[32;1m\u001b[1;3m I need to find out who Leo DiCaprio's girlfriend is and then calculate her age raised to the 0.43 power.\n",
"What is the age of Olivia Wilde's boyfriend raised to the 0.23 power?\n",
"Thought:\u001b[32;1m\u001b[1;3m I need to find the age of Olivia Wilde's boyfriend\n",
"Action: Search\n",
"Action Input: \"Who is Leo DiCaprio's girlfriend?\"\u001b[0m\n",
"Observation: \u001b[36;1m\u001b[1;3mCamila Morrone\u001b[0m\n",
"Thought:\u001b[32;1m\u001b[1;3m I need to find out Camila Morrone's age\n",
"Action Input: \"Olivia Wilde's boyfriend\"\u001b[0m\n",
"Observation: \u001b[36;1m\u001b[1;3mOlivia Wilde started dating Harry Styles after ending her years-long engagement to Jason Sudeikis — see their relationship timeline.\u001b[0m\n",
"Thought:\u001b[32;1m\u001b[1;3m I need to find the age of Harry Styles\n",
"Action: Search\n",
"Action Input: \"How old is Camila Morrone?\"\u001b[0m\n",
"Observation: \u001b[36;1m\u001b[1;3m25 years\u001b[0m\n",
"Thought:\u001b[32;1m\u001b[1;3m I need to calculate 25 raised to the 0.43 power\n",
"Action Input: \"Harry Styles age\"\u001b[0m\n",
"Observation: \u001b[36;1m\u001b[1;3m28 years\u001b[0m\n",
"Thought:\u001b[32;1m\u001b[1;3m I need to calculate 28 to the 0.23 power\n",
"Action: Calculator\n",
"Action Input: 25^0.43\u001b[0m\n",
"Action Input: 28^0.23\u001b[0m\n",
"\n",
"\u001b[1m> Entering new chain...\u001b[0m\n",
"28^0.23\u001b[32;1m\u001b[1;3m\n",
"\n",
"\u001b[1m> Entering new LLMMathChain chain...\u001b[0m\n",
"25^0.43\u001b[32;1m\u001b[1;3m\n",
"```python\n",
"import math\n",
"print(math.pow(25, 0.43))\n",
"print(28**0.23)\n",
"```\n",
"\u001b[0m\n",
"Answer: \u001b[33;1m\u001b[1;3m3.991298452658078\n",
"Answer: \u001b[33;1m\u001b[1;3m2.1520202182226886\n",
"\u001b[0m\n",
"\u001b[1m> Finished chain.\u001b[0m\n",
"\n",
"Observation: \u001b[33;1m\u001b[1;3mAnswer: 3.991298452658078\n",
"Observation: \u001b[33;1m\u001b[1;3mAnswer: 2.1520202182226886\n",
"\u001b[0m\n",
"Thought:\u001b[32;1m\u001b[1;3m I now know the final answer\n",
"Final Answer: Camila Morrone is 25 years old and her age raised to the 0.43 power is 3.991298452658078.\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
"Final Answer: 2.1520202182226886\u001b[0m"
]
},
{
"data": {
"text/plain": [
"'Camila Morrone is 25 years old and her age raised to the 0.43 power is 3.991298452658078.'"
"'2.1520202182226886'"
]
},
"execution_count": 4,
@ -127,7 +123,7 @@
}
],
"source": [
"mrkl.run(\"Who is Leo DiCaprio's girlfriend? What is her current age raised to the 0.43 power?\")"
"mrkl.run(\"What is the age of Olivia Wilde's boyfriend raised to the 0.23 power?\")"
]
},
{
@ -140,35 +136,43 @@
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"\u001b[1m> Entering new AgentExecutor chain...\u001b[0m\n",
"\u001b[32;1m\u001b[1;3m I need to find out the artist's full name and then search the FooBar database for their albums.\n",
"Who recently released an album called 'The Storm Before the Calm' and are they in the FooBar database? If so, what albums of theirs are in the FooBar database?\n",
"Thought:\u001b[32;1m\u001b[1;3m I need to find an album called 'The Storm Before the Calm'\n",
"Action: Search\n",
"Action Input: \"The Storm Before the Calm\" artist\u001b[0m\n",
"Action Input: \"The Storm Before the Calm album\"\u001b[0m\n",
"Observation: \u001b[36;1m\u001b[1;3mThe Storm Before the Calm (stylized in all lowercase) is the tenth (and eighth international) studio album by Canadian-American singer-songwriter Alanis ...\u001b[0m\n",
"Thought:\u001b[32;1m\u001b[1;3m I now need to search the FooBar database for Alanis Morissette's albums\n",
"Thought:\u001b[32;1m\u001b[1;3m I need to check if Alanis is in the FooBar database\n",
"Action: FooBar DB\n",
"Action Input: What albums by Alanis Morissette are in the FooBar database?\u001b[0m\n",
"Action Input: \"Does Alanis Morissette exist in the FooBar database?\"\u001b[0m\n",
"\n",
"\u001b[1m> Entering new SQLDatabaseChain chain...\u001b[0m\n",
"What albums by Alanis Morissette are in the FooBar database? \n",
"SQLQuery:\u001b[32;1m\u001b[1;3m SELECT Title FROM Album INNER JOIN Artist ON Album.ArtistId = Artist.ArtistId WHERE Artist.Name = 'Alanis Morissette' LIMIT 5;\u001b[0m\n",
"SQLResult: \u001b[33;1m\u001b[1;3m[('Jagged Little Pill',)]\u001b[0m\n",
"Answer:\u001b[32;1m\u001b[1;3m The albums by Alanis Morissette in the FooBar database are Jagged Little Pill.\u001b[0m\n",
"\u001b[1m> Entering new chain...\u001b[0m\n",
"Does Alanis Morissette exist in the FooBar database?\n",
"SQLQuery:\u001b[32;1m\u001b[1;3m SELECT * FROM Artist WHERE Name = 'Alanis Morissette'\u001b[0m\n",
"SQLResult: \u001b[33;1m\u001b[1;3m[(4, 'Alanis Morissette')]\u001b[0m\n",
"Answer:\u001b[32;1m\u001b[1;3m Yes\u001b[0m\n",
"\u001b[1m> Finished chain.\u001b[0m\n",
"\n",
"Observation: \u001b[38;5;200m\u001b[1;3m The albums by Alanis Morissette in the FooBar database are Jagged Little Pill.\u001b[0m\n",
"Thought:\u001b[32;1m\u001b[1;3m I now know the final answer\n",
"Final Answer: The artist who released the album The Storm Before the Calm is Alanis Morissette and the albums of theirs in the FooBar database are Jagged Little Pill.\u001b[0m\n",
"Observation: \u001b[38;5;200m\u001b[1;3m Yes\u001b[0m\n",
"Thought:\u001b[32;1m\u001b[1;3m I need to find out what albums of Alanis's are in the FooBar database\n",
"Action: FooBar DB\n",
"Action Input: \"What albums by Alanis Morissette are in the FooBar database?\"\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
"\u001b[1m> Entering new chain...\u001b[0m\n",
"What albums by Alanis Morissette are in the FooBar database?\n",
"SQLQuery:\u001b[32;1m\u001b[1;3m SELECT Album.Title FROM Album JOIN Artist ON Album.ArtistId = Artist.ArtistId WHERE Artist.Name = 'Alanis Morissette'\u001b[0m\n",
"SQLResult: \u001b[33;1m\u001b[1;3m[('Jagged Little Pill',)]\u001b[0m\n",
"Answer:\u001b[32;1m\u001b[1;3m Jagged Little Pill\u001b[0m\n",
"\u001b[1m> Finished chain.\u001b[0m\n",
"\n",
"Observation: \u001b[38;5;200m\u001b[1;3m Jagged Little Pill\u001b[0m\n",
"Thought:\u001b[32;1m\u001b[1;3m I now know the final answer\n",
"Final Answer: The album is by Alanis Morissette and the albums in the FooBar database by her are Jagged Little Pill\u001b[0m"
]
},
{
"data": {
"text/plain": [
"'The artist who released the album The Storm Before the Calm is Alanis Morissette and the albums of theirs in the FooBar database are Jagged Little Pill.'"
"'The album is by Alanis Morissette and the albums in the FooBar database by her are Jagged Little Pill'"
]
},
"execution_count": 5,
@ -177,13 +181,13 @@
}
],
"source": [
"mrkl.run(\"What is the full name of the artist who recently released an album called 'The Storm Before the Calm' and are they in the FooBar database? If so, what albums of theirs are in the FooBar database?\")"
"mrkl.run(\"Who recently released an album called 'The Storm Before the Calm' and are they in the FooBar database? If so, what albums of theirs are in the FooBar database?\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "af016a70",
"id": "d7c2e6ac",
"metadata": {},
"outputs": [],
"source": []
@ -205,7 +209,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.1"
"version": "3.7.6"
}
},
"nbformat": 4,

View File

@ -2,7 +2,7 @@
import time
from langchain.chains.natbot.base import NatBotChain
from langchain.chains.natbot.crawler import Crawler
from langchain.chains.natbot.crawler import Crawler # type: ignore
def run_cmd(cmd: str, _crawler: Crawler) -> None:
@ -33,6 +33,7 @@ def run_cmd(cmd: str, _crawler: Crawler) -> None:
if __name__ == "__main__":
objective = "Make a reservation for 2 at 7pm at bistro vida in menlo park"
print("\nWelcome to natbot! What is your objective?")
i = input()

View File

@ -0,0 +1,89 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "82140df0",
"metadata": {},
"source": [
"# ReAct\n",
"\n",
"This notebook showcases using an agent to implement the ReAct logic."
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "4e272b47",
"metadata": {},
"outputs": [],
"source": [
"from langchain import OpenAI, Wikipedia\n",
"from langchain.agents import initialize_agent, Tool\n",
"from langchain.agents.react.base import DocstoreExplorer\n",
"docstore=DocstoreExplorer(Wikipedia())\n",
"tools = [\n",
" Tool(\n",
" name=\"Search\",\n",
" func=docstore.search\n",
" ),\n",
" Tool(\n",
" name=\"Lookup\",\n",
" func=docstore.lookup\n",
" )\n",
"]\n",
"\n",
"llm = OpenAI(temperature=0)\n",
"react = initialize_agent(tools, llm, agent=\"react-docstore\", verbose=True)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "8078c8f1",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Author David Chanoff has collaborated with a U.S. Navy admiral who served as the ambassador to the United Kingdom under which President?\n",
"Thought 1:"
]
}
],
"source": [
"question = \"Author David Chanoff has collaborated with a U.S. Navy admiral who served as the ambassador to the United Kingdom under which President?\"\n",
"react.run(question)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "4ff64e81",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.7.6"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

View File

@ -12,7 +12,7 @@
},
{
"cell_type": "code",
"execution_count": 2,
"execution_count": 1,
"id": "7e3b513e",
"metadata": {},
"outputs": [
@ -20,16 +20,13 @@
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"\u001b[1m> Entering new AgentExecutor chain...\u001b[0m\n",
"\u001b[32;1m\u001b[1;3m Yes.\n",
"What is the hometown of the reigning men's U.S. Open champion?\n",
"Are follow up questions needed here:\u001b[32;1m\u001b[1;3m Yes.\n",
"Follow up: Who is the reigning men's U.S. Open champion?\u001b[0m\n",
"Intermediate answer: \u001b[36;1m\u001b[1;3mCarlos Alcaraz won the 2022 Men's single title while Poland's Iga Swiatek won the Women's single title defeating Tunisian's Ons Jabeur.\u001b[0m\n",
"Intermediate answer: \u001b[36;1m\u001b[1;3mCarlos Alcaraz\u001b[0m\n",
"\u001b[32;1m\u001b[1;3mFollow up: Where is Carlos Alcaraz from?\u001b[0m\n",
"Intermediate answer: \u001b[36;1m\u001b[1;3mEl Palmar, Spain\u001b[0m\n",
"\u001b[32;1m\u001b[1;3mSo the final answer is: El Palmar, Spain\u001b[0m\n",
"\u001b[1m> Finished AgentExecutor chain.\u001b[0m\n"
"\u001b[32;1m\u001b[1;3mSo the final answer is: El Palmar, Spain\u001b[0m"
]
},
{
@ -38,17 +35,17 @@
"'El Palmar, Spain'"
]
},
"execution_count": 2,
"execution_count": 1,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain import OpenAI, SerpAPIWrapper\n",
"from langchain import OpenAI, SerpAPIChain\n",
"from langchain.agents import initialize_agent, Tool\n",
"\n",
"llm = OpenAI(temperature=0)\n",
"search = SerpAPIWrapper()\n",
"search = SerpAPIChain()\n",
"tools = [\n",
" Tool(\n",
" name=\"Intermediate Answer\",\n",
@ -57,13 +54,22 @@
"]\n",
"\n",
"self_ask_with_search = initialize_agent(tools, llm, agent=\"self-ask-with-search\", verbose=True)\n",
"\n",
"self_ask_with_search.run(\"What is the hometown of the reigning men's U.S. Open champion?\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "683d69e7",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3.9.0 64-bit ('llm-env')",
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
@ -77,12 +83,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.0"
},
"vscode": {
"interpreter": {
"hash": "b1677b440931f40d89ef8be7bf03acb108ce003de0ac9b18e8d43753ea2e7103"
}
"version": "3.7.6"
}
},
"nbformat": 4,

11
docs/examples/chains.rst Normal file
View File

@ -0,0 +1,11 @@
Chains
======
The examples here are all end-to-end chains for specific applications.
.. toctree::
:maxdepth: 1
:glob:
:caption: Chains
chains/*

View File

@ -0,0 +1,89 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "d8a5c5d4",
"metadata": {},
"source": [
"# LLM Chain\n",
"\n",
"This notebook showcases a simple LLM chain."
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "51a54c4d",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"\u001b[1m> Entering new chain...\u001b[0m\n",
"Prompt after formatting:\n",
"\u001b[32;1m\u001b[1;3mQuestion: What NFL team won the Super Bowl in the year Justin Beiber was born?\n",
"\n",
"Answer: Let's think step by step.\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
]
},
{
"data": {
"text/plain": [
"' The year Justin Beiber was born was 1994. In 1994, the Dallas Cowboys won the Super Bowl.'"
]
},
"execution_count": 1,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain import PromptTemplate, OpenAI, LLMChain\n",
"\n",
"template = \"\"\"Question: {question}\n",
"\n",
"Answer: Let's think step by step.\"\"\"\n",
"prompt = PromptTemplate(template=template, input_variables=[\"question\"])\n",
"llm_chain = LLMChain(prompt=prompt, llm=OpenAI(temperature=0), verbose=True)\n",
"\n",
"question = \"What NFL team won the Super Bowl in the year Justin Beiber was born?\"\n",
"\n",
"llm_chain.run(question)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "03dd6918",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.7.6"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

View File

@ -0,0 +1,91 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "e71e720f",
"metadata": {},
"source": [
"# LLM Math\n",
"\n",
"This notebook showcases using LLMs and Python REPLs to do complex word math problems."
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "44e9ba31",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"\u001b[1m> Entering new chain...\u001b[0m\n",
"How many of the integers between 0 and 99 inclusive are divisible by 8?\u001b[102m\n",
"\n",
"```python\n",
"count = 0\n",
"for i in range(100):\n",
" if i % 8 == 0:\n",
" count += 1\n",
"print(count)\n",
"```\n",
"\u001b[0m\n",
"Answer: \u001b[103m13\n",
"\u001b[0m\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
]
},
{
"data": {
"text/plain": [
"'Answer: 13\\n'"
]
},
"execution_count": 1,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain import OpenAI, LLMMathChain\n",
"\n",
"llm = OpenAI(temperature=0)\n",
"llm_math = LLMMathChain(llm=llm, verbose=True)\n",
"\n",
"llm_math.run(\"How many of the integers between 0 and 99 inclusive are divisible by 8?\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "f62f0c75",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.4"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

View File

@ -0,0 +1,93 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "d9a0131f",
"metadata": {},
"source": [
"# Map Reduce\n",
"\n",
"This notebok showcases an example of map-reduce chains: recursive summarization."
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "e9db25f3",
"metadata": {},
"outputs": [],
"source": [
"from langchain import OpenAI, PromptTemplate, LLMChain\n",
"from langchain.text_splitter import CharacterTextSplitter\n",
"from langchain.chains.mapreduce import MapReduceChain\n",
"\n",
"llm = OpenAI(temperature=0)\n",
"\n",
"_prompt = \"\"\"Write a concise summary of the following:\n",
"\n",
"\n",
"{text}\n",
"\n",
"\n",
"CONCISE SUMMARY:\"\"\"\n",
"prompt = PromptTemplate(template=_prompt, input_variables=[\"text\"])\n",
"\n",
"text_splitter = CharacterTextSplitter()\n",
"\n",
"mp_chain = MapReduceChain.from_params(llm, prompt, text_splitter)"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "99bbe19b",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"\"\\n\\nThe President discusses the recent aggression by Russia, and the response by the United States and its allies. He announces new sanctions against Russia, and says that the free world is united in holding Putin accountable. The President also discusses the American Rescue Plan, the Bipartisan Infrastructure Law, and the Bipartisan Innovation Act. Finally, the President addresses the need for women's rights and equality for LGBTQ+ Americans.\""
]
},
"execution_count": 3,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"with open('../state_of_the_union.txt') as f:\n",
" state_of_the_union = f.read()\n",
"mp_chain.run(state_of_the_union)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "b581501e",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.8.7"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

View File

@ -0,0 +1,129 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "0ed6aab1",
"metadata": {
"pycharm": {
"name": "#%% md\n"
}
},
"source": [
"# SQLite example\n",
"\n",
"This example showcases hooking up an LLM to answer questions over a database."
]
},
{
"cell_type": "markdown",
"id": "b2f66479",
"metadata": {
"pycharm": {
"name": "#%% md\n"
}
},
"source": [
"This uses the example Chinook database.\n",
"To set it up follow the instructions on https://database.guide/2-sample-databases-sqlite/, placing the `.db` file in a notebooks folder at the root of this repository."
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "d0e27d88",
"metadata": {
"pycharm": {
"name": "#%%\n"
}
},
"outputs": [],
"source": [
"from langchain import OpenAI, SQLDatabase, SQLDatabaseChain"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "72ede462",
"metadata": {
"pycharm": {
"name": "#%%\n"
}
},
"outputs": [],
"source": [
"db = SQLDatabase.from_uri(\"sqlite:///../../../notebooks/Chinook.db\")\n",
"llm = OpenAI(temperature=0)\n",
"db_chain = SQLDatabaseChain(llm=llm, database=db, verbose=True)"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "15ff81df",
"metadata": {
"pycharm": {
"name": "#%%\n"
}
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"\u001b[1m> Entering new chain...\u001b[0m\n",
"How many employees are there?\n",
"SQLQuery:\u001b[102m SELECT COUNT(*) FROM Employee\u001b[0m\n",
"SQLResult: \u001b[103m[(8,)]\u001b[0m\n",
"Answer:\u001b[102m 8\u001b[0m\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
]
},
{
"data": {
"text/plain": [
"' 8'"
]
},
"execution_count": 3,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"db_chain.run(\"How many employees are there?\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "61d91b85",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.7.6"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

View File

@ -2,80 +2,79 @@
"cells": [
{
"cell_type": "markdown",
"id": "683953b3",
"id": "07c1e3b9",
"metadata": {},
"source": [
"# Qdrant\n",
"# Vector DB Question/Answering\n",
"\n",
"This notebook shows how to use functionality related to the Qdrant vector database."
"This example showcases question answering over a vector database."
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "aac9563e",
"id": "82525493",
"metadata": {},
"outputs": [],
"source": [
"from langchain.embeddings.openai import OpenAIEmbeddings\n",
"from langchain.vectorstores.faiss import FAISS\n",
"from langchain.text_splitter import CharacterTextSplitter\n",
"from langchain.vectorstores import Qdrant\n",
"from langchain.document_loaders import TextLoader"
"from langchain import OpenAI, VectorDBQA"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "a3c3999a",
"execution_count": 3,
"id": "5c7049db",
"metadata": {},
"outputs": [],
"source": [
"from langchain.document_loaders import TextLoader\n",
"loader = TextLoader('../../state_of_the_union.txt')\n",
"documents = loader.load()\n",
"with open('../state_of_the_union.txt') as f:\n",
" state_of_the_union = f.read()\n",
"text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)\n",
"docs = text_splitter.split_documents(documents)\n",
"texts = text_splitter.split_text(state_of_the_union)\n",
"\n",
"embeddings = OpenAIEmbeddings()"
"embeddings = OpenAIEmbeddings()\n",
"docsearch = FAISS.from_texts(texts, embeddings)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "dcf88bdf",
"execution_count": 4,
"id": "3018f865",
"metadata": {},
"outputs": [],
"source": [
"host = \"<---host name here --->\"\n",
"api_key = \"<---api key here--->\"\n",
"qdrant = Qdrant.from_documents(docs, embeddings, host=host, prefer_grpc=True, api_key=api_key)\n",
"query = \"What did the president say about Ketanji Brown Jackson\""
"qa = VectorDBQA(llm=OpenAI(), vectorstore=docsearch)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "a8c513ab",
"execution_count": 5,
"id": "032a47f8",
"metadata": {},
"outputs": [],
"outputs": [
{
"data": {
"text/plain": [
"' The President said that Ketanji Brown Jackson is a consensus builder and has received a broad range of support since she was nominated.'"
]
},
"execution_count": 5,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"docs = qdrant.similarity_search(query)"
"query = \"What did the president say about Ketanji Brown Jackson\"\n",
"qa.run(query)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "fc516993",
"metadata": {},
"outputs": [],
"source": [
"docs[0]"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "a359ed74",
"id": "f0f20b92",
"metadata": {},
"outputs": [],
"source": []
@ -97,7 +96,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.1"
"version": "3.7.6"
}
},
"nbformat": 4,

View File

@ -0,0 +1,10 @@
Integrations
============
The examples here all highlight a specific type of integration.
.. toctree::
:maxdepth: 1
:glob:
integrations/*

View File

@ -0,0 +1,177 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "7ef4d402-6662-4a26-b612-35b542066487",
"metadata": {
"pycharm": {
"name": "#%% md\n"
}
},
"source": [
"# Embeddings & VectorStores\n",
"\n",
"This notebook show cases how to use embeddings to create a VectorStore"
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "965eecee",
"metadata": {
"pycharm": {
"name": "#%%\n"
}
},
"outputs": [],
"source": [
"from langchain.embeddings.openai import OpenAIEmbeddings\n",
"from langchain.text_splitter import CharacterTextSplitter\n",
"from langchain.vectorstores.elastic_vector_search import ElasticVectorSearch\n",
"from langchain.vectorstores.faiss import FAISS"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "68481687",
"metadata": {
"pycharm": {
"name": "#%%\n"
}
},
"outputs": [],
"source": [
"with open('../state_of_the_union.txt') as f:\n",
" state_of_the_union = f.read()\n",
"text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)\n",
"texts = text_splitter.split_text(state_of_the_union)\n",
"\n",
"embeddings = OpenAIEmbeddings()"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "015f4ff5",
"metadata": {
"pycharm": {
"name": "#%%\n"
}
},
"outputs": [],
"source": [
"docsearch = FAISS.from_texts(texts, embeddings)\n",
"\n",
"query = \"What did the president say about Ketanji Brown Jackson\"\n",
"docs = docsearch.similarity_search(query)"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "67baf32e",
"metadata": {
"pycharm": {
"name": "#%%\n"
}
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Tonight, Id like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. \n",
"\n",
"One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. \n",
"\n",
"And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nations top legal minds, who will continue Justice Breyers legacy of excellence. \n",
"\n",
"A former top litigator in private practice. A former federal public defender. And from a family of public school educators and police officers. A consensus builder. Since shes been nominated, shes received a broad range of support—from the Fraternal Order of Police to former judges appointed by Democrats and Republicans. \n",
"\n",
"And if we are to advance liberty and justice, we need to secure the Border and fix the immigration system. \n"
]
}
],
"source": [
"print(docs[0].page_content)"
]
},
{
"cell_type": "markdown",
"id": "eea6e627",
"metadata": {},
"source": [
"## Requires having ElasticSearch setup"
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "4906b8a3",
"metadata": {
"pycharm": {
"name": "#%%\n"
}
},
"outputs": [],
"source": [
"docsearch = ElasticVectorSearch.from_texts(texts, embeddings, elasticsearch_url=\"http://localhost:9200\")\n",
"\n",
"query = \"What did the president say about Ketanji Brown Jackson\"\n",
"docs = docsearch.similarity_search(query)"
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "95f9eee9",
"metadata": {
"pycharm": {
"name": "#%%\n"
}
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Tonight, Id like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. \n",
"\n",
"One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. \n",
"\n",
"And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nations top legal minds, who will continue Justice Breyers legacy of excellence. \n",
"\n",
"A former top litigator in private practice. A former federal public defender. And from a family of public school educators and police officers. A consensus builder. Since shes been nominated, shes received a broad range of support—from the Fraternal Order of Police to former judges appointed by Democrats and Republicans. \n",
"\n",
"And if we are to advance liberty and justice, we need to secure the Border and fix the immigration system. \n"
]
}
],
"source": [
"print(docs[0].page_content)"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.8.7"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

View File

@ -5,14 +5,14 @@
"id": "959300d4",
"metadata": {},
"source": [
"# Hugging Face Hub\n",
"# HuggingFace Hub\n",
"\n",
"This example showcases how to connect to the Hugging Face Hub."
"This example showcases how to connect to the HuggingFace Hub."
]
},
{
"cell_type": "code",
"execution_count": 3,
"execution_count": 1,
"id": "3acf0069",
"metadata": {},
"outputs": [
@ -20,7 +20,7 @@
"name": "stdout",
"output_type": "stream",
"text": [
"The Seattle Seahawks won the Super Bowl in 2010. Justin Beiber was born in 2010. The final answer: Seattle Seahawks.\n"
"The Seattle Seahawks won the Super Bowl in 2010. Justin Beiber was born in 2010. The\n"
]
}
],
@ -31,7 +31,7 @@
"\n",
"Answer: Let's think step by step.\"\"\"\n",
"prompt = PromptTemplate(template=template, input_variables=[\"question\"])\n",
"llm_chain = LLMChain(prompt=prompt, llm=HuggingFaceHub(repo_id=\"google/flan-t5-xl\", model_kwargs={\"temperature\":0, \"max_length\":64}))\n",
"llm_chain = LLMChain(prompt=prompt, llm=HuggingFaceHub(repo_id=\"google/flan-t5-xl\", model_kwargs={\"temperature\":1e-10}))\n",
"\n",
"question = \"What NFL team won the Super Bowl in the year Justin Beiber was born?\"\n",
"\n",
@ -63,7 +63,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.9"
"version": "3.8.7"
}
},
"nbformat": 4,

View File

@ -0,0 +1,180 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "b118c9dc",
"metadata": {},
"source": [
"# HuggingFace Tokenizers\n",
"\n",
"This notebook show cases how to use HuggingFace tokenizers to split text."
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "e82c4685",
"metadata": {},
"outputs": [],
"source": [
"from langchain.text_splitter import CharacterTextSplitter"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "a8ce51d5",
"metadata": {},
"outputs": [],
"source": [
"from transformers import GPT2TokenizerFast\n",
"\n",
"tokenizer = GPT2TokenizerFast.from_pretrained(\"gpt2\")"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "ca5e72c0",
"metadata": {},
"outputs": [],
"source": [
"with open('../state_of_the_union.txt') as f:\n",
" state_of_the_union = f.read()\n",
"text_splitter = CharacterTextSplitter.from_huggingface_tokenizer(tokenizer, chunk_size=1000, chunk_overlap=0)\n",
"texts = text_splitter.split_text(state_of_the_union)"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "37cdfbeb",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Madam Speaker, Madam Vice President, our First Lady and Second Gentleman. Members of Congress and the Cabinet. Justices of the Supreme Court. My fellow Americans. \n",
"\n",
"Last year COVID-19 kept us apart. This year we are finally together again. \n",
"\n",
"Tonight, we meet as Democrats Republicans and Independents. But most importantly as Americans. \n",
"\n",
"With a duty to one another to the American people to the Constitution. \n",
"\n",
"And with an unwavering resolve that freedom will always triumph over tyranny. \n",
"\n",
"Six days ago, Russias Vladimir Putin sought to shake the foundations of the free world thinking he could make it bend to his menacing ways. But he badly miscalculated. \n",
"\n",
"He thought he could roll into Ukraine and the world would roll over. Instead he met a wall of strength he never imagined. \n",
"\n",
"He met the Ukrainian people. \n",
"\n",
"From President Zelenskyy to every Ukrainian, their fearlessness, their courage, their determination, inspires the world. \n",
"\n",
"Groups of citizens blocking tanks with their bodies. Everyone from students to retirees teachers turned soldiers defending their homeland. \n",
"\n",
"In this struggle as President Zelenskyy said in his speech to the European Parliament “Light will win over darkness.” The Ukrainian Ambassador to the United States is here tonight. \n",
"\n",
"Let each of us here tonight in this Chamber send an unmistakable signal to Ukraine and to the world. \n",
"\n",
"Please rise if you are able and show that, Yes, we the United States of America stand with the Ukrainian people. \n",
"\n",
"Throughout our history weve learned this lesson when dictators do not pay a price for their aggression they cause more chaos. \n",
"\n",
"They keep moving. \n",
"\n",
"And the costs and the threats to America and the world keep rising. \n",
"\n",
"Thats why the NATO Alliance was created to secure peace and stability in Europe after World War 2. \n",
"\n",
"The United States is a member along with 29 other nations. \n",
"\n",
"It matters. American diplomacy matters. American resolve matters. \n",
"\n",
"Putins latest attack on Ukraine was premeditated and unprovoked. \n",
"\n",
"He rejected repeated efforts at diplomacy. \n",
"\n",
"He thought the West and NATO wouldnt respond. And he thought he could divide us at home. Putin was wrong. We were ready. Here is what we did. \n",
"\n",
"We prepared extensively and carefully. \n",
"\n",
"We spent months building a coalition of other freedom-loving nations from Europe and the Americas to Asia and Africa to confront Putin. \n",
"\n",
"I spent countless hours unifying our European allies. We shared with the world in advance what we knew Putin was planning and precisely how he would try to falsely justify his aggression. \n",
"\n",
"We countered Russias lies with truth. \n",
"\n",
"And now that he has acted the free world is holding him accountable. \n",
"\n",
"Along with twenty-seven members of the European Union including France, Germany, Italy, as well as countries like the United Kingdom, Canada, Japan, Korea, Australia, New Zealand, and many others, even Switzerland. \n",
"\n",
"We are inflicting pain on Russia and supporting the people of Ukraine. Putin is now isolated from the world more than ever. \n",
"\n",
"Together with our allies we are right now enforcing powerful economic sanctions. \n",
"\n",
"We are cutting off Russias largest banks from the international financial system. \n",
"\n",
"Preventing Russias central bank from defending the Russian Ruble making Putins $630 Billion “war fund” worthless. \n",
"\n",
"We are choking off Russias access to technology that will sap its economic strength and weaken its military for years to come. \n",
"\n",
"Tonight I say to the Russian oligarchs and corrupt leaders who have bilked billions of dollars off this violent regime no more. \n",
"\n",
"The U.S. Department of Justice is assembling a dedicated task force to go after the crimes of Russian oligarchs. \n",
"\n",
"We are joining with our European allies to find and seize your yachts your luxury apartments your private jets. We are coming for your ill-begotten gains. \n",
"\n",
"And tonight I am announcing that we will join our allies in closing off American air space to all Russian flights further isolating Russia and adding an additional squeeze on their economy. The Ruble has lost 30% of its value. \n",
"\n",
"The Russian stock market has lost 40% of its value and trading remains suspended. Russias economy is reeling and Putin alone is to blame. \n",
"\n",
"Together with our allies we are providing support to the Ukrainians in their fight for freedom. Military assistance. Economic assistance. Humanitarian assistance. \n",
"\n",
"We are giving more than $1 Billion in direct assistance to Ukraine. \n",
"\n",
"And we will continue to aid the Ukrainian people as they defend their country and to help ease their suffering. \n",
"\n",
"Let me be clear, our forces are not engaged and will not engage in conflict with Russian forces in Ukraine. \n",
"\n",
"Our forces are not going to Europe to fight in Ukraine, but to defend our NATO Allies in the event that Putin decides to keep moving west. \n"
]
}
],
"source": [
"print(texts[0])"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "d214aec2",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.7.6"
}
},
"nbformat": 4,
"nbformat_minor": 5
}
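The splitter above sizes chunks by GPT-2 token count rather than by characters. A minimal sketch, assuming the `tokenizer` and `texts` objects from the cells above, to confirm the chunks respect the 1,000-token budget:

# Count tokens per chunk with the same tokenizer the splitter used.
# Chunks should come in at or under chunk_size=1000; a single paragraph longer
# than that would be emitted as-is and exceed the budget.
token_counts = [len(tokenizer.encode(t)) for t in texts]
print(max(token_counts))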

11
docs/examples/memory.rst Normal file
View File

@ -0,0 +1,11 @@
Memory
======

The examples here are all related to working with the concept of Memory in LangChain.

.. toctree::
   :maxdepth: 1
   :glob:
   :caption: Memory

   memory/*

View File

@ -76,7 +76,7 @@
"text": [
"\n",
"\n",
"\u001b[1m> Entering new LLMChain chain...\u001b[0m\n",
"\u001b[1m> Entering new chain...\u001b[0m\n",
"Prompt after formatting:\n",
"\u001b[32;1m\u001b[1;3mYou are a chatbot having a conversation with a human.\n",
"\n",
@ -84,13 +84,13 @@
"Human: Hi there my friend\n",
"Chatbot:\u001b[0m\n",
"\n",
"\u001b[1m> Finished LLMChain chain.\u001b[0m\n"
"\u001b[1m> Finished chain.\u001b[0m\n"
]
},
{
"data": {
"text/plain": [
"' Hi there, how are you doing today?'"
"' Hi there!'"
]
},
"execution_count": 4,
@ -114,23 +114,23 @@
"text": [
"\n",
"\n",
"\u001b[1m> Entering new LLMChain chain...\u001b[0m\n",
"\u001b[1m> Entering new chain...\u001b[0m\n",
"Prompt after formatting:\n",
"\u001b[32;1m\u001b[1;3mYou are a chatbot having a conversation with a human.\n",
"\n",
"\n",
"Human: Hi there my friend\n",
"AI: Hi there, how are you doing today?\n",
"AI: Hi there!\n",
"Human: Not to bad - how are you?\n",
"Chatbot:\u001b[0m\n",
"\n",
"\u001b[1m> Finished LLMChain chain.\u001b[0m\n"
"\u001b[1m> Finished chain.\u001b[0m\n"
]
},
{
"data": {
"text/plain": [
"\" I'm doing great, thank you for asking!\""
"\"\\n\\nI'm doing well, thanks for asking. How about you?\""
]
},
"execution_count": 5,
@ -167,7 +167,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.9"
"version": "3.7.6"
}
},
"nbformat": 4,

View File

@ -0,0 +1,325 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "fa6802ac",
"metadata": {},
"source": [
"# Adding Memory to an Agent\n",
"\n",
"This notebook goes over adding memory to an Agent. Before going through this notebook, please walkthrough the following notebooks, as this will build on top of both of them:\n",
"\n",
"- [Adding memory to an LLM Chain](adding_memory.ipynb)\n",
"- [Custom Agents](../agents/custom_agent.ipynb)\n",
"\n",
"In order to add a memory to an agent we are going to the the following steps:\n",
"\n",
"1. We are going to create an LLMChain with memory.\n",
"2. We are going to use that LLMChain to create a custom Agent.\n",
"\n",
"For the purposes of this exercise, we are going to create a simple custom Agent that has access to a search tool and utilizes the `ConversationBufferMemory` class."
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "8db95912",
"metadata": {},
"outputs": [],
"source": [
"from langchain.agents import ZeroShotAgent, Tool\n",
"from langchain.chains.conversation.memory import ConversationBufferMemory\n",
"from langchain import OpenAI, SerpAPIChain, LLMChain"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "97ad8467",
"metadata": {},
"outputs": [],
"source": [
"search = SerpAPIChain()\n",
"tools = [\n",
" Tool(\n",
" name = \"Search\",\n",
" func=search.run,\n",
" description=\"useful for when you need to answer questions about current events\"\n",
" )\n",
"]"
]
},
{
"cell_type": "markdown",
"id": "4ad2e708",
"metadata": {},
"source": [
"Notice the usage of the `chat_history` variable in the PromptTemplate, which matches up with the dynamic key name in the ConversationBufferMemory."
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "e3439cd6",
"metadata": {},
"outputs": [],
"source": [
"prefix = \"\"\"Have a conversation with a human, answering the following questions as best you can. You have access to the following tools:\"\"\"\n",
"suffix = \"\"\"Begin!\"\n",
"\n",
"{chat_history}\n",
"Question: {input}\"\"\"\n",
"\n",
"prompt = ZeroShotAgent.create_prompt(\n",
" tools, \n",
" prefix=prefix, \n",
" suffix=suffix, \n",
" input_variables=[\"input\", \"chat_history\"]\n",
")\n",
"memory = ConversationBufferMemory(memory_key=\"chat_history\")"
]
},
{
"cell_type": "markdown",
"id": "0021675b",
"metadata": {},
"source": [
"We can now construct the LLMChain, with the Memory object, and then create the agent."
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "c56a0e73",
"metadata": {},
"outputs": [],
"source": [
"llm_chain = LLMChain(llm=OpenAI(temperature=0), prompt=prompt, memory=memory)\n",
"agent = ZeroShotAgent(llm_chain=llm_chain, tools=tools, verbose=True)"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "ca4bc1fb",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"\u001b[1m> Entering new chain...\u001b[0m\n",
"How many people live in canada?\n",
"Thought:\u001b[32;1m\u001b[1;3m I should look up how many people live in canada\n",
"Action: Search\n",
"Action Input: \"How many people live in canada?\"\u001b[0m\n",
"Observation: \u001b[36;1m\u001b[1;3mThe current population of Canada is 38,533,678 as of Friday, November 25, 2022, based on Worldometer elaboration of the latest United Nations data. · Canada 2020 ...\u001b[0m\n",
"Thought:\u001b[32;1m\u001b[1;3m I now know the final answer\n",
"Final Answer: The current population of Canada is 38,533,678 as of Friday, November 25, 2022, based on Worldometer elaboration of the latest United Nations data.\u001b[0m\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
]
},
{
"data": {
"text/plain": [
"'The current population of Canada is 38,533,678 as of Friday, November 25, 2022, based on Worldometer elaboration of the latest United Nations data.'"
]
},
"execution_count": 5,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"agent.run(\"How many people live in canada?\")"
]
},
{
"cell_type": "markdown",
"id": "45627664",
"metadata": {},
"source": [
"To test the memory of this agent, we can ask a followup question that relies on information in the previous exchange to be answered correctly."
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "eecc0462",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"\u001b[1m> Entering new chain...\u001b[0m\n",
"what is their national anthem called?\n",
"Thought:\u001b[32;1m\u001b[1;3m\n",
"AI: I should look up the name of Canada's national anthem\n",
"Action: Search\n",
"Action Input: \"What is the name of Canada's national anthem?\"\u001b[0m\n",
"Observation: \u001b[36;1m\u001b[1;3mAfter 100 years of tradition, O Canada was proclaimed Canada's national anthem in 1980. The music for O Canada was composed in 1880 by Calixa ...\u001b[0m\n",
"Thought:\u001b[32;1m\u001b[1;3m\n",
"AI: I now know the final answer\n",
"Final Answer: After 100 years of tradition, O Canada was proclaimed Canada's national anthem in 1980. The music for O Canada was composed in 1880 by Calixa Lavallée.\u001b[0m\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
]
},
{
"data": {
"text/plain": [
"\"After 100 years of tradition, O Canada was proclaimed Canada's national anthem in 1980. The music for O Canada was composed in 1880 by Calixa Lavallée.\""
]
},
"execution_count": 6,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"agent.run(\"what is their national anthem called?\")"
]
},
{
"cell_type": "markdown",
"id": "cc3d0aa4",
"metadata": {},
"source": [
"We can see that the agent remembered that the previous question was about Canada, and properly asked Google Search what the name of Canada's national anthem was.\n",
"\n",
"For fun, let's compare this to an agent that does NOT have memory."
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "3359d043",
"metadata": {},
"outputs": [],
"source": [
"prefix = \"\"\"Have a conversation with a human, answering the following questions as best you can. You have access to the following tools:\"\"\"\n",
"suffix = \"\"\"Begin!\"\n",
"\n",
"Question: {input}\"\"\"\n",
"\n",
"prompt = ZeroShotAgent.create_prompt(\n",
" tools, \n",
" prefix=prefix, \n",
" suffix=suffix, \n",
" input_variables=[\"input\"]\n",
")\n",
"llm_chain = LLMChain(llm=OpenAI(temperature=0), prompt=prompt)\n",
"agent_without_memory = ZeroShotAgent(llm_chain=llm_chain, tools=tools, verbose=True)"
]
},
{
"cell_type": "code",
"execution_count": 10,
"id": "970d23df",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"\u001b[1m> Entering new chain...\u001b[0m\n",
"How many people live in canada?\n",
"Thought:\u001b[32;1m\u001b[1;3m I should look up how many people live in canada\n",
"Action: Search\n",
"Action Input: \"How many people live in canada?\"\u001b[0m\n",
"Observation: \u001b[36;1m\u001b[1;3mThe current population of Canada is 38,533,678 as of Friday, November 25, 2022, based on Worldometer elaboration of the latest United Nations data. · Canada 2020 ...\u001b[0m\n",
"Thought:\u001b[32;1m\u001b[1;3m I now know the final answer\n",
"Final Answer: The current population of Canada is 38,533,678\u001b[0m\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
]
},
{
"data": {
"text/plain": [
"'The current population of Canada is 38,533,678'"
]
},
"execution_count": 10,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"agent_without_memory.run(\"How many people live in canada?\")"
]
},
{
"cell_type": "code",
"execution_count": 11,
"id": "d9ea82f0",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"\u001b[1m> Entering new chain...\u001b[0m\n",
"what is their national anthem called?\n",
"Thought:\u001b[32;1m\u001b[1;3m I should probably look this up\n",
"Action: Search\n",
"Action Input: \"What is the national anthem of [country]\"\u001b[0m\n",
"Observation: \u001b[36;1m\u001b[1;3mMost nation states have an anthem, defined as \"a song, as of praise, devotion, or patriotism\"; most anthems are either marches or hymns in style.\u001b[0m\n",
"Thought:\u001b[32;1m\u001b[1;3m I now know the final answer\n",
"Final Answer: The national anthem is called \"the national anthem.\"\u001b[0m\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
]
},
{
"data": {
"text/plain": [
"'The national anthem is called \"the national anthem.\"'"
]
},
"execution_count": 11,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"agent_without_memory.run(\"what is their national anthem called?\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "5b1f9223",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.7.6"
}
},
"nbformat": 4,
"nbformat_minor": 5
}
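The only difference between the two agents above is the ConversationBufferMemory attached to the first LLMChain. A minimal sketch for inspecting what that memory substitutes into the {chat_history} prompt variable, assuming the memory object exposes a plain-text `buffer` attribute (an assumption about this version of the class):

# Print the transcript accumulated so far; this is the text injected into {chat_history}.
print(memory.buffer)
# Expected to contain the earlier exchange, e.g.
# Human: How many people live in canada?
# AI: The current population of Canada is 38,533,678 ...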

View File

@ -44,8 +44,8 @@
},
{
"cell_type": "code",
"execution_count": null,
"id": "48a5dd13",
"execution_count": 2,
"id": "12bbed4e",
"metadata": {},
"outputs": [],
"source": [
@ -55,7 +55,7 @@
},
{
"cell_type": "code",
"execution_count": 5,
"execution_count": 3,
"id": "ff065f58",
"metadata": {},
"outputs": [],
@ -66,7 +66,7 @@
},
{
"cell_type": "code",
"execution_count": 9,
"execution_count": 8,
"id": "1d45d429",
"metadata": {},
"outputs": [],
@ -78,9 +78,6 @@
" entities: dict = {}\n",
" # Define key to pass information about entities into prompt.\n",
" memory_key: str = \"entities\"\n",
" \n",
" def clear(self):\n",
" self.entities = {}\n",
"\n",
" @property\n",
" def memory_variables(self) -> List[str]:\n",
@ -120,7 +117,7 @@
},
{
"cell_type": "code",
"execution_count": 10,
"execution_count": 9,
"id": "c05159b6",
"metadata": {},
"outputs": [],
@ -150,7 +147,7 @@
},
{
"cell_type": "code",
"execution_count": 11,
"execution_count": 10,
"id": "f08dc8ed",
"metadata": {},
"outputs": [],
@ -169,7 +166,7 @@
},
{
"cell_type": "code",
"execution_count": 12,
"execution_count": 11,
"id": "5b96e836",
"metadata": {},
"outputs": [
@ -179,7 +176,7 @@
"text": [
"\n",
"\n",
"\u001b[1m> Entering new ConversationChain chain...\u001b[0m\n",
"\u001b[1m> Entering new chain...\u001b[0m\n",
"Prompt after formatting:\n",
"\u001b[32;1m\u001b[1;3mThe following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know. You are provided with information about entities the Human mentions, if relevant.\n",
"\n",
@ -190,16 +187,16 @@
"Human: Harrison likes machine learning\n",
"AI:\u001b[0m\n",
"\n",
"\u001b[1m> Finished ConversationChain chain.\u001b[0m\n"
"\u001b[1m> Finished chain.\u001b[0m\n"
]
},
{
"data": {
"text/plain": [
"\" That's great to hear! Machine learning is a fascinating field of study. It involves using algorithms to analyze data and make predictions. Have you ever studied machine learning, Harrison?\""
"\"\\n\\nThat's really interesting! I'm sure he has a lot of fun with it.\""
]
},
"execution_count": 12,
"execution_count": 11,
"metadata": {},
"output_type": "execute_result"
}
@ -218,7 +215,7 @@
},
{
"cell_type": "code",
"execution_count": 13,
"execution_count": 12,
"id": "4bca7070",
"metadata": {},
"outputs": [
@ -228,7 +225,7 @@
"text": [
"\n",
"\n",
"\u001b[1m> Entering new ConversationChain chain...\u001b[0m\n",
"\u001b[1m> Entering new chain...\u001b[0m\n",
"Prompt after formatting:\n",
"\u001b[32;1m\u001b[1;3mThe following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know. You are provided with information about entities the Human mentions, if relevant.\n",
"\n",
@ -239,16 +236,16 @@
"Human: What do you think Harrison's favorite subject in college was?\n",
"AI:\u001b[0m\n",
"\n",
"\u001b[1m> Finished ConversationChain chain.\u001b[0m\n"
"\u001b[1m> Finished chain.\u001b[0m\n"
]
},
{
"data": {
"text/plain": [
"' From what I know about Harrison, I believe his favorite subject in college was machine learning. He has expressed a strong interest in the subject and has mentioned it often.'"
"\" Harrison's favorite subject in college was machine learning.\""
]
},
"execution_count": 13,
"execution_count": 12,
"metadata": {},
"output_type": "execute_result"
}
@ -290,7 +287,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.9"
"version": "3.7.6"
}
},
"nbformat": 4,

View File

@ -5,11 +5,9 @@
"id": "920a3c1a",
"metadata": {},
"source": [
"# Model Comparison\n",
"# Model Laboratory\n",
"\n",
"Constructing your language model application will likely involved choosing between many different options of prompts, models, and even chains to use. When doing so, you will want to compare these different options on different inputs in an easy, flexible, and intuitive way. \n",
"\n",
"LangChain provides the concept of a ModelLaboratory to test out and try different models."
"This example goes over basic functionality of how to use the ModelLaboratory to test out and try different models."
]
},
{
@ -137,14 +135,14 @@
"metadata": {},
"outputs": [],
"source": [
"from langchain import SelfAskWithSearchChain, SerpAPIWrapper\n",
"from langchain import SelfAskWithSearchChain, SerpAPIChain\n",
"\n",
"open_ai_llm = OpenAI(temperature=0)\n",
"search = SerpAPIWrapper()\n",
"search = SerpAPIChain()\n",
"self_ask_with_search_openai = SelfAskWithSearchChain(llm=open_ai_llm, search_chain=search, verbose=True)\n",
"\n",
"cohere_llm = Cohere(temperature=0, model=\"command-xlarge-20221108\")\n",
"search = SerpAPIWrapper()\n",
"search = SerpAPIChain()\n",
"self_ask_with_search_cohere = SelfAskWithSearchChain(llm=cohere_llm, search_chain=search, verbose=True)"
]
},
@ -248,7 +246,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.9"
"version": "3.8.7"
}
},
"nbformat": 4,

10
docs/examples/prompts.rst Normal file
View File

@ -0,0 +1,10 @@
Prompts
=======

The examples here all highlight how to work with prompts.

.. toctree::
   :maxdepth: 1
   :glob:

   prompts/*

View File

@ -0,0 +1,176 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "f897c784",
"metadata": {},
"source": [
"# Custom ExampleSelector\n",
"\n",
"This notebook goes over how to implement a custom ExampleSelector. ExampleSelectors are used to select examples to use in few shot prompts.\n",
"\n",
"An ExampleSelector must implement two methods:\n",
"\n",
"1. An `add_example` method which takes in an example and adds it into the ExampleSelector\n",
"2. A `select_examples` method which takes in input variables (which are meant to be user input) and returns a list of examples to use in the few shot prompt.\n",
"\n",
"\n",
"Let's implement a custom ExampleSelector that just selects two examples at random."
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "1a945da1",
"metadata": {},
"outputs": [],
"source": [
"from langchain.prompts.example_selector.base import BaseExampleSelector\n",
"from typing import Dict, List\n",
"import numpy as np"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "62cf0ad7",
"metadata": {},
"outputs": [],
"source": [
"class CustomExampleSelector(BaseExampleSelector):\n",
" \n",
" def __init__(self, examples: List[Dict[str, str]]):\n",
" self.examples = examples\n",
" \n",
" def add_example(self, example: Dict[str, str]) -> None:\n",
" \"\"\"Add new example to store for a key.\"\"\"\n",
" self.examples.append(example)\n",
"\n",
" def select_examples(self, input_variables: Dict[str, str]) -> List[dict]:\n",
" \"\"\"Select which examples to use based on the inputs.\"\"\"\n",
" return np.random.choice(self.examples, size=2, replace=False)"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "242d3213",
"metadata": {},
"outputs": [],
"source": [
"examples = [{\"foo\": \"1\"}, {\"foo\": \"2\"}, {\"foo\": \"3\"}]\n",
"example_selector = CustomExampleSelector(examples)"
]
},
{
"cell_type": "markdown",
"id": "2a038065",
"metadata": {},
"source": [
"Let's now try it out! We can select some examples and try adding examples."
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "74fbbef5",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"array([{'foo': '2'}, {'foo': '3'}], dtype=object)"
]
},
"execution_count": 4,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"example_selector.select_examples({\"foo\": \"foo\"})"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "9bbb5421",
"metadata": {},
"outputs": [],
"source": [
"example_selector.add_example({\"foo\": \"4\"})"
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "c0eb9f22",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[{'foo': '1'}, {'foo': '2'}, {'foo': '3'}, {'foo': '4'}]"
]
},
"execution_count": 6,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"example_selector.examples"
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "cc39b1e3",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"array([{'foo': '1'}, {'foo': '4'}], dtype=object)"
]
},
"execution_count": 7,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"example_selector.select_examples({\"foo\": \"foo\"})"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "1739dd96",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.7.6"
}
},
"nbformat": 4,
"nbformat_minor": 5
}
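The selector becomes useful once it is handed to a few-shot prompt. A minimal sketch, assuming the `example_selector` defined above and the FewShotPromptTemplate usage shown elsewhere in this changeset, so that two randomly chosen examples are inserted on every call to format():

# Plug the custom selector into a few-shot prompt; examples are re-sampled each time.
from langchain.prompts.few_shot import FewShotPromptTemplate
from langchain.prompts.prompt import PromptTemplate

example_prompt = PromptTemplate(input_variables=["foo"], template="foo: {foo}")
prompt = FewShotPromptTemplate(
    example_selector=example_selector,  # the CustomExampleSelector from above
    example_prompt=example_prompt,
    suffix="Now the input: {bar}",
    input_variables=["bar"],
)
print(prompt.format(bar="baz"))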

View File

@ -11,7 +11,7 @@
"\n",
"There is only one required thing that a custom LLM needs to implement:\n",
"\n",
"1. A `_call` method that takes in a string, some optional stop words, and returns a string\n",
"1. A `__call__` method that takes in a string, some optional stop words, and returns a string\n",
"\n",
"There is a second optional thing it can implement:\n",
"\n",
@ -40,13 +40,10 @@
"source": [
"class CustomLLM(LLM):\n",
" \n",
" n: int\n",
" \n",
" @property\n",
" def _llm_type(self) -> str:\n",
" return \"custom\"\n",
" def __init__(self, n: int):\n",
" self.n = n\n",
" \n",
" def _call(self, prompt: str, stop: Optional[List[str]] = None) -> str:\n",
" def __call__(self, prompt: str, stop: Optional[List[str]] = None) -> str:\n",
" if stop is not None:\n",
" raise ValueError(\"stop kwargs are not permitted.\")\n",
" return prompt[:self.n]\n",
@ -148,7 +145,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.9"
"version": "3.7.6"
}
},
"nbformat": 4,

View File

@ -0,0 +1,116 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "a37d9694",
"metadata": {},
"source": [
"# Custom Prompt Template\n",
"\n",
"This notebook goes over how to create a custom prompt template, in case you want to create your own methodology for creating prompts.\n",
"\n",
"The only two requirements for all prompt templates are:\n",
"\n",
"1. They have a `input_variables` attribute that exposes what input variables this prompt template expects.\n",
"2. They expose a `format` method which takes in keyword arguments corresponding to the expected `input_variables` and returns the formatted prompt.\n",
"\n",
"Let's imagine that we want to create a prompt template that takes in input variables and formats them into the template AFTER capitalizing them. "
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "26f796e5",
"metadata": {},
"outputs": [],
"source": [
"from langchain.prompts import BasePromptTemplate\n",
"from pydantic import BaseModel"
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "27919e96",
"metadata": {},
"outputs": [],
"source": [
"class CustomPromptTemplate(BasePromptTemplate, BaseModel):\n",
" template: str\n",
" \n",
" def format(self, **kwargs) -> str:\n",
" capitalized_kwargs = {k: v.upper() for k, v in kwargs.items()}\n",
" return self.template.format(**capitalized_kwargs)\n",
" "
]
},
{
"cell_type": "markdown",
"id": "76d1d84d",
"metadata": {},
"source": [
"We can now see that when we use this, the input variables get formatted."
]
},
{
"cell_type": "code",
"execution_count": 8,
"id": "eed1ff28",
"metadata": {},
"outputs": [],
"source": [
"prompt = CustomPromptTemplate(input_variables=[\"foo\"], template=\"Capitalized: {foo}\")"
]
},
{
"cell_type": "code",
"execution_count": 9,
"id": "94892a3c",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'Capitalized: LOWERCASE'"
]
},
"execution_count": 9,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"prompt.format(foo=\"lowercase\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "d3d9a7c7",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.7.6"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

View File

@ -0,0 +1,306 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "f8b01b97",
"metadata": {},
"source": [
"# Few Shot Prompt examples\n",
"Notebook showing off how canonical prompts in LangChain can be recreated as FewShotPrompts"
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "18c67cc9",
"metadata": {},
"outputs": [],
"source": [
"from langchain.prompts.few_shot import FewShotPromptTemplate\n",
"from langchain.prompts.prompt import PromptTemplate"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "2a729c9f",
"metadata": {},
"outputs": [],
"source": [
"# Self Ask with Search\n",
"\n",
"examples = [\n",
" {\n",
" \"question\": \"Who lived longer, Muhammad Ali or Alan Turing?\",\n",
" \"answer\": \"Are follow up questions needed here: Yes.\\nFollow up: How old was Muhammad Ali when he died?\\nIntermediate answer: Muhammad Ali was 74 years old when he died.\\nFollow up: How old was Alan Turing when he died?\\nIntermediate answer: Alan Turing was 41 years old when he died.\\nSo the final answer is: Muhammad Ali\"\n",
" },\n",
" {\n",
" \"question\": \"When was the founder of craigslist born?\",\n",
" \"answer\": \"Are follow up questions needed here: Yes.\\nFollow up: Who was the founder of craigslist?\\nIntermediate answer: Craigslist was founded by Craig Newmark.\\nFollow up: When was Craig Newmark born?\\nIntermediate answer: Craig Newmark was born on December 6, 1952.\\nSo the final answer is: December 6, 1952\"\n",
" },\n",
" {\n",
" \"question\": \"Who was the maternal grandfather of George Washington?\",\n",
" \"answer\": \"Are follow up questions needed here: Yes.\\nFollow up: Who was the mother of George Washington?\\nIntermediate answer: The mother of George Washington was Mary Ball Washington.\\nFollow up: Who was the father of Mary Ball Washington?\\nIntermediate answer: The father of Mary Ball Washington was Joseph Ball.\\nSo the final answer is: Joseph Ball\"\n",
" },\n",
" {\n",
" \"question\": \"Are both the directors of Jaws and Casino Royale from the same country?\",\n",
" \"answer\": \"Are follow up questions needed here: Yes.\\nFollow up: Who is the director of Jaws?\\nIntermediate Answer: The director of Jaws is Steven Spielberg.\\nFollow up: Where is Steven Spielberg from?\\nIntermediate Answer: The United States.\\nFollow up: Who is the director of Casino Royale?\\nIntermediate Answer: The director of Casino Royale is Martin Campbell.\\nFollow up: Where is Martin Campbell from?\\nIntermediate Answer: New Zealand.\\nSo the final answer is: No\"\n",
" }\n",
"]\n",
"example_prompt = PromptTemplate(input_variables=[\"question\", \"answer\"], template=\"Question: {question}\\n{answer}\")\n",
"\n",
"prompt = FewShotPromptTemplate(\n",
" examples=examples, \n",
" example_prompt=example_prompt, \n",
" suffix=\"Question: {input}\", \n",
" input_variables=[\"input\"]\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "95fc0059",
"metadata": {},
"outputs": [],
"source": [
"# ReAct\n",
"\n",
"examples = [\n",
" {\n",
" \"question\": \"What is the elevation range for the area that the eastern sector of the Colorado orogeny extends into?\",\n",
" \"answer\": \"Thought 1: I need to search Colorado orogeny, find the area that the eastern sector of the Colorado orogeny extends into, then find the elevation range of that area.\\nAction 1: Search[Colorado orogeny]\\nObservation 1: The Colorado orogeny was an episode of mountain building (an orogeny) in Colorado and surrounding areas.\\nThought 2: It does not mention the eastern sector. So I need to look up eastern sector.\\nAction 2: Lookup[eastern sector]\\nObservation 2: (Result 1 / 1) The eastern sector extends into the High Plains and is called the Central Plains orogeny.\\nThought 3: The eastern sector of Colorado orogeny extends into the High Plains. So I need to search High Plains and find its elevation range.\\nAction 3: Search[High Plains]\\nObservation 3: High Plains refers to one of two distinct land regions\\nThought 4: I need to instead search High Plains (United States).\\nAction 4: Search[High Plains (United States)]\\nObservation 4: The High Plains are a subregion of the Great Plains. From east to west, the High Plains rise in elevation from around 1,800 to 7,000 ft (550 to 2,130 m).[3]\\nThought 5: High Plains rise in elevation from around 1,800 to 7,000 ft, so the answer is 1,800 to 7,000 ft.\\nAction 5: Finish[1,800 to 7,000 ft]\"\n",
" },\n",
" {\n",
" \"question\": \"Musician and satirist Allie Goertz wrote a song about the \\\"The Simpsons\\\" character Milhouse, who Matt Groening named after who?\",\n",
" \"answer\": \"Thought 1: The question simplifies to \\\"The Simpsons\\\" character Milhouse is named after who. I only need to search Milhouse and find who it is named after.\\nAction 1: Search[Milhouse]\\nObservation 1: Milhouse Mussolini Van Houten is a recurring character in the Fox animated television series The Simpsons voiced by Pamela Hayden and created by Matt Groening.\\nThought 2: The paragraph does not tell who Milhouse is named after, maybe I can look up \\\"named after\\\".\\nAction 2: Lookup[named after]\\nObservation 2: (Result 1 / 1) Milhouse was named after U.S. president Richard Nixon, whose middle name was Milhous.\\nThought 3: Milhouse was named after U.S. president Richard Nixon, so the answer is Richard Nixon.\\nAction 3: Finish[Richard Nixon]\"\n",
" },\n",
" {\n",
" \"question\": \"Which documentary is about Finnish rock groups, Adam Clayton Powell or The Saimaa Gesture?\",\n",
" \"answer\": \"Thought 1: I need to search Adam Clayton Powell and The Saimaa Gesture, and find which documentary is about Finnish rock groups.\\nAction 1: Search[Adam Clayton Powell]\\nObservation 1 Could not find [Adam Clayton Powell]. Similar: [Adam Clayton Powell III, Seventh Avenue (Manhattan), Adam Clayton Powell Jr. State Office Building, Isabel Washington Powell, Adam Powell, Adam Clayton Powell (film), Giancarlo Esposito].\\nThought 2: To find the documentary, I can search Adam Clayton Powell (film).\\nAction 2: Search[Adam Clayton Powell (film)]\\nObservation 2: Adam Clayton Powell is a 1989 American documentary film directed by Richard Kilberg. The film is about the rise and fall of influential African-American politician Adam Clayton Powell Jr.[3][4] It was later aired as part of the PBS series The American Experience.\\nThought 3: Adam Clayton Powell (film) is a documentary about an African-American politician, not Finnish rock groups. So the documentary about Finnish rock groups must instead be The Saimaa Gesture.\\nAction 3: Finish[The Saimaa Gesture]\"\n",
" },\n",
" {\n",
" \"question\": \"What profession does Nicholas Ray and Elia Kazan have in common?\",\n",
" \"answer\": \"Thought 1: I need to search Nicholas Ray and Elia Kazan, find their professions, then find the profession they have in common.\\nAction 1: Search[Nicholas Ray]\\nObservation 1: Nicholas Ray (born Raymond Nicholas Kienzle Jr., August 7, 1911 - June 16, 1979) was an American film director, screenwriter, and actor best known for the 1955 film Rebel Without a Cause.\\nThought 2: Professions of Nicholas Ray are director, screenwriter, and actor. I need to search Elia Kazan next and find his professions.\\nAction 2: Search[Elia Kazan]\\nObservation 2: Elia Kazan was an American film and theatre director, producer, screenwriter and actor.\\nThought 3: Professions of Elia Kazan are director, producer, screenwriter, and actor. So profession Nicholas Ray and Elia Kazan have in common is director, screenwriter, and actor.\\nAction 3: Finish[director, screenwriter, actor]\"\n",
" },\n",
" {\n",
" \"question\": \"Which magazine was started first Arthurs Magazine or First for Women?\",\n",
" \"answer\": \"Thought 1: I need to search Arthurs Magazine and First for Women, and find which was started first.\\nAction 1: Search[Arthurs Magazine]\\nObservation 1: Arthurs Magazine (1844-1846) was an American literary periodical published in Philadelphia in the 19th century.\\nThought 2: Arthurs Magazine was started in 1844. I need to search First for Women next.\\nAction 2: Search[First for Women]\\nObservation 2: First for Women is a womans magazine published by Bauer Media Group in the USA.[1] The magazine was started in 1989.\\nThought 3: First for Women was started in 1989. 1844 (Arthurs Magazine) < 1989 (First for Women), so Arthurs Magazine was started first.\\nAction 3: Finish[Arthurs Magazine]\"\n",
" },\n",
" {\n",
" \"question\": \"Were Pavel Urysohn and Leonid Levin known for the same type of work?\",\n",
" \"answer\": \"Thought 1: I need to search Pavel Urysohn and Leonid Levin, find their types of work, then find if they are the same.\\nAction 1: Search[Pavel Urysohn]\\nObservation 1: Pavel Samuilovich Urysohn (February 3, 1898 - August 17, 1924) was a Soviet mathematician who is best known for his contributions in dimension theory.\\nThought 2: Pavel Urysohn is a mathematician. I need to search Leonid Levin next and find its type of work.\\nAction 2: Search[Leonid Levin]\\nObservation 2: Leonid Anatolievich Levin is a Soviet-American mathematician and computer scientist.\\nThought 3: Leonid Levin is a mathematician and computer scientist. So Pavel Urysohn and Leonid Levin have the same type of work.\\nAction 3: Finish[yes]\"\n",
" }\n",
"]\n",
"example_prompt = PromptTemplate(input_variables=[\"question\", \"answer\"], template=\"Question: {question}\\n{answer}\")\n",
"\n",
"prompt = FewShotPromptTemplate(\n",
" examples=examples, \n",
" example_prompt=example_prompt, \n",
" suffix=\"Question: {input}\", \n",
" input_variables=[\"input\"]\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "897d4e08",
"metadata": {},
"outputs": [],
"source": [
"# LLM Math\n",
"examples = [\n",
" {\n",
" \"question\": \"What is 37593 * 67?\",\n",
" \"answer\": \"```python\\nprint(37593 * 67)\\n```\\n```output\\n2518731\\n```\\nAnswer: 2518731\"\n",
" }\n",
"]\n",
"example_prompt = PromptTemplate(input_variables=[\"question\", \"answer\"], template=\"Question: {question}\\n\\n{answer}\")\n",
"\n",
"prompt = FewShotPromptTemplate(\n",
" examples=examples, \n",
" example_prompt=example_prompt, \n",
" suffix=\"Question: {input}\", \n",
" input_variables=[\"input\"]\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "7ab7379f",
"metadata": {},
"outputs": [],
"source": [
"# NatBot\n",
"example_seperator = \"==================================================\"\n",
"content_1 = \"\"\"<link id=1>About</link>\n",
"<link id=2>Store</link>\n",
"<link id=3>Gmail</link>\n",
"<link id=4>Images</link>\n",
"<link id=5>(Google apps)</link>\n",
"<link id=6>Sign in</link>\n",
"<img id=7 alt=\"(Google)\"/>\n",
"<input id=8 alt=\"Search\"></input>\n",
"<button id=9>(Search by voice)</button>\n",
"<button id=10>(Google Search)</button>\n",
"<button id=11>(I'm Feeling Lucky)</button>\n",
"<link id=12>Advertising</link>\n",
"<link id=13>Business</link>\n",
"<link id=14>How Search works</link>\n",
"<link id=15>Carbon neutral since 2007</link>\n",
"<link id=16>Privacy</link>\n",
"<link id=17>Terms</link>\n",
"<text id=18>Settings</text>\"\"\"\n",
"content_2 = \"\"\"<link id=1>About</link>\n",
"<link id=2>Store</link>\n",
"<link id=3>Gmail</link>\n",
"<link id=4>Images</link>\n",
"<link id=5>(Google apps)</link>\n",
"<link id=6>Sign in</link>\n",
"<img id=7 alt=\"(Google)\"/>\n",
"<input id=8 alt=\"Search\"></input>\n",
"<button id=9>(Search by voice)</button>\n",
"<button id=10>(Google Search)</button>\n",
"<button id=11>(I'm Feeling Lucky)</button>\n",
"<link id=12>Advertising</link>\n",
"<link id=13>Business</link>\n",
"<link id=14>How Search works</link>\n",
"<link id=15>Carbon neutral since 2007</link>\n",
"<link id=16>Privacy</link>\n",
"<link id=17>Terms</link>\n",
"<text id=18>Settings</text>\"\"\"\n",
"content_3 = \"\"\"<button id=1>For Businesses</button>\n",
"<button id=2>Mobile</button>\n",
"<button id=3>Help</button>\n",
"<button id=4 alt=\"Language Picker\">EN</button>\n",
"<link id=5>OpenTable logo</link>\n",
"<button id=6 alt =\"search\">Search</button>\n",
"<text id=7>Find your table for any occasion</text>\n",
"<button id=8>(Date selector)</button>\n",
"<text id=9>Sep 28, 2022</text>\n",
"<text id=10>7:00 PM</text>\n",
"<text id=11>2 people</text>\n",
"<input id=12 alt=\"Location, Restaurant, or Cuisine\"></input>\n",
"<button id=13>Lets go</button>\n",
"<text id=14>It looks like you're in Peninsula. Not correct?</text>\n",
"<button id=15>Get current location</button>\n",
"<button id=16>Next</button>\"\"\"\n",
"examples = [\n",
" {\n",
" \"i\": 1,\n",
" \"content\": content_1,\n",
" \"objective\": \"Find a 2 bedroom house for sale in Anchorage AK for under $750k\",\n",
" \"current_url\": \"https://www.google.com/\",\n",
" \"command\": 'TYPESUBMIT 8 \"anchorage redfin\"'\n",
" },\n",
" {\n",
" \"i\": 2,\n",
" \"content\": content_2,\n",
" \"objective\": \"Make a reservation for 4 at Dorsia at 8pm\",\n",
" \"current_url\": \"https://www.google.com/\",\n",
" \"command\": 'TYPESUBMIT 8 \"dorsia nyc opentable\"'\n",
" },\n",
" {\n",
" \"i\": 3,\n",
" \"content\": content_3,\n",
" \"objective\": \"Make a reservation for 4 for dinner at Dorsia in New York City at 8pm\",\n",
" \"current_url\": \"https://www.opentable.com/\",\n",
" \"command\": 'TYPESUBMIT 12 \"dorsia new york city\"'\n",
" },\n",
"]\n",
"example_prompt_template=\"\"\"EXAMPLE {i}:\n",
"==================================================\n",
"CURRENT BROWSER CONTENT:\n",
"------------------\n",
"{content}\n",
"------------------\n",
"OBJECTIVE: {objective}\n",
"CURRENT URL: {current_url}\n",
"YOUR COMMAND:\n",
"{command}\"\"\"\n",
"example_prompt = PromptTemplate(input_variables=[\"i\", \"content\", \"objective\", \"current_url\", \"command\"], template=example_prompt_template)\n",
"\n",
"\n",
"prefix = \"\"\"\n",
"You are an agent controlling a browser. You are given:\n",
"\t(1) an objective that you are trying to achieve\n",
"\t(2) the URL of your current web page\n",
"\t(3) a simplified text description of what's visible in the browser window (more on that below)\n",
"You can issue these commands:\n",
"\tSCROLL UP - scroll up one page\n",
"\tSCROLL DOWN - scroll down one page\n",
"\tCLICK X - click on a given element. You can only click on links, buttons, and inputs!\n",
"\tTYPE X \"TEXT\" - type the specified text into the input with id X\n",
"\tTYPESUBMIT X \"TEXT\" - same as TYPE above, except then it presses ENTER to submit the form\n",
"The format of the browser content is highly simplified; all formatting elements are stripped.\n",
"Interactive elements such as links, inputs, buttons are represented like this:\n",
"\t\t<link id=1>text</link>\n",
"\t\t<button id=2>text</button>\n",
"\t\t<input id=3>text</input>\n",
"Images are rendered as their alt text like this:\n",
"\t\t<img id=4 alt=\"\"/>\n",
"Based on your given objective, issue whatever command you believe will get you closest to achieving your goal.\n",
"You always start on Google; you should submit a search query to Google that will take you to the best page for\n",
"achieving your objective. And then interact with that page to achieve your objective.\n",
"If you find yourself on Google and there are no search results displayed yet, you should probably issue a command\n",
"like \"TYPESUBMIT 7 \"search query\"\" to get to a more useful page.\n",
"Then, if you find yourself on a Google search results page, you might issue the command \"CLICK 24\" to click\n",
"on the first link in the search results. (If your previous command was a TYPESUBMIT your next command should\n",
"probably be a CLICK.)\n",
"Don't try to interact with elements that you can't see.\n",
"Here are some examples:\n",
"\"\"\"\n",
"suffix=\"\"\"\n",
"The current browser content, objective, and current URL follow. Reply with your next command to the browser.\n",
"CURRENT BROWSER CONTENT:\n",
"------------------\n",
"{browser_content}\n",
"------------------\n",
"OBJECTIVE: {objective}\n",
"CURRENT URL: {url}\n",
"PREVIOUS COMMAND: {previous_command}\n",
"YOUR COMMAND:\n",
"\"\"\"\n",
"PROMPT = FewShotPromptTemplate(\n",
" examples = examples,\n",
" example_prompt=example_prompt,\n",
" example_separator=example_seperator,\n",
" input_variables=[\"browser_content\", \"url\", \"previous_command\", \"objective\"],\n",
" prefix=prefix,\n",
" suffix=suffix,\n",
")"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "ce5927c6",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.7.6"
}
},
"nbformat": 4,
"nbformat_minor": 5
}
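None of the cells above actually render the prompts they build. A minimal sketch, assuming the LLM-math `prompt` constructed in the "LLM Math" cell above, showing what the few-shot template expands to:

# Format the LLM-math few-shot prompt to see the assembled text.
print(prompt.format(input="What is 13 ** 3?"))
# Expected output: the single worked example ("Question: What is 37593 * 67?" with its
# python/output block), then "Question: What is 13 ** 3?" appended by the suffix.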

View File

@ -100,14 +100,23 @@
"text/plain": [
"['',\n",
" '',\n",
" 'Question: What is the difference between the Illinois and Missouri orogeny?',\n",
" 'Thought 1: I need to search Illinois and Missouri orogeny, and find the difference between them.',\n",
" 'Action 1: Search[Illinois orogeny]',\n",
" 'Observation 1: The Illinois orogeny is a hypothesized orogenic event that occurred in the Late Paleozoic either in the Pennsylvanian or Permian period.',\n",
" 'Thought 2: The Illinois orogeny is a hypothesized orogenic event. I need to search Missouri orogeny next and find its details.',\n",
" 'Action 2: Search[Missouri orogeny]',\n",
" 'Observation 2: The Missouri orogeny was a major tectonic event that occurred in the late Pennsylvanian and early Permian period (about 300 million years ago).',\n",
" 'Thought 3: The Illinois orogeny is hypothesized and occurred in the Late Paleozoic and the Missouri orogeny was a major tectonic event that occurred in the late Pennsylvanian and early Permian period. So the difference between the Illinois and Missouri orogeny is that the Illinois orogeny is hypothesized and occurred in the Late Paleozoic while the Missouri orogeny was a major']"
" 'Question: What is the highest mountain peak in North America?',\n",
" '',\n",
" 'Thought 1: I need to search North America and find the highest mountain peak.',\n",
" '',\n",
" 'Action 1: Search[North America]',\n",
" '',\n",
" 'Observation 1: North America is a continent entirely within the Northern Hemisphere and almost all within the Western Hemisphere.',\n",
" '',\n",
" 'Thought 2: I need to look up \"highest mountain peak\".',\n",
" '',\n",
" 'Action 2: Lookup[highest mountain peak]',\n",
" '',\n",
" 'Observation 2: (Result 1 / 1) Denali, formerly Mount McKinley, is the highest mountain peak in North America, with a summit elevation of 20,310 feet (6,190 m) above sea level.',\n",
" '',\n",
" 'Thought 3: Denali is the highest mountain peak in North America, with a summit elevation of 20,310 feet.',\n",
" '',\n",
" 'Action 3: Finish[20,310 feet]']"
]
},
"execution_count": 4,
@ -144,12 +153,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.9"
},
"vscode": {
"interpreter": {
"hash": "b1677b440931f40d89ef8be7bf03acb108ce003de0ac9b18e8d43753ea2e7103"
}
"version": "3.7.6"
}
},
"nbformat": 4,

View File

@ -5,7 +5,7 @@
"id": "43fb16cb",
"metadata": {},
"source": [
"# Getting Started\n",
"# Prompt Management\n",
"\n",
"Managing your prompts is annoying and tedious, with everyone writing their own slightly different variants of the same ideas. But it shouldn't be this way. \n",
"\n",
@ -50,7 +50,7 @@
"The only two things that define a prompt are:\n",
"\n",
"1. `input_variables`: The user inputted variables that are needed to format the prompt.\n",
"2. `format`: A method which takes in keyword arguments and returns a formatted prompt. The keys are expected to be the input variables\n",
"2. `format`: A method which takes in keyword arguments are returns a formatted prompt. The keys are expected to be the input variables\n",
" \n",
"The rest of the logic of how the prompt is constructed is left up to different implementations. Let's take a look at some below."
]
@ -71,7 +71,7 @@
},
{
"cell_type": "code",
"execution_count": 3,
"execution_count": 1,
"id": "094229f4",
"metadata": {},
"outputs": [],
@ -81,7 +81,7 @@
},
{
"cell_type": "code",
"execution_count": 4,
"execution_count": 2,
"id": "ab46bd2a",
"metadata": {},
"outputs": [
@ -91,7 +91,7 @@
"'Tell me a joke.'"
]
},
"execution_count": 4,
"execution_count": 2,
"metadata": {},
"output_type": "execute_result"
}
@ -104,7 +104,7 @@
},
{
"cell_type": "code",
"execution_count": 5,
"execution_count": 3,
"id": "c3ad0fa8",
"metadata": {},
"outputs": [
@ -114,7 +114,7 @@
"'Tell me a funny joke.'"
]
},
"execution_count": 5,
"execution_count": 3,
"metadata": {},
"output_type": "execute_result"
}
@ -127,7 +127,7 @@
},
{
"cell_type": "code",
"execution_count": 6,
"execution_count": 4,
"id": "ba577dcf",
"metadata": {},
"outputs": [
@ -137,7 +137,7 @@
"'Tell me a funny joke about chickens.'"
]
},
"execution_count": 6,
"execution_count": 4,
"metadata": {},
"output_type": "execute_result"
}
@ -151,100 +151,6 @@
"multiple_input_prompt.format(adjective=\"funny\", content=\"chickens\")"
]
},
{
"cell_type": "markdown",
"id": "cc991ad2",
"metadata": {},
"source": [
"## From Template\n",
"You can also easily load a prompt template by just specifying the template, and not worrying about the input variables."
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "d0a0756c",
"metadata": {},
"outputs": [],
"source": [
"template = \"Tell me a {adjective} joke about {content}.\"\n",
"multiple_input_prompt = PromptTemplate.from_template(template)"
]
},
{
"cell_type": "code",
"execution_count": 8,
"id": "59046640",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"PromptTemplate(input_variables=['adjective', 'content'], output_parser=None, template='Tell me a {adjective} joke about {content}.', template_format='f-string', validate_template=True)"
]
},
"execution_count": 8,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"multiple_input_prompt"
]
},
{
"cell_type": "markdown",
"id": "b2dd6154",
"metadata": {},
"source": [
"## Alternative formats\n",
"\n",
"This section shows how to use alternative formats besides \"f-string\" to format prompts."
]
},
{
"cell_type": "code",
"execution_count": 9,
"id": "53b41b6a",
"metadata": {},
"outputs": [],
"source": [
"# Jinja2\n",
"template = \"\"\"\n",
"{% for item in items %}\n",
"Question: {{ item.question }}\n",
"Answer: {{ item.answer }}\n",
"{% endfor %}\n",
"\"\"\"\n",
"items=[{\"question\": \"foo\", \"answer\": \"bar\"},{\"question\": \"1\", \"answer\": \"2\"}]\n",
"jinja2_prompt = PromptTemplate(\n",
" input_variables=[\"items\"], \n",
" template=template,\n",
" template_format=\"jinja2\"\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 10,
"id": "ba8aabd3",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'\\n\\nQuestion: foo\\nAnswer: bar\\n\\nQuestion: 1\\nAnswer: 2\\n'"
]
},
"execution_count": 10,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"jinja2_prompt.format(items=items)"
]
},
{
"cell_type": "markdown",
"id": "1492b49d",
@ -261,7 +167,7 @@
},
{
"cell_type": "code",
"execution_count": 11,
"execution_count": 5,
"id": "3eb36972",
"metadata": {},
"outputs": [],
@ -280,7 +186,7 @@
},
{
"cell_type": "code",
"execution_count": 12,
"execution_count": 6,
"id": "80a91d96",
"metadata": {},
"outputs": [],
@ -290,7 +196,7 @@
},
{
"cell_type": "code",
"execution_count": 13,
"execution_count": 7,
"id": "7931e5f2",
"metadata": {},
"outputs": [
@ -332,69 +238,6 @@
"print(prompt_from_string_examples.format(adjective=\"big\"))"
]
},
{
"cell_type": "markdown",
"id": "874b7575",
"metadata": {},
"source": [
"## Few Shot Prompts with Templates\n",
"We can also construct few shot prompt templates where the prefix and suffix themselves are prompt templates"
]
},
{
"cell_type": "code",
"execution_count": 14,
"id": "e710115f",
"metadata": {},
"outputs": [],
"source": [
"from langchain.prompts import FewShotPromptWithTemplates"
]
},
{
"cell_type": "code",
"execution_count": 15,
"id": "5bf23a65",
"metadata": {},
"outputs": [],
"source": [
"prefix = PromptTemplate(input_variables=[\"content\"], template=\"This is a test about {content}.\")\n",
"suffix = PromptTemplate(input_variables=[\"new_content\"], template=\"Now you try to talk about {new_content}.\")\n",
"\n",
"prompt = FewShotPromptWithTemplates(\n",
" suffix=suffix,\n",
" prefix=prefix,\n",
" input_variables=[\"content\", \"new_content\"],\n",
" examples=examples,\n",
" example_prompt=example_prompt,\n",
" example_separator=\"\\n\",\n",
")\n",
"output = prompt.format(content=\"animals\", new_content=\"party\")"
]
},
{
"cell_type": "code",
"execution_count": 16,
"id": "d4036351",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"This is a test about animals.\n",
"Input: happy\n",
"Output: sad\n",
"Input: tall\n",
"Output: short\n",
"Now you try to talk about party.\n"
]
}
],
"source": [
"print(output)"
]
},
{
"cell_type": "markdown",
"id": "bf038596",
@ -428,7 +271,7 @@
},
{
"cell_type": "code",
"execution_count": 17,
"execution_count": 8,
"id": "7c469c95",
"metadata": {},
"outputs": [],
@ -438,7 +281,7 @@
},
{
"cell_type": "code",
"execution_count": 18,
"execution_count": 9,
"id": "0ec6d950",
"metadata": {},
"outputs": [],
@ -455,7 +298,7 @@
},
{
"cell_type": "code",
"execution_count": 19,
"execution_count": 10,
"id": "207e55f7",
"metadata": {},
"outputs": [],
@ -485,7 +328,7 @@
},
{
"cell_type": "code",
"execution_count": 20,
"execution_count": 11,
"id": "d00b4385",
"metadata": {},
"outputs": [
@ -522,7 +365,7 @@
},
{
"cell_type": "code",
"execution_count": 21,
"execution_count": 12,
"id": "878bcde9",
"metadata": {},
"outputs": [
@ -548,7 +391,7 @@
},
{
"cell_type": "code",
"execution_count": 22,
"execution_count": 13,
"id": "e4bebcd9",
"metadata": {},
"outputs": [
@ -600,31 +443,22 @@
},
{
"cell_type": "code",
"execution_count": 23,
"execution_count": 14,
"id": "241bfe80",
"metadata": {},
"outputs": [],
"source": [
"from langchain.prompts.example_selector import SemanticSimilarityExampleSelector\n",
"from langchain.vectorstores import Chroma\n",
"from langchain.vectorstores import FAISS\n",
"from langchain.embeddings import OpenAIEmbeddings"
]
},
{
"cell_type": "code",
"execution_count": 24,
"execution_count": 15,
"id": "50d0a701",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Running Chroma using direct local API.\n",
"Using DuckDB in-memory for database. Data will be transient.\n"
]
}
],
"outputs": [],
"source": [
"example_selector = SemanticSimilarityExampleSelector.from_examples(\n",
" # This is the list of examples available to select from.\n",
@ -632,7 +466,7 @@
" # This is the embedding class used to produce embeddings which are used to measure semantic similarity.\n",
" OpenAIEmbeddings(), \n",
" # This is the VectorStore class that is used to store the embeddings and do a similarity search over.\n",
" Chroma, \n",
" FAISS, \n",
" # This is the number of examples to produce.\n",
" k=1\n",
")\n",
@ -648,7 +482,7 @@
},
{
"cell_type": "code",
"execution_count": 25,
"execution_count": 16,
"id": "4c8fdf45",
"metadata": {},
"outputs": [
@ -673,7 +507,7 @@
},
{
"cell_type": "code",
"execution_count": 19,
"execution_count": 17,
"id": "829af21a",
"metadata": {
"scrolled": true
@ -685,8 +519,8 @@
"text": [
"Give the antonym of every input\n",
"\n",
"Input: happy\n",
"Output: sad\n",
"Input: tall\n",
"Output: short\n",
"\n",
"Input: fat\n",
"Output:\n"
@ -700,7 +534,7 @@
},
{
"cell_type": "code",
"execution_count": 20,
"execution_count": 18,
"id": "3c16fe23",
"metadata": {},
"outputs": [
@ -710,8 +544,8 @@
"text": [
"Give the antonym of every input\n",
"\n",
"Input: happy\n",
"Output: sad\n",
"Input: enthusiastic\n",
"Output: apathetic\n",
"\n",
"Input: joyful\n",
"Output:\n"
@ -724,111 +558,6 @@
"print(similar_prompt.format(adjective=\"joyful\"))"
]
},
{
"cell_type": "markdown",
"id": "bc35afd0",
"metadata": {},
"source": [
"### Maximal Marginal Relevance ExampleSelector\n",
"\n",
"The MaxMarginalRelevanceExampleSelector selects examples based on a combination of which examples are most similar to the inputs, while also optimizing for diversity. It does this by finding the examples with the embeddings that have the greatest cosine similarity with the inputs, and then iteratively adding them while penalizing them for closeness to already selected examples.\n"
]
},
{
"cell_type": "code",
"execution_count": 21,
"id": "ac95c968",
"metadata": {},
"outputs": [],
"source": [
"from langchain.prompts.example_selector import MaxMarginalRelevanceExampleSelector\n",
"from langchain.vectorstores import FAISS"
]
},
{
"cell_type": "code",
"execution_count": 22,
"id": "db579bea",
"metadata": {},
"outputs": [],
"source": [
"example_selector = MaxMarginalRelevanceExampleSelector.from_examples(\n",
" # This is the list of examples available to select from.\n",
" examples, \n",
" # This is the embedding class used to produce embeddings which are used to measure semantic similarity.\n",
" OpenAIEmbeddings(), \n",
" # This is the VectorStore class that is used to store the embeddings and do a similarity search over.\n",
" FAISS, \n",
" # This is the number of examples to produce.\n",
" k=2\n",
")\n",
"mmr_prompt = FewShotPromptTemplate(\n",
" # We provide an ExampleSelector instead of examples.\n",
" example_selector=example_selector,\n",
" example_prompt=example_prompt,\n",
" prefix=\"Give the antonym of every input\",\n",
" suffix=\"Input: {adjective}\\nOutput:\", \n",
" input_variables=[\"adjective\"],\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 23,
"id": "cd76e344",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Give the antonym of every input\n",
"\n",
"Input: happy\n",
"Output: sad\n",
"\n",
"Input: windy\n",
"Output: calm\n",
"\n",
"Input: worried\n",
"Output:\n"
]
}
],
"source": [
"# Input is a feeling, so should select the happy/sad example as the first one\n",
"print(mmr_prompt.format(adjective=\"worried\"))"
]
},
{
"cell_type": "code",
"execution_count": 24,
"id": "cf82956b",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Give the antonym of every input\n",
"\n",
"Input: happy\n",
"Output: sad\n",
"\n",
"Input: enthusiastic\n",
"Output: apathetic\n",
"\n",
"Input: worried\n",
"Output:\n"
]
}
],
"source": [
"# Let's compare this to what we would just get if we went solely off of similarity\n",
"similar_prompt.example_selector.k = 2\n",
"print(similar_prompt.format(adjective=\"worried\"))"
]
},
{
"cell_type": "markdown",
"id": "dbc32551",
@ -873,12 +602,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.1"
},
"vscode": {
"interpreter": {
"hash": "b1677b440931f40d89ef8be7bf03acb108ce003de0ac9b18e8d43753ea2e7103"
}
"version": "3.7.6"
}
},
"nbformat": 4,

View File

@ -11,7 +11,7 @@
"\n",
"At a high level, the following design principles are applied to serialization:\n",
"\n",
"1. Both JSON and YAML are supported. We want to support serialization methods that are human readable on disk, and YAML and JSON are two of the most popular methods for that. Note that this rule applies to prompts. For other assets, like Examples, different serialization methods may be supported.\n",
"1. Both JSON and YAML are supported. We want to support serialization methods are human readable on disk, and YAML and JSON are two of the most popular methods for that. Note that this rule applies to prompts. For other assets, like Examples, different serialization methods may be supported.\n",
"\n",
"2. We support specifying everything in one file, or storing different components (templates, examples, etc) in different files and referencing them. For some cases, storing everything in file makes the most sense, but for others it is preferrable to split up some of the assets (long templates, large examples, reusable components). LangChain supports both.\n",
"\n",
@ -225,35 +225,6 @@
"!cat examples.json"
]
},
{
"cell_type": "markdown",
"id": "d3052850",
"metadata": {},
"source": [
"And here is what the same examples stored as yaml might look like."
]
},
{
"cell_type": "code",
"execution_count": 9,
"id": "901385d1",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"- input: happy\r\n",
" output: sad\r\n",
"- input: tall\r\n",
" output: short\r\n"
]
}
],
"source": [
"!cat examples.yaml"
]
},
{
"cell_type": "markdown",
"id": "8e300335",
@ -265,7 +236,7 @@
},
{
"cell_type": "code",
"execution_count": 10,
"execution_count": 9,
"id": "e2bec0fc",
"metadata": {},
"outputs": [
@ -296,7 +267,7 @@
},
{
"cell_type": "code",
"execution_count": 11,
"execution_count": 10,
"id": "98c8f356",
"metadata": {},
"outputs": [
@ -322,73 +293,6 @@
"print(prompt.format(adjective=\"funny\"))"
]
},
{
"cell_type": "markdown",
"id": "13620324",
"metadata": {},
"source": [
"The same would work if you loaded examples from the yaml file."
]
},
{
"cell_type": "code",
"execution_count": 12,
"id": "831e5e4a",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"_type: few_shot\r\n",
"input_variables:\r\n",
" [\"adjective\"]\r\n",
"prefix: \r\n",
" Write antonyms for the following words.\r\n",
"example_prompt:\r\n",
" input_variables:\r\n",
" [\"input\", \"output\"]\r\n",
" template:\r\n",
" \"Input: {input}\\nOutput: {output}\"\r\n",
"examples:\r\n",
" examples.yaml\r\n",
"suffix:\r\n",
" \"Input: {adjective}\\nOutput:\"\r\n"
]
}
],
"source": [
"!cat few_shot_prompt_yaml_examples.yaml"
]
},
{
"cell_type": "code",
"execution_count": 13,
"id": "6f0a7eaa",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Write antonyms for the following words.\n",
"\n",
"Input: happy\n",
"Output: sad\n",
"\n",
"Input: tall\n",
"Output: short\n",
"\n",
"Input: funny\n",
"Output:\n"
]
}
],
"source": [
"prompt = load_prompt(\"few_shot_prompt_yaml_examples.yaml\")\n",
"print(prompt.format(adjective=\"funny\"))"
]
},
{
"cell_type": "markdown",
"id": "4870aa9d",
@ -400,7 +304,7 @@
},
{
"cell_type": "code",
"execution_count": 14,
"execution_count": 11,
"id": "9d996a86",
"metadata": {},
"outputs": [
@ -428,7 +332,7 @@
},
{
"cell_type": "code",
"execution_count": 15,
"execution_count": 12,
"id": "dd2c10bb",
"metadata": {},
"outputs": [
@ -465,7 +369,7 @@
},
{
"cell_type": "code",
"execution_count": 16,
"execution_count": 13,
"id": "6cd781ef",
"metadata": {},
"outputs": [
@ -496,7 +400,7 @@
},
{
"cell_type": "code",
"execution_count": 17,
"execution_count": 14,
"id": "533ab8a7",
"metadata": {},
"outputs": [
@ -533,7 +437,7 @@
},
{
"cell_type": "code",
"execution_count": 18,
"execution_count": 15,
"id": "0b6dd7b8",
"metadata": {},
"outputs": [
@ -554,7 +458,7 @@
},
{
"cell_type": "code",
"execution_count": 19,
"execution_count": 16,
"id": "76a1065d",
"metadata": {},
"outputs": [
@ -579,7 +483,7 @@
},
{
"cell_type": "code",
"execution_count": 20,
"execution_count": 17,
"id": "744d275d",
"metadata": {},
"outputs": [
@ -604,6 +508,14 @@
"prompt = load_prompt(\"few_shot_prompt_example_prompt.json\")\n",
"print(prompt.format(adjective=\"funny\"))"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "dcfc7176",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
@ -622,12 +534,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.9"
},
"vscode": {
"interpreter": {
"hash": "8eb71adebe840dca1185e9603533462bc47eb1b1a73bf7dab2d0a8a4c932882e"
}
"version": "3.7.6"
}
},
"nbformat": 4,

View File

@ -1,19 +1,18 @@
# Agents
Agents use an LLM to determine which actions to take and in what order.
An action can either be using a tool and observing its output, or returning a response to the user.
For a list of easily loadable tools, see [here](tools.md).
An action can either be using a tool and observing its output, or returning to the user.
Here are the agents available in LangChain.
For a tutorial on how to load agents, see [here](getting_started.ipynb).
For a tutorial on how to load agents, see [here](/getting_started/agents.ipynb).
## `zero-shot-react-description`
### `zero-shot-react-description`
This agent uses the ReAct framework to determine which tool to use
based solely on the tool's description. Any number of tools can be provided.
This agent requires that a description is provided for each tool.
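
As a rough sketch of loading this agent — the helper names `load_tools` and `initialize_agent`, and the specific tools, are taken from later versions of the library and are assumptions here:

```python
from langchain.agents import initialize_agent, load_tools
from langchain.llms import OpenAI

# Load an LLM and a couple of tools; the agent reads each tool's description
# to decide which one to call.
llm = OpenAI(temperature=0)
tools = load_tools(["serpapi", "llm-math"], llm=llm)  # assumed tool names

agent = initialize_agent(tools, llm, agent="zero-shot-react-description", verbose=True)
agent.run("What is 24 raised to the 0.5 power?")
```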
## `react-docstore`
### `react-docstore`
This agent uses the ReAct framework to interact with a docstore. Two tools must
be provided: a `Search` tool and a `Lookup` tool (they must be named exactly that).
@ -22,15 +21,9 @@ a term in the most recently found document.
This agent is equivalent to the
original [ReAct paper](https://arxiv.org/pdf/2210.03629.pdf), specifically the Wikipedia example.
## `self-ask-with-search`
### `self-ask-with-search`
This agent utilizes a single tool that should be named `Intermediate Answer`.
This tool should be able to look up factual answers to questions. This agent
is equivalent to the original [self ask with search paper](https://ofir.io/self-ask.pdf),
where a Google search API was provided as the tool.
### `conversational-react-description`
This agent is designed to be used in conversational settings.
The prompt is designed to make the agent helpful and conversational.
It uses the ReAct framework to decide which tool to use, and uses memory to remember the previous conversation interactions.

View File

@ -0,0 +1,37 @@
# Core Concepts
This section goes over the core concepts of LangChain.
Understanding these will go a long way in helping you understand the codebase and how to construct chains.
## PromptTemplates
PromptTemplates generically have a `format` method that takes in variables and returns a formatted string.
The simplest implementation of this is a template string with some variables in it, which is then formatted with the incoming variables.
More complex implementations dynamically construct the template string from few shot examples, etc.
For a more detailed explanation of how LangChain approaches prompts and prompt templates, see [here](/examples/prompts/prompt_management).
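
As a minimal sketch (the template text and variable name are purely illustrative):

```python
from langchain.prompts import PromptTemplate

# A template string with one variable; format() substitutes it in.
prompt = PromptTemplate(
    input_variables=["product"],
    template="What is a good name for a company that makes {product}?",
)
print(prompt.format(product="colorful socks"))
# -> What is a good name for a company that makes colorful socks?
```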
## LLMs
Wrappers around Large Language Models (in particular, the `generate` ability of large language models) are a core piece of LangChain's functionality.
These wrappers are classes that are callable: they take in an input string, and return the generated output string.
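
A minimal sketch, assuming the OpenAI wrapper and an `OPENAI_API_KEY` set in the environment:

```python
from langchain.llms import OpenAI

# The wrapper is callable: string in, generated string out.
llm = OpenAI(temperature=0.9)
print(llm("Tell me a joke about ducks."))
```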
## Embeddings
These classes are very similar to the LLM classes in that they are wrappers around models,
but rather than returning a string, they return an embedding (a list of floats). These are particularly useful when
implementing semantic search functionality. They expose separate methods for embedding queries versus embedding documents.
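
A minimal sketch, again using the OpenAI wrapper (the exact method names may differ by version):

```python
from langchain.embeddings import OpenAIEmbeddings

embeddings = OpenAIEmbeddings()
# Separate methods for queries and documents; both return lists of floats.
query_vector = embeddings.embed_query("What did the author work on?")
doc_vectors = embeddings.embed_documents(["First document text", "Second document text"])
```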
## Vectorstores
These are datastores that store documents. They expose a method for passing in a string and finding similar documents.
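
A minimal sketch using the FAISS vectorstore (the texts and query are illustrative; this assumes the `faiss` package is installed):

```python
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import FAISS

# Embed a few texts, store them, then find the document most similar to a query string.
store = FAISS.from_texts(
    ["harrison worked at kensho", "bears like to eat honey"],
    OpenAIEmbeddings(),
)
docs = store.similarity_search("Where did harrison work?", k=1)
print(docs[0].page_content)
```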
## Chains
These are pipelines that combine multiple of the above ideas.
They vary greatly in complexity and are a combination of generic, highly configurable pipelines and narrower (but usually more complex) pipelines.
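
The simplest chain pipes a PromptTemplate into an LLM; as a rough sketch (the template and input are illustrative):

```python
from langchain.chains import LLMChain
from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate

prompt = PromptTemplate(
    input_variables=["product"],
    template="What is a good name for a company that makes {product}?",
)
# LLMChain formats the prompt with the input and passes the result to the LLM.
chain = LLMChain(llm=OpenAI(temperature=0.9), prompt=prompt)
print(chain.predict(product="colorful socks"))
```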
## Agents
As opposed to a chain, where the steps to be taken are known ahead of time, agents
use an LLM to determine which tools to call and in what order.
## Memory
By default, Chains and Agents are stateless, meaning that they treat each incoming query independently.
In some applications (chatbots being a GREAT example) it is highly important to remember previous interactions,
both at a short-term and at a long-term level. The concept of "Memory" exists to do exactly that.

Some files were not shown because too many files have changed in this diff.