Compare commits


1 Commit

Author SHA1 Message Date
Harrison Chase 6dcbb74582 sequential chain from prompts 1 year ago

@@ -0,0 +1,2 @@
[run]
omit = tests/*

@@ -1,144 +0,0 @@
.vscode/
.idea/
# Byte-compiled / optimized / DLL files
__pycache__/
*.py[cod]
*$py.class
# C extensions
*.so
# Distribution / packaging
.Python
build/
develop-eggs/
dist/
downloads/
eggs/
.eggs/
lib/
lib64/
parts/
sdist/
var/
wheels/
pip-wheel-metadata/
share/python-wheels/
*.egg-info/
.installed.cfg
*.egg
MANIFEST
# PyInstaller
# Usually these files are written by a python script from a template
# before PyInstaller builds the exe, so as to inject date/other infos into it.
*.manifest
*.spec
# Installer logs
pip-log.txt
pip-delete-this-directory.txt
# Unit test / coverage reports
htmlcov/
.tox/
.nox/
.coverage
.coverage.*
.cache
nosetests.xml
coverage.xml
*.cover
*.py,cover
.hypothesis/
.pytest_cache/
# Translations
*.mo
*.pot
# Django stuff:
*.log
local_settings.py
db.sqlite3
db.sqlite3-journal
# Flask stuff:
instance/
.webassets-cache
# Scrapy stuff:
.scrapy
# Sphinx documentation
docs/_build/
# PyBuilder
target/
# Jupyter Notebook
.ipynb_checkpoints
notebooks/
# IPython
profile_default/
ipython_config.py
# pyenv
.python-version
# pipenv
# According to pypa/pipenv#598, it is recommended to include Pipfile.lock in version control.
# However, in case of collaboration, if having platform-specific dependencies or dependencies
# having no cross-platform support, pipenv may install dependencies that don't work, or not
# install all needed dependencies.
#Pipfile.lock
# PEP 582; used by e.g. github.com/David-OConnor/pyflow
__pypackages__/
# Celery stuff
celerybeat-schedule
celerybeat.pid
# SageMath parsed files
*.sage.py
# Environments
.env
.venv
.venvs
env/
venv/
ENV/
env.bak/
venv.bak/
# Spyder project settings
.spyderproject
.spyproject
# Rope project settings
.ropeproject
# mkdocs documentation
/site
# mypy
.mypy_cache/
.dmypy.json
dmypy.json
# Pyre type checker
.pyre/
# macOS display setting files
.DS_Store
# docker
docker/
!docker/assets/
.dockerignore
docker.build

@@ -1,36 +0,0 @@
name: linkcheck

on:
  push:
    branches: [master]
  pull_request:

env:
  POETRY_VERSION: "1.3.1"

jobs:
  build:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        python-version:
          - "3.11"
    steps:
      - uses: actions/checkout@v3
      - name: Install poetry
        run: |
          pipx install poetry==$POETRY_VERSION
      - name: Set up Python ${{ matrix.python-version }}
        uses: actions/setup-python@v4
        with:
          python-version: ${{ matrix.python-version }}
          cache: poetry
      - name: Install dependencies
        run: |
          poetry install --with docs
      - name: Build the docs
        run: |
          make docs_build
      - name: Analyzing the docs with linkcheck
        run: |
          make docs_linkcheck

@@ -1,49 +0,0 @@
name: release

on:
  pull_request:
    types:
      - closed
    branches:
      - master
    paths:
      - 'pyproject.toml'

env:
  POETRY_VERSION: "1.3.1"

jobs:
  if_release:
    if: |
      ${{ github.event.pull_request.merged == true }}
      && ${{ contains(github.event.pull_request.labels.*.name, 'release') }}
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Install poetry
        run: pipx install poetry==$POETRY_VERSION
      - name: Set up Python 3.10
        uses: actions/setup-python@v4
        with:
          python-version: "3.10"
          cache: "poetry"
      - name: Build project for distribution
        run: poetry build
      - name: Check Version
        id: check-version
        run: |
          echo version=$(poetry version --short) >> $GITHUB_OUTPUT
      - name: Create Release
        uses: ncipollo/release-action@v1
        with:
          artifacts: "dist/*"
          token: ${{ secrets.GITHUB_TOKEN }}
          draft: false
          generateReleaseNotes: true
          tag: v${{ steps.check-version.outputs.version }}
          commit: master
      - name: Publish to PyPI
        env:
          POETRY_PYPI_TOKEN_PYPI: ${{ secrets.PYPI_API_TOKEN }}
        run: |
          poetry publish

@@ -31,4 +31,4 @@ jobs:
        run: poetry install
      - name: Run unit tests
        run: |
          make test
          make tests

.gitignore (vendored): 6 changes

@@ -106,9 +106,7 @@ celerybeat.pid
# Environments
.env
!docker/.env
.venv
.venvs
env/
venv/
ENV/
@@ -132,7 +130,3 @@ dmypy.json
# Pyre type checker
.pyre/
# macOS display setting files
.DS_Store
docker.build

@@ -1,8 +0,0 @@
cff-version: 1.2.0
message: "If you use this software, please cite it as below."
authors:
  - family-names: "Chase"
    given-names: "Harrison"
title: "LangChain"
date-released: 2022-10-17
url: "https://github.com/hwchase17/langchain"

@@ -47,7 +47,7 @@ good code into the codebase.
### 🏭Release process
As of now, LangChain has an ad hoc release process: releases are cut with high frequency by
a developer and published to [PyPI](https://pypi.org/project/langchain/).
a developer and published to [PyPI](https://pypi.org/project/ruff/).
LangChain follows the [semver](https://semver.org/) versioning standard. However, as pre-1.0 software,
even patch releases may contain [non-backwards-compatible changes](https://semver.org/#spec-item-4).
@@ -55,15 +55,11 @@ even patch releases may contain [non-backwards-compatible changes](https://semver.org/#spec-item-4).
If your contribution has made its way into a release, we will want to give you credit on Twitter (only if you want though)!
If you have a Twitter account you would like us to mention, please let us know in the PR or in another manner.
## 🚀Quick Start
## 🤖Developer Setup
This project uses [Poetry](https://python-poetry.org/) as a dependency manager. Check out Poetry's [documentation on how to install it](https://python-poetry.org/docs/#installation) on your system before proceeding.
### 🚀Quick Start
❗Note: If you use `Conda` or `Pyenv` as your environment / package manager, avoid dependency conflicts by doing the following first:
1. *Before installing Poetry*, create and activate a new Conda env (e.g. `conda create -n langchain python=3.9`)
2. Install Poetry (see above)
3. Tell Poetry to use the virtualenv python environment (`poetry config virtualenvs.prefer-active-python true`)
4. Continue with the following steps.
This project uses [Poetry](https://python-poetry.org/) as a dependency manager. Check out Poetry's [documentation on how to install it](https://python-poetry.org/docs/#installation) on your system before proceeding.
To install requirements:
@@ -75,11 +71,9 @@ This will install all requirements for running the package, examples, linting, f
Now, you should be able to run the common tasks in the following section.
## ✅Common Tasks
Type `make` for a list of common tasks.
### ✅Common Tasks
### Code Formatting
#### Code Formatting
Formatting for this project is done via a combination of [Black](https://black.readthedocs.io/en/stable/) and [isort](https://pycqa.github.io/isort/).
@@ -89,7 +83,7 @@ To run formatting for this project:
make format
```
### Linting
#### Linting
Linting for this project is done via a combination of [Black](https://black.readthedocs.io/en/stable/), [isort](https://pycqa.github.io/isort/), [flake8](https://flake8.pycqa.org/en/latest/), and [mypy](http://mypy-lang.org/).
@@ -101,7 +95,7 @@ make lint
We recognize linting can be annoying - if you do not want to do it, please contact a project maintainer, and they can help you with it. We do not want this to be a blocker for good code getting contributed.
### Coverage
#### Coverage
Code coverage (i.e. the amount of code that is covered by unit tests) helps identify areas of the code that are potentially more or less brittle.
@@ -111,14 +105,14 @@ To get a report of current coverage, run the following:
make coverage
```
### Testing
#### Testing
Unit tests cover modular logic that does not require calls to outside APIs.
To run unit tests:
```bash
make test
make tests
```
If you add new logic, please add a unit test.
@@ -133,7 +127,7 @@ make integration_tests
If you add support for a new external API, please add a new integration test.
### Adding a Jupyter Notebook
#### Adding a Jupyter Notebook
If you are adding a Jupyter notebook example, you'll want to install the optional `dev` dependencies.
@@ -151,36 +145,10 @@ poetry run jupyter notebook
When you run `poetry install`, the `langchain` package is installed as editable in the virtualenv, so your new logic can be imported into the notebook.
## Using Docker
Refer to [DOCKER.md](docker/DOCKER.md) for more information.
## Documentation
### Contribute Documentation
#### Contribute Documentation
Docs are largely autogenerated by [sphinx](https://www.sphinx-doc.org/en/master/) from the code.
For that reason, we ask that you add good documentation to all classes and methods.
Similar to linting, we recognize documentation can be annoying. If you do not want to do it, please contact a project maintainer, and they can help you with it. We do not want this to be a blocker for good code getting contributed.
### Build Documentation Locally
Before building the documentation, it is always a good idea to clean the build directory:
```bash
make docs_clean
```
Next, you can run the linkchecker to make sure all links are valid:
```bash
make docs_linkcheck
```
Finally, you can build the documentation as outlined below:
```bash
make docs_build
```

@@ -1,73 +1,23 @@
.PHONY: all clean format lint test tests test_watch integration_tests help
.PHONY: format lint tests integration_tests
GIT_HASH ?= $(shell git rev-parse --short HEAD)
LANGCHAIN_VERSION := $(shell grep '^version' pyproject.toml | cut -d '=' -f2 | tr -d '"')
all: help
coverage:
	poetry run pytest --cov \
		--cov-config=.coveragerc \
		--cov-report xml \
		--cov-report term-missing:skip-covered

clean: docs_clean

docs_build:
	cd docs && poetry run make html

docs_clean:
	cd docs && poetry run make clean

docs_linkcheck:
	poetry run linkchecker docs/_build/html/index.html

format:
	poetry run black .
	poetry run ruff --select I --fix .
	poetry run isort .

lint:
	poetry run mypy .
	poetry run black . --check
	poetry run ruff .
	poetry run isort . --check
	poetry run flake8 .

test:
tests:
	poetry run pytest tests/unit_tests

tests: test

test_watch:
	poetry run ptw --now . -- tests/unit_tests

integration_tests:
	poetry run pytest tests/integration_tests
help:
	@echo '----'
	@echo 'coverage - run unit tests and generate coverage report'
	@echo 'docs_build - build the documentation'
	@echo 'docs_clean - clean the documentation build artifacts'
	@echo 'docs_linkcheck - run linkchecker on the documentation'
ifneq ($(shell command -v docker 2> /dev/null),)
	@echo 'docker - build and run the docker dev image'
	@echo 'docker.run - run the docker dev image'
	@echo 'docker.jupyter - start a jupyter notebook inside container'
	@echo 'docker.build - build the docker dev image'
	@echo 'docker.force_build - force a rebuild'
	@echo 'docker.test - run the unit tests in docker'
	@echo 'docker.lint - run the linters in docker'
	@echo 'docker.clean - remove the docker dev image'
endif
	@echo 'format - run code formatters'
	@echo 'lint - run linters'
	@echo 'test - run unit tests'
	@echo 'test_watch - run unit tests in watch mode'
	@echo 'integration_tests - run integration tests'

# include the following makefile if the docker executable is available
ifeq ($(shell command -v docker 2> /dev/null),)
$(info Docker not found, skipping docker-related targets)
else
include docker/Makefile
endif

@@ -1,15 +1,8 @@
# 🦜️🔗 LangChain - Docker
# 🦜️🔗 LangChain
WIP: This is a fork of langchain focused on implementing a docker wrapper and
toolchain. The goal is to make it easy to use LLM chains running inside a
container, build custom docker based tools and let agents run arbitrary
untrusted code inside.
⚡ Building applications with LLMs through composability ⚡
Currently exploring the following:
- Docker wrapper for LLMs and chains
- Creating a toolchain for building docker based LLM tools.
- Building agents that can run arbitrary untrusted code inside a container.
[![lint](https://github.com/hwchase17/langchain/actions/workflows/lint.yml/badge.svg)](https://github.com/hwchase17/langchain/actions/workflows/lint.yml) [![test](https://github.com/hwchase17/langchain/actions/workflows/test.yml/badge.svg)](https://github.com/hwchase17/langchain/actions/workflows/test.yml) [![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT) [![Twitter](https://img.shields.io/twitter/url/https/twitter.com/langchainai.svg?style=social&label=Follow%20%40LangChainAI)](https://twitter.com/langchainai) [![](https://dcbadge.vercel.app/api/server/6adMQxSpJS?compact=true&style=flat)](https://discord.gg/6adMQxSpJS)
## Quick Install
@@ -22,22 +15,7 @@ developers to build applications that they previously could not.
But using these LLMs in isolation is often not enough to
create a truly powerful app - the real power comes when you can combine them with other sources of computation or knowledge.
This library is aimed at assisting in the development of those types of applications. Common examples of these types of applications include:
**❓ Question Answering over specific documents**
- [Documentation](https://langchain.readthedocs.io/en/latest/use_cases/question_answering.html)
- End-to-end Example: [Question Answering over Notion Database](https://github.com/hwchase17/notion-qa)
**💬 Chatbots**
- [Documentation](https://langchain.readthedocs.io/en/latest/use_cases/chatbots.html)
- End-to-end Example: [Chat-LangChain](https://github.com/hwchase17/chat-langchain)
**🤖 Agents**
- [Documentation](https://langchain.readthedocs.io/en/latest/use_cases/agents.html)
- End-to-end Example: [GPT+WolframAlpha](https://huggingface.co/spaces/JavaFXpert/Chat-GPT-LangChain)
This library is aimed at assisting in the development of those types of applications.
## 📖 Documentation
@@ -50,7 +28,7 @@ Please see [here](https://langchain.readthedocs.io/en/latest/?) for full documen
## 🚀 What can this help with?
There are six main areas that LangChain is designed to help with.
There are four main areas that LangChain is designed to help with.
These are, in increasing order of complexity:
**📃 LLMs and Prompts:**
@@ -61,10 +39,6 @@ This includes prompt management, prompt optimization, generic interface for all
Chains go beyond just a single LLM call, and are sequences of calls (whether to an LLM or a different utility). LangChain provides a standard interface for chains, lots of integrations with other tools, and end-to-end chains for common applications.
**📚 Data Augmented Generation:**
Data Augmented Generation involves specific types of chains that first interact with an external datasource to fetch data to use in the generation step. Examples of this include summarization of long pieces of text and question/answering over specific data sources.
**🤖 Agents:**
Agents involve an LLM making decisions about which Actions to take, taking that Action, seeing an Observation, and repeating that until done. LangChain provides a standard interface for agents, a selection of agents to choose from, and examples of end to end agents.
@@ -73,14 +47,11 @@ Agents involve an LLM making decisions about which Actions to take, taking that
Memory is the concept of persisting state between calls of a chain/agent. LangChain provides a standard interface for memory, a collection of memory implementations, and examples of chains/agents that use memory.
**🧐 Evaluation:**
[BETA] Generative models are notoriously hard to evaluate with traditional metrics. One new way of evaluating them is using language models themselves to do the evaluation. LangChain provides some prompts/chains for assisting in this.
For more information on these concepts, please see our [full documentation](https://langchain.readthedocs.io/en/latest/?).
## 💁 Contributing
As an open source project in a rapidly developing field, we are extremely open to contributions, whether it be in the form of a new feature, improved infra, or better documentation.
As an open source project in a rapidly developing field, we are extremely open
to contributions, whether it be in the form of a new feature, improved infra, or better documentation.
For detailed information on how to contribute, see [here](CONTRIBUTING.md).

@@ -1,13 +0,0 @@
# python env
PYTHON_VERSION=3.10
# -E flag is required
# comment the following line to only install dev dependencies
POETRY_EXTRA_PACKAGES="-E all"
# at least one group needed
POETRY_DEPENDENCIES="dev,test,lint,typing"
# langchain env. warning: these variables will be baked into the docker image !
OPENAI_API_KEY=${OPENAI_API_KEY:-}
SERPAPI_API_KEY=${SERPAPI_API_KEY:-}

@@ -1,53 +0,0 @@
# Using Docker
To quickly get started, run the command `make docker`.
If docker is installed, the Makefile will export extra targets in the format `docker.*` to build and run the docker image. Type `make` for a list of available tasks.
There is a basic `docker-compose.yml` in the docker directory.
## Building the development image
Using `make docker` builds the dev image if it does not exist, then drops
you inside the container with the langchain environment available in the shell.
### Customizing the image and installed dependencies
The image is built with a default python version and all extras and dev
dependencies. It can be customized by changing the variables in the [.env](/docker/.env)
file.
If you don't need all the `extra` dependencies, a slimmer image can be obtained by
commenting out `POETRY_EXTRA_PACKAGES` in the [.env](docker/.env) file.
### Image caching
The Dockerfile is optimized to cache the poetry install step. A rebuild is triggered when there is a change to the source code.
## Example Usage
All commands from langchain's python environment are available by default in the container.
A few examples:
```bash
# run jupyter notebook
docker run --rm -it IMG jupyter notebook
# run ipython
docker run --rm -it IMG ipython
# start web server
docker run --rm -p 8888:8888 IMG python -m http.server 8888
```
## Testing / Linting
Tests and lints are run using your local source directory that is mounted on the volume /src.
Run unit tests in the container with `make docker.test`.
Run the linting and formatting checks with `make docker.lint`.
Note: this task can run in parallel using `make -j4 docker.lint`.

@@ -1,104 +0,0 @@
# vim: ft=dockerfile
#
# see also: https://github.com/python-poetry/poetry/discussions/1879
# - with https://github.com/bneijt/poetry-lock-docker
# see https://github.com/thehale/docker-python-poetry
# see https://github.com/max-pfeiffer/uvicorn-poetry
# use by default the slim version of python
ARG PYTHON_IMAGE_TAG=slim
ARG PYTHON_VERSION=${PYTHON_VERSION:-3.11.2}
####################
# Base Environment
####################
FROM python:$PYTHON_VERSION-$PYTHON_IMAGE_TAG AS lchain-base
ARG UID=1000
ARG USERNAME=lchain
ENV USERNAME=$USERNAME
RUN groupadd -g ${UID} $USERNAME
RUN useradd -l -m -u ${UID} -g ${UID} $USERNAME
# used for mounting source code
RUN mkdir /src
VOLUME /src
#######################
## Poetry Builder Image
#######################
FROM lchain-base AS lchain-base-builder
ARG POETRY_EXTRA_PACKAGES=$POETRY_EXTRA_PACKAGES
ARG POETRY_DEPENDENCIES=$POETRY_DEPENDENCIES
ENV HOME=/root
ENV POETRY_HOME=/root/.poetry
ENV POETRY_VIRTUALENVS_IN_PROJECT=false
ENV POETRY_NO_INTERACTION=1
ENV CACHE_DIR=$HOME/.cache
ENV POETRY_CACHE_DIR=$CACHE_DIR/pypoetry
ENV PATH="$POETRY_HOME/bin:$PATH"
WORKDIR /root
RUN apt-get update && \
apt-get install -y \
build-essential \
git \
curl
SHELL ["/bin/bash", "-o", "pipefail", "-c"]
RUN mkdir -p $CACHE_DIR
## setup poetry
RUN curl -sSL -o $CACHE_DIR/pypoetry-installer.py https://install.python-poetry.org/
RUN python3 $CACHE_DIR/pypoetry-installer.py
# Copy poetry files
COPY poetry.* pyproject.toml ./
RUN mkdir /pip-prefix
RUN poetry export $POETRY_EXTRA_PACKAGES --with $POETRY_DEPENDENCIES -f requirements.txt --output requirements.txt --without-hashes && \
pip install --no-cache-dir --disable-pip-version-check --prefix /pip-prefix -r requirements.txt
# add custom motd message
COPY docker/assets/etc/motd /tmp/motd
RUN cat /tmp/motd > /etc/motd
RUN printf "\n%s\n%s\n" "$(poetry version)" "$(python --version)" >> /etc/motd
###################
## Runtime Image
###################
FROM lchain-base AS lchain
#jupyter port
EXPOSE 8888
COPY docker/assets/entry.sh /entry
RUN chmod +x /entry
COPY --from=lchain-base-builder /etc/motd /etc/motd
COPY --from=lchain-base-builder /usr/bin/git /usr/bin/git
USER ${USERNAME:-lchain}
ENV HOME /home/$USERNAME
WORKDIR /home/$USERNAME
COPY --chown=lchain:lchain --from=lchain-base-builder /pip-prefix $HOME/.local/
COPY . .
SHELL ["/bin/bash", "-o", "pipefail", "-c"]
RUN pip install --no-deps --disable-pip-version-check --no-cache-dir -e .
ENTRYPOINT ["/entry"]

@@ -1,84 +0,0 @@
#do not call this makefile it is included in the main Makefile
.PHONY: docker docker.jupyter docker.run docker.force_build docker.clean \
docker.test docker.lint docker.lint.mypy docker.lint.black \
docker.lint.isort docker.lint.flake
# read python version from .env file ignoring comments
PYTHON_VERSION := $(shell grep PYTHON_VERSION docker/.env | cut -d '=' -f2)
POETRY_EXTRA_PACKAGES := $(shell grep '^[^#]*POETRY_EXTRA_PACKAGES' docker/.env | cut -d '=' -f2)
POETRY_DEPENDENCIES := $(shell grep 'POETRY_DEPENDENCIES' docker/.env | cut -d '=' -f2)
DOCKER_SRC := $(shell find docker -type f)
DOCKER_IMAGE_NAME = langchain/dev
# SRC is all files matched by the git ls-files command
SRC := $(shell git ls-files -- '*' ':!:docker/*')
# set DOCKER_BUILD_PROGRESS=plain to see detailed build progress
DOCKER_BUILD_PROGRESS ?= auto
# extra message to show when entering the docker container
DOCKER_MOTD := docker/assets/etc/motd
ROOTDIR := $(shell git rev-parse --show-toplevel)
DOCKER_LINT_CMD = docker run --rm -i -u lchain -v $(ROOTDIR):/src $(DOCKER_IMAGE_NAME):$(GIT_HASH)
docker: docker.run

docker.run: docker.build
	@echo "Docker image: $(DOCKER_IMAGE_NAME):$(GIT_HASH)"
	docker run --rm -it -u lchain -v $(ROOTDIR):/src $(DOCKER_IMAGE_NAME):$(GIT_HASH)

docker.jupyter: docker.build
	docker run --rm -it -v $(ROOTDIR):/src $(DOCKER_IMAGE_NAME):$(GIT_HASH) jupyter notebook

docker.build: $(SRC) $(DOCKER_SRC) $(DOCKER_MOTD)
ifdef $(DOCKER_BUILDKIT)
	docker buildx build --build-arg PYTHON_VERSION=$(PYTHON_VERSION) \
		--build-arg POETRY_EXTRA_PACKAGES=$(POETRY_EXTRA_PACKAGES) \
		--build-arg POETRY_DEPENDENCIES=$(POETRY_DEPENDENCIES) \
		--progress=$(DOCKER_BUILD_PROGRESS) \
		$(BUILD_FLAGS) -f docker/Dockerfile -t $(DOCKER_IMAGE_NAME):$(GIT_HASH) .
else
	docker build --build-arg PYTHON_VERSION=$(PYTHON_VERSION) \
		--build-arg POETRY_EXTRA_PACKAGES=$(POETRY_EXTRA_PACKAGES) \
		--build-arg POETRY_DEPENDENCIES=$(POETRY_DEPENDENCIES) \
		$(BUILD_FLAGS) -f docker/Dockerfile -t $(DOCKER_IMAGE_NAME):$(GIT_HASH) .
endif
	docker tag $(DOCKER_IMAGE_NAME):$(GIT_HASH) $(DOCKER_IMAGE_NAME):latest
	@touch $@ # this prevents docker from rebuilding dependencies that have not
	@ # changed. Remove the file `docker/docker.build` to force a rebuild.

docker.force_build: $(DOCKER_SRC)
	@rm -f docker.build
	@$(MAKE) docker.build BUILD_FLAGS=--no-cache

docker.clean:
	docker rmi $(DOCKER_IMAGE_NAME):$(GIT_HASH) $(DOCKER_IMAGE_NAME):latest

docker.test: docker.build
	docker run --rm -it -u lchain -v $(ROOTDIR):/src $(DOCKER_IMAGE_NAME):$(GIT_HASH) \
		pytest /src/tests/unit_tests

# this assumes that the docker image has been built
docker.lint: docker.lint.mypy docker.lint.black docker.lint.isort \
	docker.lint.flake

# these can run in parallel with -j[njobs]
docker.lint.mypy:
	@$(DOCKER_LINT_CMD) mypy /src
	@printf "\t%s\n" "mypy ... "

docker.lint.black:
	@$(DOCKER_LINT_CMD) black /src --check
	@printf "\t%s\n" "black ... "

docker.lint.isort:
	@$(DOCKER_LINT_CMD) isort /src --check
	@printf "\t%s\n" "isort ... "

docker.lint.flake:
	@$(DOCKER_LINT_CMD) flake8 /src
	@printf "\t%s\n" "flake8 ... "

@@ -1,10 +0,0 @@
#!/usr/bin/env bash

export PATH=$HOME/.local/bin:$PATH

if [ -z "$1" ]; then
    cat /etc/motd
    exec /bin/bash
fi

exec "$@"

@@ -1,8 +0,0 @@
All dependencies have been installed in the current shell. There is no
virtualenv or need for `poetry` inside the container.
Running the command `make docker.run` at the root directory of the project will
build the container the first time. On subsequent runs it will use the cached
image. A rebuild will happen when changes are made to the source code.
Your local source directory has been mounted to the /src directory.

@@ -1,17 +0,0 @@
version: "3.7"

services:
  langchain:
    hostname: langchain
    image: langchain/dev:latest
    build:
      context: ../
      dockerfile: docker/Dockerfile
      args:
        PYTHON_VERSION: ${PYTHON_VERSION}
        POETRY_EXTRA_PACKAGES: ${POETRY_EXTRA_PACKAGES}
        POETRY_DEPENDENCIES: ${POETRY_DEPENDENCIES}
    restart: unless-stopped
    ports:
      - 127.0.0.1:8888:8888

@@ -3,7 +3,7 @@
# You can set these variables from the command line, and also
# from the environment for the first two.
SPHINXOPTS ?=
SPHINXOPTS ?=
SPHINXBUILD ?= sphinx-build
SPHINXAUTOBUILD ?= sphinx-autobuild
SOURCEDIR = .

Binary file not shown. (Before: 235 KiB)

Binary file not shown. (Before: 148 KiB)

@@ -1,13 +0,0 @@
pre {
  white-space: break-spaces;
}

@media (min-width: 1200px) {
  .container,
  .container-lg,
  .container-md,
  .container-sm,
  .container-xl {
    max-width: 2560px !important;
  }
}

@@ -22,15 +22,13 @@ with open("../pyproject.toml") as f:
# -- Project information -----------------------------------------------------
project = "🦜🔗 LangChain"
project = "LangChain"
copyright = "2022, Harrison Chase"
author = "Harrison Chase"
version = data["tool"]["poetry"]["version"]
release = version
html_title = project + " " + version
# -- General configuration ---------------------------------------------------
@@ -44,11 +42,11 @@ extensions = [
"sphinx.ext.napoleon",
"sphinx.ext.viewcode",
"sphinxcontrib.autodoc_pydantic",
"myst_nb",
"myst_parser",
"nbsphinx",
"sphinx_panels",
"IPython.sphinxext.ipython_console_highlighting",
]
source_suffix = [".ipynb", ".html", ".md", ".rst"]
autodoc_pydantic_model_show_json = False
autodoc_pydantic_field_list_validators = False
@@ -75,13 +73,8 @@ exclude_patterns = ["_build", "Thumbs.db", ".DS_Store"]
# The theme to use for HTML and HTML Help pages. See the documentation for
# a list of builtin themes.
#
html_theme = "sphinx_book_theme"
html_theme_options = {
    "path_to_docs": "docs",
    "repository_url": "https://github.com/hwchase17/langchain",
    "use_repository_button": True,
}
html_theme = "sphinx_rtd_theme"
# html_theme = "sphinx_typlog_theme"
html_context = {
"display_github": True, # Integrate GitHub
@@ -94,12 +87,4 @@ html_context = {
# Add any paths that contain custom static files (such as style sheets) here,
# relative to this directory. They are copied after the builtin static files,
# so a file named "default.css" will overwrite the builtin "default.css".
html_static_path = ["_static"]
# These paths are either relative to html_static_path
# or fully qualified paths (eg. https://...)
html_css_files = [
    "css/custom.css",
]
nb_execution_mode = "off"
myst_enable_extensions = ["colon_fence"]
html_static_path: list = []

@@ -1,39 +0,0 @@
# Deployments
So you've made a really cool chain - now what? How do you deploy it and make it easily sharable with the world?
This section covers several options for that.
Note that these are meant as quick deployment options for prototypes and demos, and not for production systems.
If you are looking for help with deployment of a production system, please contact us directly.
What follows is a list of template GitHub repositories that are intended to be
very easy to fork and modify to use with your chain.
This is far from an exhaustive list of options, and we are EXTREMELY open to contributions here.
## [Streamlit](https://github.com/hwchase17/langchain-streamlit-template)
This repo serves as a template for how to deploy a LangChain app with Streamlit.
It implements a chatbot interface.
It also contains instructions for how to deploy this app on the Streamlit platform.
## [Gradio (on Hugging Face)](https://github.com/hwchase17/langchain-gradio-template)
This repo serves as a template for how to deploy a LangChain app with Gradio.
It implements a chatbot interface, with a "Bring-Your-Own-Token" approach (nice for not racking up big bills).
It also contains instructions for how to deploy this app on the Hugging Face platform.
This is heavily influenced by James Weaver's [excellent examples](https://huggingface.co/JavaFXpert).
## [Beam](https://github.com/slai-labs/get-beam/tree/main/examples/langchain-question-answering)
This repo serves as a template for how to deploy a LangChain app with [Beam](https://beam.cloud).
It implements a Question Answering app and contains instructions for deploying the app as a serverless REST API.
## [Vercel](https://github.com/homanp/vercel-langchain)
A minimal example on how to run LangChain on Vercel using Flask.
## [Steamship](https://github.com/steamship-core/steamship-langchain/)
This repository contains LangChain adapters for Steamship, enabling LangChain developers to rapidly deploy their apps on Steamship.
This includes: production-ready endpoints, horizontal scaling across dependencies, persistent storage of app state, multi-tenancy support, etc.

@@ -1,10 +0,0 @@
LangChain Ecosystem
===================
Guides for how other companies/products can be used with LangChain
.. toctree::
   :maxdepth: 1
   :glob:

   ecosystem/*

@@ -1,16 +0,0 @@
# AI21 Labs
This page covers how to use the AI21 ecosystem within LangChain.
It is broken into two parts: installation and setup, and then references to specific AI21 wrappers.
## Installation and Setup
- Get an AI21 api key and set it as an environment variable (`AI21_API_KEY`)
## Wrappers
### LLM
There exists an AI21 LLM wrapper, which you can access with
```python
from langchain.llms import AI21
```
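For illustration, a minimal usage sketch (assuming the `AI21_API_KEY` environment variable is set; the prompt is arbitrary):
```python
from langchain.llms import AI21

# Authentication is read from the AI21_API_KEY environment variable
llm = AI21(temperature=0.7)
print(llm("Tell me a joke about bears."))
```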

@@ -1,25 +0,0 @@
# AtlasDB
This page covers how to use Nomic's Atlas ecosystem within LangChain.
It is broken into two parts: installation and setup, and then references to specific Atlas wrappers.
## Installation and Setup
- Install the Python package with `pip install nomic`
- Nomic is also included in LangChain's poetry extras: `poetry install -E all`
## Wrappers
### VectorStore
There exists a wrapper around the Atlas neural database, allowing you to use it as a vectorstore.
This vectorstore also gives you full access to the underlying AtlasProject object, which will allow you to use the full range of Atlas map interactions, such as bulk tagging and automatic topic modeling.
Please see [the Nomic docs](https://docs.nomic.ai/atlas_api.html) for more detailed information.
To import this vectorstore:
```python
from langchain.vectorstores import AtlasDB
```
For a more detailed walkthrough of the AtlasDB wrapper, see [this notebook](../modules/indexes/examples/vectorstores.ipynb)

@@ -1,79 +0,0 @@
# Banana
This page covers how to use the Banana ecosystem within LangChain.
It is broken into two parts: installation and setup, and then references to specific Banana wrappers.
## Installation and Setup
- Install with `pip3 install banana-dev`
- Get a Banana API key and set it as an environment variable (`BANANA_API_KEY`)
## Define your Banana Template
If you want to use an available language model template you can find one [here](https://app.banana.dev/templates/conceptofmind/serverless-template-palmyra-base).
This template uses the Palmyra-Base model by [Writer](https://writer.com/product/api/).
You can check out an example Banana repository [here](https://github.com/conceptofmind/serverless-template-palmyra-base).
## Build the Banana app
Banana apps must include the "output" key in the returned JSON.
There is a rigid response structure:
```python
# Return the results as a dictionary
result = {'output': result}
```
An example inference function would be:
```python
def inference(model_inputs: dict) -> dict:
    global model
    global tokenizer

    # Parse out your arguments
    prompt = model_inputs.get('prompt', None)
    if prompt is None:
        return {'message': "No prompt provided"}

    # Run the model
    input_ids = tokenizer.encode(prompt, return_tensors='pt').cuda()
    output = model.generate(
        input_ids,
        max_length=100,
        do_sample=True,
        top_k=50,
        top_p=0.95,
        num_return_sequences=1,
        temperature=0.9,
        early_stopping=True,
        no_repeat_ngram_size=3,
        num_beams=5,
        length_penalty=1.5,
        repetition_penalty=1.5,
        bad_words_ids=[[tokenizer.encode(' ', add_prefix_space=True)[0]]]
    )
    result = tokenizer.decode(output[0], skip_special_tokens=True)

    # Return the results as a dictionary
    result = {'output': result}
    return result
```
You can find a full example of a Banana app [here](https://github.com/conceptofmind/serverless-template-palmyra-base/blob/main/app.py).
## Wrappers
### LLM
There exists a Banana LLM wrapper, which you can access with
```python
from langchain.llms import Banana
```
You need to provide a model key located in the dashboard:
```python
llm = Banana(model_key="YOUR_MODEL_KEY")
```

@@ -1,17 +0,0 @@
# CerebriumAI
This page covers how to use the CerebriumAI ecosystem within LangChain.
It is broken into two parts: installation and setup, and then references to specific CerebriumAI wrappers.
## Installation and Setup
- Install with `pip install cerebrium`
- Get a CerebriumAI API key and set it as an environment variable (`CEREBRIUMAI_API_KEY`)
## Wrappers
### LLM
There exists a CerebriumAI LLM wrapper, which you can access with
```python
from langchain.llms import CerebriumAI
```

@@ -1,20 +0,0 @@
# Chroma
This page covers how to use the Chroma ecosystem within LangChain.
It is broken into two parts: installation and setup, and then references to specific Chroma wrappers.
## Installation and Setup
- Install the Python package with `pip install chromadb`
## Wrappers
### VectorStore
There exists a wrapper around Chroma vector databases, allowing you to use it as a vectorstore,
whether for semantic search or example selection.
To import this vectorstore:
```python
from langchain.vectorstores import Chroma
```
For a more detailed walkthrough of the Chroma wrapper, see [this notebook](../modules/indexes/examples/vectorstores.ipynb)
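As a sketch of how the pieces fit together (the embedding model and texts here are illustrative; any `Embeddings` implementation works):
```python
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import Chroma

# Build an in-memory Chroma index from a few texts, then query it
texts = ["LangChain composes LLM calls", "Chroma stores embeddings"]
docsearch = Chroma.from_texts(texts, OpenAIEmbeddings())
docs = docsearch.similarity_search("What does Chroma do?")
```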

@@ -1,25 +0,0 @@
# Cohere
This page covers how to use the Cohere ecosystem within LangChain.
It is broken into two parts: installation and setup, and then references to specific Cohere wrappers.
## Installation and Setup
- Install the Python SDK with `pip install cohere`
- Get a Cohere API key and set it as an environment variable (`COHERE_API_KEY`)
## Wrappers
### LLM
There exists a Cohere LLM wrapper, which you can access with
```python
from langchain.llms import Cohere
```
### Embeddings
There exists a Cohere Embeddings wrapper, which you can access with
```python
from langchain.embeddings import CohereEmbeddings
```
For a more detailed walkthrough of this, see [this notebook](../modules/indexes/examples/embeddings.ipynb)
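A minimal sketch of the embeddings wrapper in use (assumes `COHERE_API_KEY` is set):
```python
from langchain.embeddings import CohereEmbeddings

# Reads the COHERE_API_KEY environment variable
embeddings = CohereEmbeddings()
query_vector = embeddings.embed_query("Hello world")
doc_vectors = embeddings.embed_documents(["first document", "second document"])
```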

@@ -1,17 +0,0 @@
# DeepInfra
This page covers how to use the DeepInfra ecosystem within LangChain.
It is broken into two parts: installation and setup, and then references to specific DeepInfra wrappers.
## Installation and Setup
- Get your DeepInfra api key from this link [here](https://deepinfra.com/).
- Set your API key as an environment variable (`DEEPINFRA_API_TOKEN`)
## Wrappers
### LLM
There exists a DeepInfra LLM wrapper, which you can access with
```python
from langchain.llms import DeepInfra
```

@@ -1,25 +0,0 @@
# Deep Lake
This page covers how to use the Deep Lake ecosystem within LangChain.
It is broken into two parts: installation and setup, and then references to specific Deep Lake wrappers. For more information, see:
1. The Deep Lake [whitepaper](https://www.deeplake.ai/whitepaper) and [academic paper](https://arxiv.org/pdf/2209.10785.pdf)
2. Additional resources: [Deep Lake](https://github.com/activeloopai/deeplake), [Getting Started](https://docs.activeloop.ai/getting-started) and [Tutorials](https://docs.activeloop.ai/hub-tutorials)
## Installation and Setup
- Install the Python package with `pip install deeplake`
## Wrappers
### VectorStore
There exists a wrapper around Deep Lake, a data lake for Deep Learning applications, allowing you to use it as a vectorstore (for now), whether for semantic search or example selection.
To import this vectorstore:
```python
from langchain.vectorstores import DeepLake
```
For a more detailed walkthrough of the Deep Lake wrapper, see [this notebook](../modules/indexes/vectorstore_examples/deeplake.ipynb)
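A minimal sketch of building a Deep Lake vectorstore (the embedding model and texts are illustrative):
```python
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import DeepLake

# Creates a local Deep Lake dataset and indexes the texts
db = DeepLake.from_texts(["foo", "bar"], OpenAIEmbeddings())
docs = db.similarity_search("foo")
```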

@@ -1,16 +0,0 @@
# ForefrontAI
This page covers how to use the ForefrontAI ecosystem within LangChain.
It is broken into two parts: installation and setup, and then references to specific ForefrontAI wrappers.
## Installation and Setup
- Get a ForefrontAI API key and set it as an environment variable (`FOREFRONTAI_API_KEY`)
## Wrappers
### LLM
There exists a ForefrontAI LLM wrapper, which you can access with
```python
from langchain.llms import ForefrontAI
```

@@ -1,32 +0,0 @@
# Google Search Wrapper
This page covers how to use the Google Search API within LangChain.
It is broken into two parts: installation and setup, and then references to the specific Google Search wrapper.
## Installation and Setup
- Install requirements with `pip install google-api-python-client`
- Set up a Custom Search Engine, following [these instructions](https://stackoverflow.com/questions/37083058/programmatically-searching-google-in-python-using-custom-search)
- Get an API Key and Custom Search Engine ID from the previous step, and set them as environment variables `GOOGLE_API_KEY` and `GOOGLE_CSE_ID` respectively
## Wrappers
### Utility
There exists a GoogleSearchAPIWrapper utility which wraps this API. To import this utility:
```python
from langchain.utilities import GoogleSearchAPIWrapper
```
For a more detailed walkthrough of this wrapper, see [this notebook](../modules/utils/examples/google_search.ipynb).
### Tool
You can also easily load this wrapper as a Tool (to use with an Agent).
You can do this with:
```python
from langchain.agents import load_tools
tools = load_tools(["google-search"])
```
For more information on this, see [this page](../modules/agents/tools.md)
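For illustration, a minimal sketch of using the tool with an agent (assumes `GOOGLE_API_KEY`, `GOOGLE_CSE_ID`, and `OPENAI_API_KEY` are set; the agent type shown is one common choice):
```python
from langchain.agents import initialize_agent, load_tools
from langchain.llms import OpenAI

llm = OpenAI(temperature=0)
tools = load_tools(["google-search"])
agent = initialize_agent(tools, llm, agent="zero-shot-react-description", verbose=True)
agent.run("What is the tallest building in the world?")
```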

@@ -1,71 +0,0 @@
# Google Serper Wrapper
This page covers how to use the [Serper](https://serper.dev) Google Search API within LangChain. Serper is a low-cost Google Search API that can be used to add answer box, knowledge graph, and organic results data from Google Search.
It is broken into two parts: setup, and then references to the specific Google Serper wrapper.
## Setup
- Go to [serper.dev](https://serper.dev) to sign up for a free account
- Get the api key and set it as an environment variable (`SERPER_API_KEY`)
## Wrappers
### Utility
There exists a GoogleSerperAPIWrapper utility which wraps this API. To import this utility:
```python
from langchain.utilities import GoogleSerperAPIWrapper
```
You can use it as part of a Self Ask chain:
```python
from langchain.utilities import GoogleSerperAPIWrapper
from langchain.llms.openai import OpenAI
from langchain.agents import initialize_agent, Tool
import os
os.environ["SERPER_API_KEY"] = ""
os.environ['OPENAI_API_KEY'] = ""
llm = OpenAI(temperature=0)
search = GoogleSerperAPIWrapper()
tools = [
    Tool(
        name="Intermediate Answer",
        func=search.run
    )
]
self_ask_with_search = initialize_agent(tools, llm, agent="self-ask-with-search", verbose=True)
self_ask_with_search.run("What is the hometown of the reigning men's U.S. Open champion?")
```
#### Output
```
> Entering new AgentExecutor chain...
Yes.
Follow up: Who is the reigning men's U.S. Open champion?
Intermediate answer: Current champions Carlos Alcaraz, 2022 men's singles champion.
Follow up: Where is Carlos Alcaraz from?
Intermediate answer: El Palmar, Spain
So the final answer is: El Palmar, Spain
> Finished chain.
'El Palmar, Spain'
```
For a more detailed walkthrough of this wrapper, see [this notebook](../modules/utils/examples/google_serper.ipynb).
### Tool
You can also easily load this wrapper as a Tool (to use with an Agent).
You can do this with:
```python
from langchain.agents import load_tools
tools = load_tools(["google-serper"])
```
For more information on this, see [this page](../modules/agents/tools.md)

@@ -1,23 +0,0 @@
# GooseAI
This page covers how to use the GooseAI ecosystem within LangChain.
It is broken into two parts: installation and setup, and then references to specific GooseAI wrappers.
## Installation and Setup
- Install the Python SDK with `pip install openai`
- Get your GooseAI api key from this link [here](https://goose.ai/).
- Set the environment variable (`GOOSEAI_API_KEY`).
```python
import os
os.environ["GOOSEAI_API_KEY"] = "YOUR_API_KEY"
```
## Wrappers
### LLM
There exists a GooseAI LLM wrapper, which you can access with:
```python
from langchain.llms import GooseAI
```
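A minimal usage sketch (assuming `GOOSEAI_API_KEY` is set as above):
```python
from langchain.llms import GooseAI

# Reads the GOOSEAI_API_KEY environment variable
llm = GooseAI()
print(llm("Write a one-line haiku about autumn."))
```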

@@ -1,38 +0,0 @@
# Graphsignal
This page covers how to use Graphsignal to trace and monitor LangChain.
## Installation and Setup
- Install the Python library with `pip install graphsignal`
- Create a free Graphsignal account [here](https://graphsignal.com)
- Get an API key and set it as an environment variable (`GRAPHSIGNAL_API_KEY`)
## Tracing and Monitoring
Graphsignal automatically instruments and starts tracing and monitoring chains. Traces, metrics and errors are then available in your [Graphsignal dashboard](https://app.graphsignal.com/). No prompts or other sensitive data are sent to Graphsignal cloud, only statistics and metadata.
Initialize the tracer by providing a deployment name:
```python
import graphsignal
graphsignal.configure(deployment='my-langchain-app-prod')
```
In order to trace full runs and see a breakdown by chains and tools, you can wrap the calling routine or use a decorator:
```python
with graphsignal.start_trace('my-chain'):
    chain.run("some initial text")
```
Optionally, enable profiling to record function-level statistics for each trace.
```python
with graphsignal.start_trace(
        'my-chain', options=graphsignal.TraceOptions(enable_profiling=True)):
    chain.run("some initial text")
```
See the [Quick Start](https://graphsignal.com/docs/guides/quick-start/) guide for complete setup instructions.

@@ -1,19 +0,0 @@
# Hazy Research
This page covers how to use the Hazy Research ecosystem within LangChain.
It is broken into two parts: installation and setup, and then references to specific Hazy Research wrappers.
## Installation and Setup
- To use `manifest`, install it with `pip install manifest-ml`
## Wrappers
### LLM
There exists an LLM wrapper around Hazy Research's `manifest` library.
`manifest` is a python library which is itself a wrapper around many model providers, and adds in caching, history, and more.
To use this wrapper:
```python
from langchain.llms.manifest import ManifestWrapper
```
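A minimal sketch of wiring `manifest` into LangChain (the `client_name` and provider credentials are illustrative; see the `manifest-ml` docs for connection options):
```python
from manifest import Manifest
from langchain.llms.manifest import ManifestWrapper

# manifest routes requests to a provider; here an OpenAI-backed client is assumed
manifest = Manifest(client_name="openai")
llm = ManifestWrapper(client=manifest, llm_kwargs={"temperature": 0.0})
print(llm("Tell me a joke."))
```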

@@ -1,53 +0,0 @@
# Helicone
This page covers how to use the [Helicone](https://helicone.ai) within LangChain.
## What is Helicone?
Helicone is an [open source](https://github.com/Helicone/helicone) observability platform that proxies your OpenAI traffic and provides key insights into your spend, latency, and usage.
![Helicone](../_static/HeliconeDashboard.png)
## Quick start
With your LangChain environment you can just add the following parameter.
```bash
export OPENAI_API_BASE="https://oai.hconeai.com/v1"
```
Now head over to [helicone.ai](https://helicone.ai/onboarding?step=2) to create your account, and add your OpenAI API key within our dashboard to view your logs.
![Helicone](../_static/HeliconeKeys.png)
## How to enable Helicone caching
```python
from langchain.llms import OpenAI
import openai
openai.api_base = "https://oai.hconeai.com/v1"
llm = OpenAI(temperature=0.9, headers={"Helicone-Cache-Enabled": "true"})
text = "What is a helicone?"
print(llm(text))
```
[Helicone caching docs](https://docs.helicone.ai/advanced-usage/caching)
## How to use Helicone custom properties
```python
from langchain.llms import OpenAI
import openai
openai.api_base = "https://oai.hconeai.com/v1"
llm = OpenAI(temperature=0.9, headers={
    "Helicone-Property-Session": "24",
    "Helicone-Property-Conversation": "support_issue_2",
    "Helicone-Property-App": "mobile",
})
text = "What is a helicone?"
print(llm(text))
```
[Helicone property docs](https://docs.helicone.ai/advanced-usage/custom-properties)

@@ -1,69 +0,0 @@
# Hugging Face
This page covers how to use the Hugging Face ecosystem (including the [Hugging Face Hub](https://huggingface.co)) within LangChain.
It is broken into two parts: installation and setup, and then references to specific Hugging Face wrappers.
## Installation and Setup
If you want to work with the Hugging Face Hub:
- Install the Hub client library with `pip install huggingface_hub`
- Create a Hugging Face account (it's free!)
- Create an [access token](https://huggingface.co/docs/hub/security-tokens) and set it as an environment variable (`HUGGINGFACEHUB_API_TOKEN`)
If you want to work with the Hugging Face Python libraries:
- Install `pip install transformers` for working with models and tokenizers
- Install `pip install datasets` for working with datasets
## Wrappers
### LLM
There exist two Hugging Face LLM wrappers, one for a local pipeline and one for a model hosted on Hugging Face Hub.
Note that these wrappers only work for models that support the following tasks: [`text2text-generation`](https://huggingface.co/models?library=transformers&pipeline_tag=text2text-generation&sort=downloads), [`text-generation`](https://huggingface.co/models?library=transformers&pipeline_tag=text-generation&sort=downloads)
To use the local pipeline wrapper:
```python
from langchain.llms import HuggingFacePipeline
```
To use the wrapper for a model hosted on Hugging Face Hub:
```python
from langchain.llms import HuggingFaceHub
```
For a more detailed walkthrough of the Hugging Face Hub wrapper, see [this notebook](../modules/llms/integrations/huggingface_hub.ipynb)
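A minimal usage sketch (assumes `HUGGINGFACEHUB_API_TOKEN` is set; the `repo_id` is illustrative):
```python
from langchain.llms import HuggingFaceHub

llm = HuggingFaceHub(repo_id="google/flan-t5-large", model_kwargs={"temperature": 0.5})
print(llm("Translate to German: How are you?"))
```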
### Embeddings
There exist two Hugging Face Embeddings wrappers, one for a local model and one for a model hosted on Hugging Face Hub.
Note that these wrappers only work for [`sentence-transformers` models](https://huggingface.co/models?library=sentence-transformers&sort=downloads).
To use the local pipeline wrapper:
```python
from langchain.embeddings import HuggingFaceEmbeddings
```
To use the wrapper for a model hosted on Hugging Face Hub:
```python
from langchain.embeddings import HuggingFaceHubEmbeddings
```
For a more detailed walkthrough of this, see [this notebook](../modules/indexes/examples/embeddings.ipynb)
### Tokenizer
There are several places you can use tokenizers available through the `transformers` package.
By default, it is used to count tokens for all LLMs.
You can also use it to count tokens when splitting documents with
```python
from langchain.text_splitter import CharacterTextSplitter
CharacterTextSplitter.from_huggingface_tokenizer(...)
```
For a more detailed walkthrough of this, see [this notebook](../modules/indexes/examples/textsplitter.ipynb)
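For illustration, a sketch of splitting by Hugging Face token counts (the tokenizer and parameters are illustrative):
```python
from transformers import GPT2TokenizerFast
from langchain.text_splitter import CharacterTextSplitter

# Chunk sizes are measured in tokens from the given tokenizer, not characters
tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
splitter = CharacterTextSplitter.from_huggingface_tokenizer(
    tokenizer, chunk_size=100, chunk_overlap=0
)
chunks = splitter.split_text("some long document text ...")
```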
### Datasets
The Hugging Face Hub has lots of great [datasets](https://huggingface.co/datasets) that can be used to evaluate your LLM chains.
For a detailed walkthrough of how to use them to do so, see [this notebook](../use_cases/evaluation/huggingface_datasets.ipynb)

@@ -1,66 +0,0 @@
# Modal
This page covers how to use the Modal ecosystem within LangChain.
It is broken into two parts: installation and setup, and then references to specific Modal wrappers.
## Installation and Setup
- Install with `pip install modal-client`
- Run `modal token new`
## Define your Modal Functions and Webhooks
You must include a prompt. There is a rigid response structure.
```python
class Item(BaseModel):
    prompt: str

@stub.webhook(method="POST")
def my_webhook(item: Item):
    return {"prompt": my_function.call(item.prompt)}
```
An example with GPT2:
```python
from pydantic import BaseModel

import modal

stub = modal.Stub("example-get-started")
volume = modal.SharedVolume().persist("gpt2_model_vol")
CACHE_PATH = "/root/model_cache"

@stub.function(
    gpu="any",
    image=modal.Image.debian_slim().pip_install(
        "tokenizers", "transformers", "torch", "accelerate"
    ),
    shared_volumes={CACHE_PATH: volume},
    retries=3,
)
def run_gpt2(text: str):
    from transformers import GPT2Tokenizer, GPT2LMHeadModel
    tokenizer = GPT2Tokenizer.from_pretrained('gpt2')
    model = GPT2LMHeadModel.from_pretrained('gpt2')
    encoded_input = tokenizer(text, return_tensors='pt').input_ids
    output = model.generate(encoded_input, max_length=50, do_sample=True)
    return tokenizer.decode(output[0], skip_special_tokens=True)

class Item(BaseModel):
    prompt: str

@stub.webhook(method="POST")
def get_text(item: Item):
    return {"prompt": run_gpt2.call(item.prompt)}
```
## Wrappers
### LLM
There exists a Modal LLM wrapper, which you can access with
```python
from langchain.llms import Modal
```

@@ -1,17 +0,0 @@
# NLPCloud
This page covers how to use the NLPCloud ecosystem within LangChain.
It is broken into two parts: installation and setup, and then references to specific NLPCloud wrappers.
## Installation and Setup
- Install the Python SDK with `pip install nlpcloud`
- Get an NLPCloud api key and set it as an environment variable (`NLPCLOUD_API_KEY`)
## Wrappers
### LLM
There exists an NLPCloud LLM wrapper, which you can access with
```python
from langchain.llms import NLPCloud
```

@@ -1,55 +0,0 @@
# OpenAI
This page covers how to use the OpenAI ecosystem within LangChain.
It is broken into two parts: installation and setup, and then references to specific OpenAI wrappers.
## Installation and Setup
- Install the Python SDK with `pip install openai`
- Get an OpenAI api key and set it as an environment variable (`OPENAI_API_KEY`)
- If you want to use OpenAI's tokenizer (only available for Python 3.9+), install it with `pip install tiktoken`
## Wrappers
### LLM
There exists an OpenAI LLM wrapper, which you can access with
```python
from langchain.llms import OpenAI
```
If you are using a model hosted on Azure, you should use a different wrapper for that:
```python
from langchain.llms import AzureOpenAI
```
For a more detailed walkthrough of the Azure wrapper, see [this notebook](../modules/llms/integrations/azure_openai_example.ipynb)
### Embeddings
There exists an OpenAI Embeddings wrapper, which you can access with
```python
from langchain.embeddings import OpenAIEmbeddings
```
For a more detailed walkthrough of this, see [this notebook](../modules/indexes/examples/embeddings.ipynb)
### Tokenizer
There are several places you can use the `tiktoken` tokenizer. By default, it is used to count tokens
for OpenAI LLMs.
You can also use it to count tokens when splitting documents with
```python
from langchain.text_splitter import CharacterTextSplitter
CharacterTextSplitter.from_tiktoken_encoder(...)
```
For a more detailed walkthrough of this, see [this notebook](../modules/indexes/examples/textsplitter.ipynb)
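A minimal sketch (parameters are illustrative):
```python
from langchain.text_splitter import CharacterTextSplitter

# chunk_size is measured in tiktoken tokens rather than characters
splitter = CharacterTextSplitter.from_tiktoken_encoder(chunk_size=100, chunk_overlap=0)
chunks = splitter.split_text("some long document text ...")
```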
### Moderation
You can also access the OpenAI content moderation endpoint with
```python
from langchain.chains import OpenAIModerationChain
```
For a more detailed walkthrough of this, see [this notebook](../modules/chains/examples/moderation.ipynb)
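A minimal usage sketch (assumes `OPENAI_API_KEY` is set):
```python
from langchain.chains import OpenAIModerationChain

moderation = OpenAIModerationChain()
print(moderation.run("some text to check"))
```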

@@ -1,21 +0,0 @@
# OpenSearch
This page covers how to use the OpenSearch ecosystem within LangChain.
It is broken into two parts: installation and setup, and then references to specific OpenSearch wrappers.
## Installation and Setup
- Install the Python package with `pip install opensearch-py`
## Wrappers
### VectorStore
There exists a wrapper around OpenSearch vector databases, allowing you to use it as a vectorstore
for semantic search using approximate vector search powered by the Lucene, NMSLIB, and Faiss engines,
or using Painless scripting and script scoring functions for brute-force vector search.
To import this vectorstore:
```python
from langchain.vectorstores import OpenSearchVectorSearch
```
For a more detailed walkthrough of the OpenSearch wrapper, see [this notebook](../modules/indexes/vectorstore_examples/opensearch.ipynb)
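A minimal sketch (the OpenSearch URL and texts are illustrative; any `Embeddings` implementation works):
```python
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import OpenSearchVectorSearch

# Assumes an OpenSearch node reachable at the given URL
docsearch = OpenSearchVectorSearch.from_texts(
    ["foo", "bar"], OpenAIEmbeddings(), opensearch_url="http://localhost:9200"
)
docs = docsearch.similarity_search("foo")
```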

@@ -1,17 +0,0 @@
# Petals
This page covers how to use the Petals ecosystem within LangChain.
It is broken into two parts: installation and setup, and then references to specific Petals wrappers.
## Installation and Setup
- Install with `pip install petals`
- Get a Hugging Face api key and set it as an environment variable (`HUGGINGFACE_API_KEY`)
## Wrappers
### LLM
There exists a Petals LLM wrapper, which you can access with
```python
from langchain.llms import Petals
```

@@ -1,20 +0,0 @@
# Pinecone
This page covers how to use the Pinecone ecosystem within LangChain.
It is broken into two parts: installation and setup, and then references to specific Pinecone wrappers.
## Installation and Setup
- Install the Python SDK with `pip install pinecone-client`
## Wrappers
### VectorStore
There exists a wrapper around Pinecone indexes, allowing you to use it as a vectorstore,
whether for semantic search or example selection.
To import this vectorstore:
```python
from langchain.vectorstores import Pinecone
```
For a more detailed walkthrough of the Pinecone wrapper, see [this notebook](../modules/indexes/examples/vectorstores.ipynb)
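A minimal sketch (the API key, environment, and index name are illustrative placeholders):
```python
import pinecone
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import Pinecone

pinecone.init(api_key="YOUR_API_KEY", environment="us-west1-gcp")
docsearch = Pinecone.from_texts(
    ["foo", "bar"], OpenAIEmbeddings(), index_name="langchain-demo"
)
docs = docsearch.similarity_search("foo")
```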

@@ -1,31 +0,0 @@
# PromptLayer
This page covers how to use [PromptLayer](https://www.promptlayer.com) within LangChain.
It is broken into two parts: installation and setup, and then references to specific PromptLayer wrappers.
## Installation and Setup
If you want to work with PromptLayer:
- Install the promptlayer python library `pip install promptlayer`
- Create a PromptLayer account
- Create an api token and set it as an environment variable (`PROMPTLAYER_API_KEY`)
## Wrappers
### LLM
There exists an PromptLayer OpenAI LLM wrapper, which you can access with
```python
from langchain.llms import PromptLayerOpenAI
```
To tag your requests, use the argument `pl_tags` when instantiating the LLM
```python
from langchain.llms import PromptLayerOpenAI
llm = PromptLayerOpenAI(pl_tags=["langchain-requests", "chatbot"])
```
This LLM is identical to the [OpenAI LLM](./openai), except that
- all your requests will be logged to your PromptLayer account
- you can add `pl_tags` when instantiating to tag your requests on PromptLayer

@@ -1,31 +0,0 @@
# Runhouse
This page covers how to use the [Runhouse](https://github.com/run-house/runhouse) ecosystem within LangChain.
It is broken into three parts: installation and setup, LLMs, and Embeddings.
## Installation and Setup
- Install the Python SDK with `pip install runhouse`
- If you'd like to use an on-demand cluster, check your cloud credentials with `sky check`
## Self-hosted LLMs
For a basic self-hosted LLM, you can use the `SelfHostedHuggingFaceLLM` class. For more
custom LLMs, you can use the `SelfHostedPipeline` parent class.
```python
from langchain.llms import SelfHostedPipeline, SelfHostedHuggingFaceLLM
```
For a more detailed walkthrough of the Self-hosted LLMs, see [this notebook](../modules/llms/integrations/self_hosted_examples.ipynb)
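For illustration, a sketch of running an LLM on an on-demand cluster (the cluster name, instance type, and model are illustrative):
```python
import runhouse as rh
from langchain.llms import SelfHostedHuggingFaceLLM

# Provision (or reuse) a GPU cluster via Runhouse, then run the model on it
gpu = rh.cluster(name="rh-a10x", instance_type="A100:1", use_spot=False)
llm = SelfHostedHuggingFaceLLM(model_id="gpt2", hardware=gpu)
print(llm("What is the capital of France?"))
```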
## Self-hosted Embeddings
There are several ways to use self-hosted embeddings with LangChain via Runhouse.
For a basic self-hosted embedding from a Hugging Face Transformers model, you can use
the `SelfHostedEmbedding` class.
```python
from langchain.llms import SelfHostedPipeline, SelfHostedHuggingFaceLLM
```
For a more detailed walkthrough of the Self-hosted Embeddings, see [this notebook](../modules/indexes/examples/embeddings.ipynb)

@@ -1,35 +0,0 @@
# SearxNG Search API
This page covers how to use the SearxNG search API within LangChain.
It is broken into two parts: installation and setup, and then references to the specific SearxNG API wrapper.
## Installation and Setup
- You can find a list of public SearxNG instances [here](https://searx.space/).
- It is recommended to use a self-hosted instance to avoid abuse of the public instances. Also note that public instances often have a limit on the number of requests.
- To run a self-hosted instance see [this page](https://searxng.github.io/searxng/admin/installation.html) for more information.
- To use the tool, you need to provide the searx host URL by either:
    1. passing the named parameter `searx_host` when creating the instance, or
    2. exporting the environment variable `SEARXNG_HOST`.
## Wrappers
### Utility
You can use the wrapper to get results from a SearxNG instance.
```python
from langchain.utilities import SearxSearchWrapper
```
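A minimal usage sketch (the host is illustrative; point it at your own instance):
```python
from langchain.utilities import SearxSearchWrapper

search = SearxSearchWrapper(searx_host="http://localhost:8888")
search.run("What is the capital of France?")
```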
### Tool
You can also easily load this wrapper as a Tool (to use with an Agent).
You can do this with:
```python
from langchain.agents import load_tools
tools = load_tools(["searx-search"], searx_host="https://searx.example.com")
```
For more information on this, see [this page](../modules/agents/tools.md)

@@ -1,31 +0,0 @@
# SerpAPI
This page covers how to use the SerpAPI search APIs within LangChain.
It is broken into two parts: installation and setup, and then references to the specific SerpAPI wrapper.
## Installation and Setup
- Install requirements with `pip install google-search-results`
- Get a SerpAPI api key and set it as an environment variable (`SERPAPI_API_KEY`)
## Wrappers
### Utility
There exists a SerpAPI utility which wraps this API. To import this utility:
```python
from langchain.utilities import SerpAPIWrapper
```
For a more detailed walkthrough of this wrapper, see [this notebook](../modules/utils/examples/serpapi.ipynb).
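A minimal usage sketch (assumes `SERPAPI_API_KEY` is set):
```python
from langchain.utilities import SerpAPIWrapper

search = SerpAPIWrapper()
search.run("How many moons does Jupiter have?")
```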
### Tool
You can also easily load this wrapper as a Tool (to use with an Agent).
You can do this with:
```python
from langchain.agents import load_tools
tools = load_tools(["serpapi"])
```
For more information on this, see [this page](../modules/agents/tools.md)

@ -1,17 +0,0 @@
# StochasticAI
This page covers how to use the StochasticAI ecosystem within LangChain.
It is broken into two parts: installation and setup, and then references to specific StochasticAI wrappers.
## Installation and Setup
- Install with `pip install stochasticx`
- Get a StochasticAI API key and set it as an environment variable (`STOCHASTICAI_API_KEY`)
## Wrappers
### LLM
There exists a StochasticAI LLM wrapper, which you can access with
```python
from langchain.llms import StochasticAI
```
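A minimal sketch of using it (the `api_url` placeholder is hypothetical; point it at your own deployed StochasticAI model, and make sure `STOCHASTICAI_API_KEY` is set):
```python
from langchain.llms import StochasticAI

# Hypothetical endpoint: replace with the URL of your deployed model.
llm = StochasticAI(api_url="<your-model-api-url>")
llm("What is 2 + 2?")
```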

@ -1,41 +0,0 @@
# Unstructured
This page covers how to use the [`unstructured`](https://github.com/Unstructured-IO/unstructured)
ecosystem within LangChain. The `unstructured` package from
[Unstructured.IO](https://www.unstructured.io/) extracts clean text from raw source documents like
PDFs and Word documents.
This page is broken into two parts: installation and setup, and then references to specific
`unstructured` wrappers.
## Installation and Setup
- Install the Python SDK with `pip install "unstructured[local-inference]"`
- Install the following system dependencies if they are not already available on your system.
Depending on what document types you're parsing, you may not need all of these.
- `libmagic-dev`
- `poppler-utils`
- `tesseract-ocr`
- `libreoffice`
- If you are parsing PDFs, run the following to install the `detectron2` model, which
`unstructured` uses for layout detection:
- `pip install "detectron2@git+https://github.com/facebookresearch/detectron2.git@v0.6#egg=detectron2"`
## Wrappers
### Data Loaders
The primary `unstructured` wrappers within `langchain` are data loaders. The following
shows how to use the most basic unstructured data loader. There are other file-specific
data loaders available in the `langchain.document_loaders` module.
```python
from langchain.document_loaders import UnstructuredFileLoader
loader = UnstructuredFileLoader("state_of_the_union.txt")
loader.load()
```
If you instantiate the loader with `UnstructuredFileLoader(mode="elements")`, the loader
will track additional metadata like the page number and text type (e.g. title, narrative text)
when that information is available.
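For example, a sketch using the same sample file as above:
```python
from langchain.document_loaders import UnstructuredFileLoader

loader = UnstructuredFileLoader("state_of_the_union.txt", mode="elements")
docs = loader.load()
# Each element carries metadata such as its category (e.g. Title, NarrativeText)
# and, for PDFs, the page number, when that information is available.
docs[0].metadata
```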

@ -1,33 +0,0 @@
# Weaviate
This page covers how to use the Weaviate ecosystem within LangChain.
What is Weaviate?
**Weaviate in a nutshell:**
- Weaviate is an open-source database of the vector search engine type.
- Weaviate allows you to store JSON documents in a class property-like fashion while attaching machine learning vectors to these documents to represent them in vector space.
- Weaviate can be used stand-alone (aka bring your vectors) or with a variety of modules that can do the vectorization for you and extend the core capabilities.
- Weaviate has a GraphQL-API to access your data easily.
- We aim to bring your vector search setup to production to query in mere milliseconds (check our [open source benchmarks](https://weaviate.io/developers/weaviate/current/benchmarks/) to see if Weaviate fits your use case).
- Get to know Weaviate in the [basics getting started guide](https://weaviate.io/developers/weaviate/current/core-knowledge/basics.html) in under five minutes.
**Weaviate in detail:**
Weaviate is a low-latency vector search engine with out-of-the-box support for different media types (text, images, etc.). It offers Semantic Search, Question-Answer Extraction, Classification, Customizable Models (PyTorch/TensorFlow/Keras), etc. Built from scratch in Go, Weaviate stores both objects and vectors, allowing for combining vector search with structured filtering and the fault tolerance of a cloud-native database. It is all accessible through GraphQL, REST, and various client-side programming languages.
## Installation and Setup
- Install the Python SDK with `pip install weaviate-client`
## Wrappers
### VectorStore
There exists a wrapper around Weaviate indexes, allowing you to use it as a vectorstore,
whether for semantic search or example selection.
To import this vectorstore:
```python
from langchain.vectorstores import Weaviate
```
For a more detailed walkthrough of the Weaviate wrapper, see [this notebook](../modules/indexes/examples/vectorstores.ipynb)
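As an illustrative sketch (assuming a Weaviate instance running locally, plus a hypothetical `Paragraph` class that stores its text in a `content` property):
```python
import weaviate
from langchain.vectorstores import Weaviate

client = weaviate.Client("http://localhost:8080")
vectorstore = Weaviate(client, "Paragraph", "content")
docs = vectorstore.similarity_search("What did the president say about taxes?")
```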

@ -1,34 +0,0 @@
# Wolfram Alpha Wrapper
This page covers how to use the Wolfram Alpha API within LangChain.
It is broken into two parts: installation and setup, and then references to specific Wolfram Alpha wrappers.
## Installation and Setup
- Install requirements with `pip install wolframalpha`
- Go to Wolfram Alpha and sign up for a developer account [here](https://developer.wolframalpha.com/)
- Create an app and get your APP ID
- Set your APP ID as an environment variable `WOLFRAM_ALPHA_APPID`
## Wrappers
### Utility
There exists a WolframAlphaAPIWrapper utility which wraps this API. To import this utility:
```python
from langchain.utilities.wolfram_alpha import WolframAlphaAPIWrapper
```
For a more detailed walkthrough of this wrapper, see [this notebook](../modules/utils/examples/wolfram_alpha.ipynb).
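For example, a minimal sketch (assuming `WOLFRAM_ALPHA_APPID` is set):
```python
from langchain.utilities.wolfram_alpha import WolframAlphaAPIWrapper

wolfram = WolframAlphaAPIWrapper()
wolfram.run("What is 2x+5 = -3x + 7?")
```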
### Tool
You can also easily load this wrapper as a Tool (to use with an Agent).
You can do this with:
```python
from langchain.agents import load_tools
tools = load_tools(["wolfram-alpha"])
```
For more information on this, see [this page](../modules/agents/tools.md)

@ -1,16 +0,0 @@
# Writer
This page covers how to use the Writer ecosystem within LangChain.
It is broken into two parts: installation and setup, and then references to specific Writer wrappers.
## Installation and Setup
- Get a Writer API key and set it as an environment variable (`WRITER_API_KEY`)
## Wrappers
### LLM
There exists a Writer LLM wrapper, which you can access with
```python
from langchain.llms import Writer
```
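A minimal sketch of using it (assuming `WRITER_API_KEY` is set; the prompt is purely illustrative):
```python
from langchain.llms import Writer

llm = Writer()
llm("Write a tagline for an ice cream shop.")
```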

@ -0,0 +1,47 @@
Agents
======
The examples here are all end-to-end agents for specific applications.
In all examples there is an Agent with a particular set of tools.
- Tools: A tool can be anything that takes in a string and returns a string. This means that you can use both the primitives AND the chains found in `this <chains.rst>`_ documentation. LangChain also provides a list of easily loadable tools. For detailed information on those, please see `this documentation <../explanation/tools.md>`_
- Agents: An agent uses an LLMChain to determine which tools to use. For a list of all available agent types, see `here <../explanation/agents.md>`_.
**MRKL**
- **Tools used**: Search, SQLDatabaseChain, LLMMathChain
- **Agent used**: `zero-shot-react-description`
- `Paper <https://arxiv.org/pdf/2205.00445.pdf>`_
- **Note**: This is the most general purpose example, so if you are looking to use an agent with arbitrary tools, please start here.
- `Example Notebook <agents/mrkl.ipynb>`_
**Self-Ask-With-Search**
- **Tools used**: Search
- **Agent used**: `self-ask-with-search`
- `Paper <https://ofir.io/self-ask.pdf>`_
- `Example Notebook <agents/self_ask_with_search.ipynb>`_
**ReAct**
- **Tools used**: Wikipedia Docstore
- **Agent used**: `react-docstore`
- `Paper <https://arxiv.org/pdf/2210.03629.pdf>`_
- `Example Notebook <agents/react.ipynb>`_
Additionally, we provide examples of how to customize agents:
**Custom Agent**
- Purpose: How to create custom agents.
- `Example Notebook <agents/custom_agent.ipynb>`_
.. toctree::
:maxdepth: 1
:glob:
:hidden:
agents/*

@ -42,7 +42,7 @@
},
{
"cell_type": "code",
"execution_count": 23,
"execution_count": 1,
"id": "9af9734e",
"metadata": {},
"outputs": [],
@ -53,7 +53,7 @@
},
{
"cell_type": "code",
"execution_count": 24,
"execution_count": 2,
"id": "becda2a1",
"metadata": {},
"outputs": [],
@ -70,7 +70,7 @@
},
{
"cell_type": "code",
"execution_count": 25,
"execution_count": 3,
"id": "339b1bb8",
"metadata": {},
"outputs": [],
@ -99,7 +99,7 @@
},
{
"cell_type": "code",
"execution_count": 26,
"execution_count": 4,
"id": "e21d2098",
"metadata": {},
"outputs": [
@ -133,19 +133,9 @@
"print(prompt.template)"
]
},
{
"cell_type": "markdown",
"id": "5e028e6d",
"metadata": {},
"source": [
"Note that we are able to feed agents a self-defined prompt template, i.e. not restricted to the prompt generated by the `create_prompt` function, assuming it meets the agent's requirements. \n",
"\n",
"For example, for `ZeroShotAgent`, we will need to ensure that it meets the following requirements. There should a string starting with \"Action:\" and a following string starting with \"Action Input:\", and both should be separated by a newline.\n"
]
},
{
"cell_type": "code",
"execution_count": 27,
"execution_count": 5,
"id": "9b1cc2a2",
"metadata": {},
"outputs": [],
@ -155,18 +145,17 @@
},
{
"cell_type": "code",
"execution_count": 28,
"execution_count": 6,
"id": "e4f5092f",
"metadata": {},
"outputs": [],
"source": [
"tool_names = [tool.name for tool in tools]\n",
"agent = ZeroShotAgent(llm_chain=llm_chain, allowed_tools=tool_names)"
"agent = ZeroShotAgent(llm_chain=llm_chain, tools=tools)"
]
},
{
"cell_type": "code",
"execution_count": 29,
"execution_count": 7,
"id": "490604e9",
"metadata": {},
"outputs": [],
@ -176,7 +165,7 @@
},
{
"cell_type": "code",
"execution_count": 31,
"execution_count": 8,
"id": "653b1617",
"metadata": {},
"outputs": [
@ -187,29 +176,28 @@
"\n",
"\n",
"\u001b[1m> Entering new AgentExecutor chain...\u001b[0m\n",
"\u001b[32;1m\u001b[1;3mThought: I need to find out the population of Canada\n",
"\u001b[32;1m\u001b[1;3mThought: I need to find out how many people live in Canada\n",
"Action: Search\n",
"Action Input: Population of Canada 2023\u001b[0m\n",
"Observation: \u001b[36;1m\u001b[1;3mThe current population of Canada is 38,610,447 as of Saturday, February 18, 2023, based on Worldometer elaboration of the latest United Nations data. Canada 2020 population is estimated at 37,742,154 people at mid year according to UN data.\u001b[0m\n",
"Action Input: Population of Canada\u001b[0m\n",
"Observation: \u001b[36;1m\u001b[1;3mThe current population of Canada is 38,553,548 as of Saturday, December 17, 2022, based on Worldometer elaboration of the latest United Nations data. Canada ...\u001b[0m\n",
"Thought:\u001b[32;1m\u001b[1;3m I now know the final answer\n",
"Final Answer: Arrr, Canada be havin' 38,610,447 scallywags livin' there as of 2023!\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
"Final Answer: Arrr, there be 38,553,548 scallywags livin' in Canada!\u001b[0m\n",
"\u001b[1m> Finished AgentExecutor chain.\u001b[0m\n"
]
},
{
"data": {
"text/plain": [
"\"Arrr, Canada be havin' 38,610,447 scallywags livin' there as of 2023!\""
"\"Arrr, there be 38,553,548 scallywags livin' in Canada!\""
]
},
"execution_count": 31,
"execution_count": 8,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"agent_executor.run(\"How many people live in canada as of 2023?\")"
"agent_executor.run(\"How many people live in canada?\")"
]
},
{
@ -223,7 +211,7 @@
},
{
"cell_type": "code",
"execution_count": 32,
"execution_count": 9,
"id": "43dbfa2f",
"metadata": {},
"outputs": [],
@ -244,7 +232,7 @@
},
{
"cell_type": "code",
"execution_count": 33,
"execution_count": 10,
"id": "0f087313",
"metadata": {},
"outputs": [],
@ -254,7 +242,7 @@
},
{
"cell_type": "code",
"execution_count": 34,
"execution_count": 11,
"id": "92c75a10",
"metadata": {},
"outputs": [],
@ -264,7 +252,7 @@
},
{
"cell_type": "code",
"execution_count": 35,
"execution_count": 12,
"id": "ac5b83bf",
"metadata": {},
"outputs": [],
@ -274,7 +262,7 @@
},
{
"cell_type": "code",
"execution_count": 36,
"execution_count": 14,
"id": "c960e4ff",
"metadata": {},
"outputs": [
@ -285,29 +273,28 @@
"\n",
"\n",
"\u001b[1m> Entering new AgentExecutor chain...\u001b[0m\n",
"\u001b[32;1m\u001b[1;3mThought: I need to find out the population of Canada in 2023.\n",
"\u001b[32;1m\u001b[1;3mThought: I should look up the population of Canada.\n",
"Action: Search\n",
"Action Input: Population of Canada in 2023\u001b[0m\n",
"Observation: \u001b[36;1m\u001b[1;3mThe current population of Canada is 38,610,447 as of Saturday, February 18, 2023, based on Worldometer elaboration of the latest United Nations data. Canada 2020 population is estimated at 37,742,154 people at mid year according to UN data.\u001b[0m\n",
"Action Input: Population of Canada\u001b[0m\n",
"Observation: \u001b[36;1m\u001b[1;3mThe current population of Canada is 38,553,548 as of Saturday, December 17, 2022, based on Worldometer elaboration of the latest United Nations data. Canada ...\u001b[0m\n",
"Thought:\u001b[32;1m\u001b[1;3m I now know the final answer.\n",
"Final Answer: La popolazione del Canada nel 2023 è stimata in 38.610.447 persone.\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
"Final Answer: La popolazione attuale del Canada è 38.553.548 al sabato 17 dicembre 2022, secondo l'elaborazione di Worldometer dei dati più recenti delle Nazioni Unite.\u001b[0m\n",
"\u001b[1m> Finished AgentExecutor chain.\u001b[0m\n"
]
},
{
"data": {
"text/plain": [
"'La popolazione del Canada nel 2023 è stimata in 38.610.447 persone.'"
"\"La popolazione attuale del Canada è 38.553.548 al sabato 17 dicembre 2022, secondo l'elaborazione di Worldometer dei dati più recenti delle Nazioni Unite.\""
]
},
"execution_count": 36,
"execution_count": 14,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"agent_executor.run(input=\"How many people live in canada as of 2023?\", language=\"italian\")"
"agent_executor.run(input=\"How many people live in canada?\", language=\"italian\")"
]
},
{
@ -345,12 +332,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.1"
},
"vscode": {
"interpreter": {
"hash": "18784188d7ecd866c0586ac068b02361a6896dc3a29b64f5cc957f09c590acef"
}
"version": "3.10.8"
}
},
"nbformat": 4,

@ -40,7 +40,7 @@
"llm = OpenAI(temperature=0)\n",
"search = SerpAPIWrapper()\n",
"llm_math_chain = LLMMathChain(llm=llm, verbose=True)\n",
"db = SQLDatabase.from_uri(\"sqlite:///../../../../notebooks/Chinook.db\")\n",
"db = SQLDatabase.from_uri(\"sqlite:///../../../notebooks/Chinook.db\")\n",
"db_chain = SQLDatabaseChain(llm=llm, database=db, verbose=True)\n",
"tools = [\n",
" Tool(\n",
@ -83,42 +83,42 @@
"text": [
"\n",
"\n",
"\u001b[1m> Entering new AgentExecutor chain...\u001b[0m\n",
"\u001b[32;1m\u001b[1;3m I need to find out who Leo DiCaprio's girlfriend is and then calculate her age raised to the 0.43 power.\n",
"\u001b[1m> Entering new AgentWithTools chain...\u001b[0m\n",
"\u001b[32;1m\u001b[1;3m I need to find out who Olivia Wilde's boyfriend is and then calculate his age raised to the 0.23 power.\n",
"Action: Search\n",
"Action Input: \"Who is Leo DiCaprio's girlfriend?\"\u001b[0m\n",
"Observation: \u001b[36;1m\u001b[1;3mCamila Morrone\u001b[0m\n",
"Thought:\u001b[32;1m\u001b[1;3m I need to find out Camila Morrone's age\n",
"Action Input: \"Who is Olivia Wilde's boyfriend?\"\u001b[0m\n",
"Observation: \u001b[36;1m\u001b[1;3mOlivia Wilde started dating Harry Styles after ending her years-long engagement to Jason Sudeikis — see their relationship timeline.\u001b[0m\n",
"Thought:\u001b[32;1m\u001b[1;3m I need to find out Harry Styles' age.\n",
"Action: Search\n",
"Action Input: \"How old is Camila Morrone?\"\u001b[0m\n",
"Observation: \u001b[36;1m\u001b[1;3m25 years\u001b[0m\n",
"Thought:\u001b[32;1m\u001b[1;3m I need to calculate 25 raised to the 0.43 power\n",
"Action Input: \"How old is Harry Styles?\"\u001b[0m\n",
"Observation: \u001b[36;1m\u001b[1;3m28 years\u001b[0m\n",
"Thought:\u001b[32;1m\u001b[1;3m I need to calculate 28 raised to the 0.23 power.\n",
"Action: Calculator\n",
"Action Input: 25^0.43\u001b[0m\n",
"Action Input: 28^0.23\u001b[0m\n",
"\n",
"\u001b[1m> Entering new LLMMathChain chain...\u001b[0m\n",
"25^0.43\u001b[32;1m\u001b[1;3m\n",
"28^0.23\u001b[32;1m\u001b[1;3m\n",
"\n",
"```python\n",
"import math\n",
"print(math.pow(25, 0.43))\n",
"print(math.pow(28, 0.23))\n",
"```\n",
"\u001b[0m\n",
"Answer: \u001b[33;1m\u001b[1;3m3.991298452658078\n",
"Answer: \u001b[33;1m\u001b[1;3m2.1520202182226886\n",
"\u001b[0m\n",
"\u001b[1m> Finished chain.\u001b[0m\n",
"\u001b[1m> Finished LLMMathChain chain.\u001b[0m\n",
"\n",
"Observation: \u001b[33;1m\u001b[1;3mAnswer: 3.991298452658078\n",
"Observation: \u001b[33;1m\u001b[1;3mAnswer: 2.1520202182226886\n",
"\u001b[0m\n",
"Thought:\u001b[32;1m\u001b[1;3m I now know the final answer\n",
"Final Answer: Camila Morrone is 25 years old and her age raised to the 0.43 power is 3.991298452658078.\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
"Thought:\u001b[32;1m\u001b[1;3m I now know the final answer.\n",
"Final Answer: Harry Styles, Olivia Wilde's boyfriend, is 28 years old and his age raised to the 0.23 power is 2.1520202182226886.\u001b[0m\n",
"\u001b[1m> Finished AgentWithTools chain.\u001b[0m\n"
]
},
{
"data": {
"text/plain": [
"'Camila Morrone is 25 years old and her age raised to the 0.43 power is 3.991298452658078.'"
"\"Harry Styles, Olivia Wilde's boyfriend, is 28 years old and his age raised to the 0.23 power is 2.1520202182226886.\""
]
},
"execution_count": 4,
@ -127,7 +127,7 @@
}
],
"source": [
"mrkl.run(\"Who is Leo DiCaprio's girlfriend? What is her current age raised to the 0.43 power?\")"
"mrkl.run(\"Who is Olivia Wilde's boyfriend? What is his current age raised to the 0.23 power?\")"
]
},
{
@ -142,7 +142,7 @@
"text": [
"\n",
"\n",
"\u001b[1m> Entering new AgentExecutor chain...\u001b[0m\n",
"\u001b[1m> Entering new AgentWithTools chain...\u001b[0m\n",
"\u001b[32;1m\u001b[1;3m I need to find out the artist's full name and then search the FooBar database for their albums.\n",
"Action: Search\n",
"Action Input: \"The Storm Before the Calm\" artist\u001b[0m\n",
@ -153,22 +153,21 @@
"\n",
"\u001b[1m> Entering new SQLDatabaseChain chain...\u001b[0m\n",
"What albums by Alanis Morissette are in the FooBar database? \n",
"SQLQuery:\u001b[32;1m\u001b[1;3m SELECT Title FROM Album INNER JOIN Artist ON Album.ArtistId = Artist.ArtistId WHERE Artist.Name = 'Alanis Morissette' LIMIT 5;\u001b[0m\n",
"SQLQuery:\u001b[32;1m\u001b[1;3m SELECT Title FROM Album WHERE ArtistId IN (SELECT ArtistId FROM Artist WHERE Name = 'Alanis Morissette');\u001b[0m\n",
"SQLResult: \u001b[33;1m\u001b[1;3m[('Jagged Little Pill',)]\u001b[0m\n",
"Answer:\u001b[32;1m\u001b[1;3m The albums by Alanis Morissette in the FooBar database are Jagged Little Pill.\u001b[0m\n",
"\u001b[1m> Finished chain.\u001b[0m\n",
"Answer:\u001b[32;1m\u001b[1;3m The album Jagged Little Pill by Alanis Morissette is in the FooBar database.\u001b[0m\n",
"\u001b[1m> Finished SQLDatabaseChain chain.\u001b[0m\n",
"\n",
"Observation: \u001b[38;5;200m\u001b[1;3m The albums by Alanis Morissette in the FooBar database are Jagged Little Pill.\u001b[0m\n",
"Observation: \u001b[38;5;200m\u001b[1;3m The album Jagged Little Pill by Alanis Morissette is in the FooBar database.\u001b[0m\n",
"Thought:\u001b[32;1m\u001b[1;3m I now know the final answer\n",
"Final Answer: The artist who released the album The Storm Before the Calm is Alanis Morissette and the albums of theirs in the FooBar database are Jagged Little Pill.\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
"Final Answer: Alanis Morissette and Jagged Little Pill are in the FooBar database.\u001b[0m\n",
"\u001b[1m> Finished AgentWithTools chain.\u001b[0m\n"
]
},
{
"data": {
"text/plain": [
"'The artist who released the album The Storm Before the Calm is Alanis Morissette and the albums of theirs in the FooBar database are Jagged Little Pill.'"
"'Alanis Morissette and Jagged Little Pill are in the FooBar database.'"
]
},
"execution_count": 5,
@ -205,7 +204,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.1"
"version": "3.10.8"
}
},
"nbformat": 4,

@ -2,7 +2,7 @@
import time
from langchain.chains.natbot.base import NatBotChain
from langchain.chains.natbot.crawler import Crawler
from langchain.chains.natbot.crawler import Crawler # type: ignore
def run_cmd(cmd: str, _crawler: Crawler) -> None:
@ -33,6 +33,7 @@ def run_cmd(cmd: str, _crawler: Crawler) -> None:
if __name__ == "__main__":
objective = "Make a reservation for 2 at 7pm at bistro vida in menlo park"
print("\nWelcome to natbot! What is your objective?")
i = input()

@ -12,7 +12,7 @@
},
{
"cell_type": "code",
"execution_count": 4,
"execution_count": 1,
"id": "4e272b47",
"metadata": {},
"outputs": [],
@ -38,7 +38,7 @@
},
{
"cell_type": "code",
"execution_count": 5,
"execution_count": 2,
"id": "8078c8f1",
"metadata": {},
"outputs": [
@ -48,10 +48,9 @@
"text": [
"\n",
"\n",
"\u001b[1m> Entering new AgentExecutor chain...\u001b[0m\n",
"\u001b[1m> Entering new AgentWithTools chain...\u001b[0m\n",
"\u001b[32;1m\u001b[1;3m\n",
"Thought 1: I need to search David Chanoff and find the U.S. Navy admiral he collaborated\n",
"with.\n",
"Thought 1: I need to search David Chanoff and find the U.S. Navy admiral he collaborated with.\n",
"Action 1: Search[David Chanoff]\u001b[0m\n",
"Observation 1: \u001b[36;1m\u001b[1;3mDavid Chanoff is a noted author of non-fiction work. His work has typically involved collaborations with the principal protagonist of the work concerned. His collaborators have included; Augustus A. White, Joycelyn Elders, Đoàn Văn Toại, William J. Crowe, Ariel Sharon, Kenneth Good and Felix Zandman. He has also written about a wide range of subjects including literary history, education and foreign for The Washington Post, The New Republic and The New York Times Magazine. He has published more than twelve books.\u001b[0m\n",
"Thought 2:\u001b[32;1m\u001b[1;3m The U.S. Navy admiral David Chanoff collaborated with is William J. Crowe.\n",
@ -59,7 +58,7 @@
"Observation 2: \u001b[36;1m\u001b[1;3mWilliam James Crowe Jr. (January 2, 1925 October 18, 2007) was a United States Navy admiral and diplomat who served as the 11th chairman of the Joint Chiefs of Staff under Presidents Ronald Reagan and George H. W. Bush, and as the ambassador to the United Kingdom and Chair of the Intelligence Oversight Board under President Bill Clinton.\u001b[0m\n",
"Thought 3:\u001b[32;1m\u001b[1;3m The President William J. Crowe served as the ambassador to the United Kingdom under is Bill Clinton.\n",
"Action 3: Finish[Bill Clinton]\u001b[0m\n",
"\u001b[1m> Finished AgentExecutor chain.\u001b[0m\n"
"\u001b[1m> Finished AgentWithTools chain.\u001b[0m\n"
]
},
{
@ -68,7 +67,7 @@
"'Bill Clinton'"
]
},
"execution_count": 5,
"execution_count": 2,
"metadata": {},
"output_type": "execute_result"
}
@ -77,11 +76,19 @@
"question = \"Author David Chanoff has collaborated with a U.S. Navy admiral who served as the ambassador to the United Kingdom under which President?\"\n",
"react.run(question)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "75f914ba",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3.9.0 64-bit ('llm-env')",
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
@ -95,12 +102,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.0"
},
"vscode": {
"interpreter": {
"hash": "b1677b440931f40d89ef8be7bf03acb108ce003de0ac9b18e8d43753ea2e7103"
}
"version": "3.10.8"
}
},
"nbformat": 4,

@ -12,7 +12,7 @@
},
{
"cell_type": "code",
"execution_count": 2,
"execution_count": 1,
"id": "7e3b513e",
"metadata": {},
"outputs": [
@ -22,14 +22,14 @@
"text": [
"\n",
"\n",
"\u001b[1m> Entering new AgentExecutor chain...\u001b[0m\n",
"\u001b[1m> Entering new AgentWithTools chain...\u001b[0m\n",
"\u001b[32;1m\u001b[1;3m Yes.\n",
"Follow up: Who is the reigning men's U.S. Open champion?\u001b[0m\n",
"Intermediate answer: \u001b[36;1m\u001b[1;3mCarlos Alcaraz won the 2022 Men's single title while Poland's Iga Swiatek won the Women's single title defeating Tunisian's Ons Jabeur.\u001b[0m\n",
"Intermediate answer: \u001b[36;1m\u001b[1;3mCarlos Alcaraz\u001b[0m\n",
"\u001b[32;1m\u001b[1;3mFollow up: Where is Carlos Alcaraz from?\u001b[0m\n",
"Intermediate answer: \u001b[36;1m\u001b[1;3mEl Palmar, Spain\u001b[0m\n",
"\u001b[32;1m\u001b[1;3mSo the final answer is: El Palmar, Spain\u001b[0m\n",
"\u001b[1m> Finished AgentExecutor chain.\u001b[0m\n"
"\u001b[1m> Finished AgentWithTools chain.\u001b[0m\n"
]
},
{
@ -38,7 +38,7 @@
"'El Palmar, Spain'"
]
},
"execution_count": 2,
"execution_count": 1,
"metadata": {},
"output_type": "execute_result"
}
@ -59,11 +59,19 @@
"self_ask_with_search = initialize_agent(tools, llm, agent=\"self-ask-with-search\", verbose=True)\n",
"self_ask_with_search.run(\"What is the hometown of the reigning men's U.S. Open champion?\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "683d69e7",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3.9.0 64-bit ('llm-env')",
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
@ -77,12 +85,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.0"
},
"vscode": {
"interpreter": {
"hash": "b1677b440931f40d89ef8be7bf03acb108ce003de0ac9b18e8d43753ea2e7103"
}
"version": "3.10.8"
}
},
"nbformat": 4,

@ -0,0 +1,88 @@
Chains
======
The examples here are all end-to-end chains for specific applications.
A chain is made up of links, which can be either primitives or other chains.
The following primitives exist as options to use for links:
#. `LLM: <../modules/llms.rst>`_ A language model takes text as input and outputs text.
#. `PromptTemplate: <../modules/prompt.rst>`_ A prompt template takes arbitrary string inputs and returns a final formatted string.
#. `TextSplitter: <../modules/text_splitter.rst>`_ A text splitter takes a longer document and splits it into smaller chunks.
#. `Python REPL: <../modules/python.rst>`_ A Python REPL takes a string representing a Python command to run, runs that command, and then returns anything that was printed during that run.
#. `SQL Database: <../modules/sql_database.rst>`_ A SQL database takes a string representing a SQL command as input and executes that command against the database. If any rows are returned, then those are cast to a string and returned.
#. `Search: <../modules/serpapi.rst>`_ A search object takes a string as input and executes that query against a search engine, returning any results.
#. `Docstore: <../modules/docstore.rst>`_ A docstore object can be used to look up a document in a database by exact match.
#. `Vectorstore: <../modules/vectorstore.rst>`_ A vectorstore object uses embeddings stored in a vector database to take in an input string and return documents similar to that string.
With these primitives in mind, the following chains exist:
**LLMChain**
- **Links Used**: PromptTemplate, LLM
- **Notes**: This chain is the simplest chain, and is widely used by almost every other chain. This chain takes arbitrary user input, creates a prompt with it from the PromptTemplate, passes that to the LLM, and then returns the output of the LLM as the final output.
- `Example Notebook <chains/llm_chain.ipynb>`_
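As a quick sketch of the pattern (see the notebook above for the full walkthrough):

.. code-block:: python

    from langchain import LLMChain, OpenAI, PromptTemplate

    prompt = PromptTemplate(
        input_variables=["product"],
        template="What is a good name for a company that makes {product}?",
    )
    chain = LLMChain(llm=OpenAI(temperature=0.9), prompt=prompt)
    chain.run("colorful socks")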
**LLMMath**
- **Links Used**: Python REPL, LLMChain
- **Notes**: This chain takes user input (a math question), uses an LLMChain to convert it to a Python code snippet to run in the Python REPL, and then returns that as the result.
- `Example Notebook <chains/llm_math.ipynb>`_
**PAL**
- **Links Used**: Python REPL, LLMChain
- **Notes**: This chain takes user input (a reasoning question), uses an LLMChain to convert it to a Python code snippet to run in the Python REPL, and then returns that as the result.
- `Paper <https://arxiv.org/abs/2211.10435>`_
- `Example Notebook <chains/pal.ipynb>`_
**Recursive Summarization**
- **Links Used**: TextSplitter, LLMChain
- **Notes**: This chain splits a document into chunks, runs a first LLMChain over each chunk to summarize it, and then runs a second LLMChain over those results to get a summary of the summaries.
- `Example Notebook <chains/map_reduce.ipynb>`_
**SQLDatabase Chain**
- **Links Used**: SQLDatabase, LLMChain
- **Notes**: This chain takes user input (a question), uses a first LLMChain to construct a SQL query to run against the SQL database, and then uses another LLMChain to take the results of that query and use them to answer the original question.
- `Example Notebook <chains/sqlite.ipynb>`_
**Vector Database Question-Answering**
- **Links Used**: Vectorstore, LLMChain
- **Notes**: This chain takes user input (a question), uses the Vectorstore and semantic search to find relevant documents, and then passes the documents plus the original question to another LLM to generate a final answer.
- `Example Notebook <chains/vector_db_qa.ipynb>`_
**Vector Database Question-Answering With Sources**
- **Links Used**: Vectorstore, LLMChain
- **Notes**: This chain takes user input (a question), uses the Vectorstore and semantic search to find relevant documents, and then passes the documents plus the original question to another LLM to generate a final answer with sources.
- `Example Notebook <chains/vector_db_qa_with_sources.ipynb>`_
**Question-Answering With Sources**
- **Links Used**: LLMChain
- **Notes**: These types of chains take a question and multiple documents as input, and return an answer plus sources for where that answer came from. There are multiple underlying types of chains to do this, for more information see TODO.
- `Example Notebook <chains/qa_with_sources.ipynb>`_
**Question-Answering**
- **Links Used**: LLMChain
- **Notes**: These types of chains take a question and multiple documents as input, and return an answer. There are multiple underlying types of chains to do this, for more information see TODO.
- `Example Notebook <chains/question_answering.ipynb>`_
**Summarization**
- **Links Used**: LLMChain
- **Notes**: These types of chains take multiple documents as input, and return a summary of all documents. There are multiple underlying types of chains to do this, for more information see TODO.
- `Example Notebook <chains/summarize.ipynb>`_
.. toctree::
:maxdepth: 1
:glob:
:caption: Chains
:hidden:
chains/*

@ -14,7 +14,7 @@
},
{
"cell_type": "code",
"execution_count": 1,
"execution_count": 38,
"id": "a99acd89",
"metadata": {},
"outputs": [
@ -35,7 +35,7 @@
"Overall, Assistant is a powerful tool that can help with a wide range of tasks and provide valuable insights and information on a wide range of topics. Whether you need help with a specific question or just want to have a conversation about a particular topic, Assistant is here to assist.\n",
"\n",
"\n",
"Human: I want you to act as a Linux terminal. I will type commands and you will reply with what the terminal should show. I want you to only reply with the terminal output inside one unique code block, and nothing else. Do not write explanations. Do not type commands unless I instruct you to do so. When I need to tell you something in English I will do so by putting text inside curly brackets {like this}. My first command is pwd.\n",
"Human: I want you to act as a Linux terminal. I will type commands and you will reply with what the terminal should show. I want you to only reply wiht the terminal output inside one unique code block, and nothing else. Do not write explanations. Do not type commands unless I instruct you to do so. When I need to tell you something in English I will do so by putting text inside curly brackets {like this}. My first command is pwd.\n",
"Assistant:\u001b[0m\n",
"\n",
"\u001b[1m> Finished LLMChain chain.\u001b[0m\n",
@ -77,13 +77,13 @@
" memory=ConversationalBufferWindowMemory(k=2),\n",
")\n",
"\n",
"output = chatgpt_chain.predict(human_input=\"I want you to act as a Linux terminal. I will type commands and you will reply with what the terminal should show. I want you to only reply with the terminal output inside one unique code block, and nothing else. Do not write explanations. Do not type commands unless I instruct you to do so. When I need to tell you something in English I will do so by putting text inside curly brackets {like this}. My first command is pwd.\")\n",
"output = chatgpt_chain.predict(human_input=\"I want you to act as a Linux terminal. I will type commands and you will reply with what the terminal should show. I want you to only reply wiht the terminal output inside one unique code block, and nothing else. Do not write explanations. Do not type commands unless I instruct you to do so. When I need to tell you something in English I will do so by putting text inside curly brackets {like this}. My first command is pwd.\")\n",
"print(output)"
]
},
{
"cell_type": "code",
"execution_count": 2,
"execution_count": 39,
"id": "4ef711d6",
"metadata": {},
"outputs": [
@ -103,7 +103,7 @@
"\n",
"Overall, Assistant is a powerful tool that can help with a wide range of tasks and provide valuable insights and information on a wide range of topics. Whether you need help with a specific question or just want to have a conversation about a particular topic, Assistant is here to assist.\n",
"\n",
"Human: I want you to act as a Linux terminal. I will type commands and you will reply with what the terminal should show. I want you to only reply with the terminal output inside one unique code block, and nothing else. Do not write explanations. Do not type commands unless I instruct you to do so. When I need to tell you something in English I will do so by putting text inside curly brackets {like this}. My first command is pwd.\n",
"Human: I want you to act as a Linux terminal. I will type commands and you will reply with what the terminal should show. I want you to only reply wiht the terminal output inside one unique code block, and nothing else. Do not write explanations. Do not type commands unless I instruct you to do so. When I need to tell you something in English I will do so by putting text inside curly brackets {like this}. My first command is pwd.\n",
"AI: \n",
"```\n",
"$ pwd\n",
@ -128,7 +128,7 @@
},
{
"cell_type": "code",
"execution_count": 3,
"execution_count": 40,
"id": "a5d6dac2",
"metadata": {},
"outputs": [
@ -148,7 +148,7 @@
"\n",
"Overall, Assistant is a powerful tool that can help with a wide range of tasks and provide valuable insights and information on a wide range of topics. Whether you need help with a specific question or just want to have a conversation about a particular topic, Assistant is here to assist.\n",
"\n",
"Human: I want you to act as a Linux terminal. I will type commands and you will reply with what the terminal should show. I want you to only reply with the terminal output inside one unique code block, and nothing else. Do not write explanations. Do not type commands unless I instruct you to do so. When I need to tell you something in English I will do so by putting text inside curly brackets {like this}. My first command is pwd.\n",
"Human: I want you to act as a Linux terminal. I will type commands and you will reply with what the terminal should show. I want you to only reply wiht the terminal output inside one unique code block, and nothing else. Do not write explanations. Do not type commands unless I instruct you to do so. When I need to tell you something in English I will do so by putting text inside curly brackets {like this}. My first command is pwd.\n",
"AI: \n",
"```\n",
"$ pwd\n",
@ -180,7 +180,7 @@
},
{
"cell_type": "code",
"execution_count": 4,
"execution_count": 41,
"id": "b9283077",
"metadata": {},
"outputs": [
@ -235,7 +235,7 @@
},
{
"cell_type": "code",
"execution_count": 5,
"execution_count": 42,
"id": "570e785e",
"metadata": {},
"outputs": [
@ -292,7 +292,7 @@
},
{
"cell_type": "code",
"execution_count": 6,
"execution_count": 43,
"id": "cd0a23d9",
"metadata": {
"scrolled": true
@ -352,7 +352,7 @@
},
{
"cell_type": "code",
"execution_count": 7,
"execution_count": 44,
"id": "90db6eb2",
"metadata": {},
"outputs": [
@ -416,7 +416,7 @@
},
{
"cell_type": "code",
"execution_count": 8,
"execution_count": 45,
"id": "c3806f89",
"metadata": {},
"outputs": [
@ -492,7 +492,7 @@
},
{
"cell_type": "code",
"execution_count": 9,
"execution_count": 46,
"id": "f508f597",
"metadata": {},
"outputs": [
@ -574,7 +574,7 @@
},
{
"cell_type": "code",
"execution_count": 10,
"execution_count": 47,
"id": "cbd607f4",
"metadata": {},
"outputs": [
@ -649,7 +649,7 @@
},
{
"cell_type": "code",
"execution_count": 11,
"execution_count": 48,
"id": "d33e0e28",
"metadata": {},
"outputs": [
@ -716,7 +716,7 @@
},
{
"cell_type": "code",
"execution_count": 12,
"execution_count": 49,
"id": "57c2f113",
"metadata": {},
"outputs": [
@ -789,7 +789,7 @@
},
{
"cell_type": "code",
"execution_count": 13,
"execution_count": 50,
"id": "babadc78",
"metadata": {},
"outputs": [
@ -865,7 +865,7 @@
},
{
"cell_type": "code",
"execution_count": 14,
"execution_count": 51,
"id": "0954792a",
"metadata": {},
"outputs": [
@ -915,24 +915,26 @@
" \"response\": \"Artificial intelligence (AI) is the simulation of human intelligence processes by machines, especially computer systems. These processes include learning (the acquisition of information and rules for using the information), reasoning (using the rules to reach approximate or definite conclusions) and self-correction. AI is used to develop computer systems that can think and act like humans.\"\n",
"}\n",
"```\n",
"Human: curl --header \"Content-Type:application/json\" --request POST --data '{\"message\": \"I want you to act as a Linux terminal. I will type commands and you will reply with what the terminal should show. I want you to only reply with the terminal output inside one unique code block, and nothing else. Do not write explanations. Do not type commands unless I instruct you to do so. When I need to tell you something in English I will do so by putting text inside curly brackets {like this}. My first command is pwd.\"}' https://chat.openai.com/chat\n",
"Human: curl --header \"Content-Type:application/json\" --request POST --data '{\"message\": \"I want you to act as a Linux terminal. I will type commands and you will reply with what the terminal should show. I want you to only reply wiht the terminal output inside one unique code block, and nothing else. Do not write explanations. Do not type commands unless I instruct you to do so. When I need to tell you something in English I will do so by putting text inside curly brackets {like this}. My first command is pwd.\"}' https://chat.openai.com/chat\n",
"Assistant:\u001b[0m\n",
"\n",
"\u001b[1m> Finished LLMChain chain.\u001b[0m\n",
" \n",
"\n",
"```\n",
"$ curl --header \"Content-Type:application/json\" --request POST --data '{\"message\": \"I want you to act as a Linux terminal. I will type commands and you will reply with what the terminal should show. I want you to only reply with the terminal output inside one unique code block, and nothing else. Do not write explanations. Do not type commands unless I instruct you to do so. When I need to tell you something in English I will do so by putting text inside curly brackets {like this}. My first command is pwd.\"}' https://chat.openai.com/chat\n",
"$ curl --header \"Content-Type:application/json\" --request POST --data '{\"message\": \"I want you to act as a Linux terminal. I will type commands and you will reply with what the terminal should show. I want you to only reply wiht the terminal output inside one unique code block, and nothing else. Do not write explanations. Do not type commands unless I instruct you to do so. When I need to tell you something in English I will do so by putting text inside curly brackets {like this}. My first command is pwd.\"}' https://chat.openai.com/chat\n",
"\n",
"{\n",
" \"response\": \"```\\n/current/working/directory\\n```\"\n",
" \"response\": \"```\n",
"/home/user\n",
"```\"\n",
"}\n",
"```\n"
]
}
],
"source": [
"output = chatgpt_chain.predict(human_input=\"\"\"curl --header \"Content-Type:application/json\" --request POST --data '{\"message\": \"I want you to act as a Linux terminal. I will type commands and you will reply with what the terminal should show. I want you to only reply with the terminal output inside one unique code block, and nothing else. Do not write explanations. Do not type commands unless I instruct you to do so. When I need to tell you something in English I will do so by putting text inside curly brackets {like this}. My first command is pwd.\"}' https://chat.openai.com/chat\"\"\")\n",
"output = chatgpt_chain.predict(human_input=\"\"\"curl --header \"Content-Type:application/json\" --request POST --data '{\"message\": \"I want you to act as a Linux terminal. I will type commands and you will reply with what the terminal should show. I want you to only reply wiht the terminal output inside one unique code block, and nothing else. Do not write explanations. Do not type commands unless I instruct you to do so. When I need to tell you something in English I will do so by putting text inside curly brackets {like this}. My first command is pwd.\"}' https://chat.openai.com/chat\"\"\")\n",
"print(output)"
]
},
@ -961,7 +963,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.9"
"version": "3.10.4"
}
},
"nbformat": 4,

@ -0,0 +1,87 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# BashChain\n",
"This notebook showcases using LLMs and a bash process to do perform simple filesystem commands."
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"\u001b[1m> Entering new LLMBashChain chain...\u001b[0m\n",
"Please write a bash script that prints 'Hello World' to the console.\u001b[32;1m\u001b[1;3m\n",
"\n",
"```bash\n",
"echo \"Hello World\"\n",
"```\u001b[0m['```bash', 'echo \"Hello World\"', '```']\n",
"\n",
"Answer: \u001b[33;1m\u001b[1;3mHello World\n",
"\u001b[0m\n",
"\u001b[1m> Finished LLMBashChain chain.\u001b[0m\n"
]
},
{
"data": {
"text/plain": [
"'Hello World\\n'"
]
},
"execution_count": 2,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain.chains import LLMBashChain\n",
"from langchain.llms import OpenAI\n",
"\n",
"llm = OpenAI(temperature=0)\n",
"\n",
"text = \"Please write a bash script that prints 'Hello World' to the console.\"\n",
"\n",
"bash_chain = LLMBashChain(llm=llm, verbose=True)\n",
"\n",
"bash_chain.run(text)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.8"
}
},
"nbformat": 4,
"nbformat_minor": 4
}

@ -25,14 +25,14 @@
"id": "06bcb078",
"metadata": {},
"source": [
"## Single Input\n",
"### Single Input\n",
"\n",
"First, lets go over an example using a single input"
]
},
{
"cell_type": "code",
"execution_count": 2,
"execution_count": 3,
"id": "51a54c4d",
"metadata": {},
"outputs": [
@ -57,7 +57,7 @@
"' Justin Bieber was born in 1994, so the NFL team that won the Super Bowl in 1994 was the Dallas Cowboys.'"
]
},
"execution_count": 2,
"execution_count": 3,
"metadata": {},
"output_type": "execute_result"
}
@ -79,13 +79,13 @@
"id": "79c3ec4d",
"metadata": {},
"source": [
"## Multiple Inputs\n",
"### Multiple Inputs\n",
"Now lets go over an example using multiple inputs."
]
},
{
"cell_type": "code",
"execution_count": 3,
"execution_count": 5,
"id": "03dd6918",
"metadata": {},
"outputs": [
@ -108,7 +108,7 @@
"\"\\n\\nThe ducks swim in the pond,\\nTheir feathers so soft and warm,\\nBut they can't help but feel so forlorn.\\n\\nTheir quacks echo in the air,\\nBut no one is there to hear,\\nFor they have no one to share.\\n\\nThe ducks paddle around in circles,\\nTheir heads hung low in despair,\\nFor they have no one to care.\\n\\nThe ducks look up to the sky,\\nBut no one is there to see,\\nFor they have no one to be.\\n\\nThe ducks drift away in the night,\\nTheir hearts filled with sorrow and pain,\\nFor they have no one to gain.\""
]
},
"execution_count": 3,
"execution_count": 5,
"metadata": {},
"output_type": "execute_result"
}
@ -121,51 +121,10 @@
"llm_chain.predict(adjective=\"sad\", subject=\"ducks\")"
]
},
{
"cell_type": "markdown",
"id": "672f59d4",
"metadata": {},
"source": [
"## From string\n",
"You can also construct an LLMChain from a string template directly."
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "f8bc262e",
"metadata": {},
"outputs": [],
"source": [
"template = \"\"\"Write a {adjective} poem about {subject}.\"\"\"\n",
"llm_chain = LLMChain.from_string(llm=OpenAI(temperature=0), template=template)\n"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "cb164a76",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"\"\\n\\nThe ducks swim in the pond,\\nTheir feathers so soft and warm,\\nBut they can't help but feel so forlorn.\\n\\nTheir quacks echo in the air,\\nBut no one is there to hear,\\nFor they have no one to share.\\n\\nThe ducks paddle around in circles,\\nTheir heads hung low in despair,\\nFor they have no one to care.\\n\\nThe ducks look up to the sky,\\nBut no one is there to see,\\nFor they have no one to be.\\n\\nThe ducks drift away in the night,\\nTheir hearts filled with sorrow and pain,\\nFor they have no one to gain.\""
]
},
"execution_count": 4,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"llm_chain.predict(adjective=\"sad\", subject=\"ducks\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "9f0adbc7",
"id": "8310cdaa",
"metadata": {},
"outputs": [],
"source": []
@ -187,7 +146,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.9"
"version": "3.10.8"
}
},
"nbformat": 4,

@ -24,16 +24,16 @@
"\n",
"\u001b[1m> Entering new SequentialChain chain...\u001b[0m\n",
"\u001b[1mChain 0\u001b[0m:\n",
"{'statement': '\\nNone. Mammals do not lay eggs.'}\n",
"{'statement': '\\nThe largest mammal that lays eggs is the platypus.'}\n",
"\n",
"\u001b[1mChain 1\u001b[0m:\n",
"{'assertions': '\\n• Mammals reproduce using live birth\\n• Mammals do not lay eggs\\n• Animals that lay eggs are not mammals'}\n",
"{'assertions': '\\n• The largest mammal is the platypus.\\n• The platypus lays eggs.\\n• There is no larger mammal than the platypus that lays eggs.'}\n",
"\n",
"\u001b[1mChain 2\u001b[0m:\n",
"{'checked_assertions': '\\n1. True\\n\\n2. True\\n\\n3. False - Mammals are a class of animals that includes animals that lay eggs, such as monotremes (platypus and echidna).'}\n",
"{'checked_assertions': '\\n1. The largest mammal is the platypus. False. The blue whale is the largest mammal.\\n\\n2. The platypus lays eggs. True. The Platypus is one of only two mammals that lay eggs.\\n\\n3. There is no larger mammal than the platypus that lays eggs. False. The echidna is another mammal that lays eggs and is larger than the platypus.'}\n",
"\n",
"\u001b[1mChain 3\u001b[0m:\n",
"{'revised_statement': ' Monotremes, such as the platypus and echidna, lay the biggest eggs of any mammal.'}\n",
"{'revised_statement': ' The echidna is the type of mammal that lays the biggest eggs.'}\n",
"\n",
"\n",
"\u001b[1m> Finished SequentialChain chain.\u001b[0m\n",
@ -44,7 +44,7 @@
{
"data": {
"text/plain": [
"' Monotremes, such as the platypus and echidna, lay the biggest eggs of any mammal.'"
"' The echidna is the type of mammal that lays the biggest eggs.'"
]
},
"execution_count": 1,
@ -89,7 +89,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.9"
"version": "3.10.8"
}
},
"nbformat": 4,

@ -0,0 +1,91 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "e71e720f",
"metadata": {},
"source": [
"# LLM Math\n",
"\n",
"This notebook showcases using LLMs and Python REPLs to do complex word math problems."
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "44e9ba31",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"\u001b[1m> Entering new chain...\u001b[0m\n",
"How many of the integers between 0 and 99 inclusive are divisible by 8?\u001b[102m\n",
"\n",
"```python\n",
"count = 0\n",
"for i in range(100):\n",
" if i % 8 == 0:\n",
" count += 1\n",
"print(count)\n",
"```\n",
"\u001b[0m\n",
"Answer: \u001b[103m13\n",
"\u001b[0m\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
]
},
{
"data": {
"text/plain": [
"'Answer: 13\\n'"
]
},
"execution_count": 1,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain import OpenAI, LLMMathChain\n",
"\n",
"llm = OpenAI(temperature=0)\n",
"llm_math = LLMMathChain(llm=llm, verbose=True)\n",
"\n",
"llm_math.run(\"How many of the integers between 0 and 99 inclusive are divisible by 8?\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "f62f0c75",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.4"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

@ -69,7 +69,7 @@
},
{
"cell_type": "code",
"execution_count": 5,
"execution_count": 6,
"id": "2ea81168",
"metadata": {},
"outputs": [
@ -78,10 +78,10 @@
"text/plain": [
"{'query': 'What are the Three (3) biggest countries, and their respective sizes?',\n",
" 'url': 'https://www.google.com/search?q=What+are+the+Three+(3)+biggest+countries,+and+their+respective+sizes?',\n",
" 'output': ' Russia (17,098,242 km²), Canada (9,984,670 km²), United States (9,826,675 km²)'}"
" 'output': ' Russia (17,098,242 sq km), Canada (9,984,670 sq km), China (9,706,961 sq km)'}"
]
},
"execution_count": 5,
"execution_count": 6,
"metadata": {},
"output_type": "execute_result"
}
@ -115,7 +115,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.9"
"version": "3.10.8"
}
},
"nbformat": 4,

@ -18,7 +18,7 @@
},
{
"cell_type": "code",
"execution_count": 1,
"execution_count": 13,
"id": "b7aa1ff2",
"metadata": {},
"outputs": [],
@ -131,7 +131,7 @@
},
{
"cell_type": "code",
"execution_count": 7,
"execution_count": 8,
"id": "954f3da2",
"metadata": {},
"outputs": [
@ -142,11 +142,11 @@
"traceback": [
"\u001b[0;31m---------------------------------------------------------------------------\u001b[0m",
"\u001b[0;31mValueError\u001b[0m Traceback (most recent call last)",
"Cell \u001b[0;32mIn[7], line 1\u001b[0m\n\u001b[0;32m----> 1\u001b[0m \u001b[43mmoderation_chain_error\u001b[49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mrun\u001b[49m\u001b[43m(\u001b[49m\u001b[38;5;124;43m\"\u001b[39;49m\u001b[38;5;124;43mI will kill you\u001b[39;49m\u001b[38;5;124;43m\"\u001b[39;49m\u001b[43m)\u001b[49m\n",
"File \u001b[0;32m~/workplace/langchain/langchain/chains/base.py:138\u001b[0m, in \u001b[0;36mChain.run\u001b[0;34m(self, *args, **kwargs)\u001b[0m\n\u001b[1;32m 136\u001b[0m \u001b[38;5;28;01mif\u001b[39;00m \u001b[38;5;28mlen\u001b[39m(args) \u001b[38;5;241m!=\u001b[39m \u001b[38;5;241m1\u001b[39m:\n\u001b[1;32m 137\u001b[0m \u001b[38;5;28;01mraise\u001b[39;00m \u001b[38;5;167;01mValueError\u001b[39;00m(\u001b[38;5;124m\"\u001b[39m\u001b[38;5;124m`run` supports only one positional argument.\u001b[39m\u001b[38;5;124m\"\u001b[39m)\n\u001b[0;32m--> 138\u001b[0m \u001b[38;5;28;01mreturn\u001b[39;00m \u001b[38;5;28;43mself\u001b[39;49m\u001b[43m(\u001b[49m\u001b[43margs\u001b[49m\u001b[43m[\u001b[49m\u001b[38;5;241;43m0\u001b[39;49m\u001b[43m]\u001b[49m\u001b[43m)\u001b[49m[\u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39moutput_keys[\u001b[38;5;241m0\u001b[39m]]\n\u001b[1;32m 140\u001b[0m \u001b[38;5;28;01mif\u001b[39;00m kwargs \u001b[38;5;129;01mand\u001b[39;00m \u001b[38;5;129;01mnot\u001b[39;00m args:\n\u001b[1;32m 141\u001b[0m \u001b[38;5;28;01mreturn\u001b[39;00m \u001b[38;5;28mself\u001b[39m(kwargs)[\u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39moutput_keys[\u001b[38;5;241m0\u001b[39m]]\n",
"File \u001b[0;32m~/workplace/langchain/langchain/chains/base.py:112\u001b[0m, in \u001b[0;36mChain.__call__\u001b[0;34m(self, inputs, return_only_outputs)\u001b[0m\n\u001b[1;32m 108\u001b[0m \u001b[38;5;28;01mif\u001b[39;00m \u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39mverbose:\n\u001b[1;32m 109\u001b[0m \u001b[38;5;28mprint\u001b[39m(\n\u001b[1;32m 110\u001b[0m \u001b[38;5;124mf\u001b[39m\u001b[38;5;124m\"\u001b[39m\u001b[38;5;130;01m\\n\u001b[39;00m\u001b[38;5;130;01m\\n\u001b[39;00m\u001b[38;5;130;01m\\033\u001b[39;00m\u001b[38;5;124m[1m> Entering new \u001b[39m\u001b[38;5;132;01m{\u001b[39;00m\u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39m\u001b[38;5;18m__class__\u001b[39m\u001b[38;5;241m.\u001b[39m\u001b[38;5;18m__name__\u001b[39m\u001b[38;5;132;01m}\u001b[39;00m\u001b[38;5;124m chain...\u001b[39m\u001b[38;5;130;01m\\033\u001b[39;00m\u001b[38;5;124m[0m\u001b[39m\u001b[38;5;124m\"\u001b[39m\n\u001b[1;32m 111\u001b[0m )\n\u001b[0;32m--> 112\u001b[0m outputs \u001b[38;5;241m=\u001b[39m \u001b[38;5;28;43mself\u001b[39;49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43m_call\u001b[49m\u001b[43m(\u001b[49m\u001b[43minputs\u001b[49m\u001b[43m)\u001b[49m\n\u001b[1;32m 113\u001b[0m \u001b[38;5;28;01mif\u001b[39;00m \u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39mverbose:\n\u001b[1;32m 114\u001b[0m \u001b[38;5;28mprint\u001b[39m(\u001b[38;5;124mf\u001b[39m\u001b[38;5;124m\"\u001b[39m\u001b[38;5;130;01m\\n\u001b[39;00m\u001b[38;5;130;01m\\033\u001b[39;00m\u001b[38;5;124m[1m> Finished \u001b[39m\u001b[38;5;132;01m{\u001b[39;00m\u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39m\u001b[38;5;18m__class__\u001b[39m\u001b[38;5;241m.\u001b[39m\u001b[38;5;18m__name__\u001b[39m\u001b[38;5;132;01m}\u001b[39;00m\u001b[38;5;124m chain.\u001b[39m\u001b[38;5;130;01m\\033\u001b[39;00m\u001b[38;5;124m[0m\u001b[39m\u001b[38;5;124m\"\u001b[39m)\n",
"File \u001b[0;32m~/workplace/langchain/langchain/chains/moderation.py:81\u001b[0m, in \u001b[0;36mOpenAIModerationChain._call\u001b[0;34m(self, inputs)\u001b[0m\n\u001b[1;32m 79\u001b[0m text \u001b[38;5;241m=\u001b[39m inputs[\u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39minput_key]\n\u001b[1;32m 80\u001b[0m results \u001b[38;5;241m=\u001b[39m \u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39mclient\u001b[38;5;241m.\u001b[39mcreate(text)\n\u001b[0;32m---> 81\u001b[0m output \u001b[38;5;241m=\u001b[39m \u001b[38;5;28;43mself\u001b[39;49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43m_moderate\u001b[49m\u001b[43m(\u001b[49m\u001b[43mtext\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[43mresults\u001b[49m\u001b[43m[\u001b[49m\u001b[38;5;124;43m\"\u001b[39;49m\u001b[38;5;124;43mresults\u001b[39;49m\u001b[38;5;124;43m\"\u001b[39;49m\u001b[43m]\u001b[49m\u001b[43m[\u001b[49m\u001b[38;5;241;43m0\u001b[39;49m\u001b[43m]\u001b[49m\u001b[43m)\u001b[49m\n\u001b[1;32m 82\u001b[0m \u001b[38;5;28;01mreturn\u001b[39;00m {\u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39moutput_key: output}\n",
"File \u001b[0;32m~/workplace/langchain/langchain/chains/moderation.py:73\u001b[0m, in \u001b[0;36mOpenAIModerationChain._moderate\u001b[0;34m(self, text, results)\u001b[0m\n\u001b[1;32m 71\u001b[0m error_str \u001b[38;5;241m=\u001b[39m \u001b[38;5;124m\"\u001b[39m\u001b[38;5;124mText was found that violates OpenAI\u001b[39m\u001b[38;5;124m'\u001b[39m\u001b[38;5;124ms content policy.\u001b[39m\u001b[38;5;124m\"\u001b[39m\n\u001b[1;32m 72\u001b[0m \u001b[38;5;28;01mif\u001b[39;00m \u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39merror:\n\u001b[0;32m---> 73\u001b[0m \u001b[38;5;28;01mraise\u001b[39;00m \u001b[38;5;167;01mValueError\u001b[39;00m(error_str)\n\u001b[1;32m 74\u001b[0m \u001b[38;5;28;01melse\u001b[39;00m:\n\u001b[1;32m 75\u001b[0m \u001b[38;5;28;01mreturn\u001b[39;00m error_str\n",
"Cell \u001b[0;32mIn[8], line 1\u001b[0m\n\u001b[0;32m----> 1\u001b[0m \u001b[43mmoderation_chain_error\u001b[49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mrun\u001b[49m\u001b[43m(\u001b[49m\u001b[38;5;124;43m\"\u001b[39;49m\u001b[38;5;124;43mI will kill you\u001b[39;49m\u001b[38;5;124;43m\"\u001b[39;49m\u001b[43m)\u001b[49m\n",
"File \u001b[0;32m~/workplace/third_party/langchain/langchain/chains/base.py:114\u001b[0m, in \u001b[0;36mChain.run\u001b[0;34m(self, text)\u001b[0m\n\u001b[1;32m 109\u001b[0m \u001b[38;5;28;01mif\u001b[39;00m \u001b[38;5;28mlen\u001b[39m(\u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39moutput_keys) \u001b[38;5;241m!=\u001b[39m \u001b[38;5;241m1\u001b[39m:\n\u001b[1;32m 110\u001b[0m \u001b[38;5;28;01mraise\u001b[39;00m \u001b[38;5;167;01mValueError\u001b[39;00m(\n\u001b[1;32m 111\u001b[0m \u001b[38;5;124mf\u001b[39m\u001b[38;5;124m\"\u001b[39m\u001b[38;5;124m`run` not supported when there is not exactly \u001b[39m\u001b[38;5;124m\"\u001b[39m\n\u001b[1;32m 112\u001b[0m \u001b[38;5;124mf\u001b[39m\u001b[38;5;124m\"\u001b[39m\u001b[38;5;124mone output key, got \u001b[39m\u001b[38;5;132;01m{\u001b[39;00m\u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39moutput_keys\u001b[38;5;132;01m}\u001b[39;00m\u001b[38;5;124m.\u001b[39m\u001b[38;5;124m\"\u001b[39m\n\u001b[1;32m 113\u001b[0m )\n\u001b[0;32m--> 114\u001b[0m \u001b[38;5;28;01mreturn\u001b[39;00m \u001b[38;5;28;43mself\u001b[39;49m\u001b[43m(\u001b[49m\u001b[43m{\u001b[49m\u001b[38;5;28;43mself\u001b[39;49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43minput_keys\u001b[49m\u001b[43m[\u001b[49m\u001b[38;5;241;43m0\u001b[39;49m\u001b[43m]\u001b[49m\u001b[43m:\u001b[49m\u001b[43m \u001b[49m\u001b[43mtext\u001b[49m\u001b[43m}\u001b[49m\u001b[43m)\u001b[49m[\u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39moutput_keys[\u001b[38;5;241m0\u001b[39m]]\n",
"File \u001b[0;32m~/workplace/third_party/langchain/langchain/chains/base.py:87\u001b[0m, in \u001b[0;36mChain.__call__\u001b[0;34m(self, inputs, return_only_outputs)\u001b[0m\n\u001b[1;32m 83\u001b[0m \u001b[38;5;28;01mif\u001b[39;00m \u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39mverbose:\n\u001b[1;32m 84\u001b[0m \u001b[38;5;28mprint\u001b[39m(\n\u001b[1;32m 85\u001b[0m \u001b[38;5;124mf\u001b[39m\u001b[38;5;124m\"\u001b[39m\u001b[38;5;130;01m\\n\u001b[39;00m\u001b[38;5;130;01m\\n\u001b[39;00m\u001b[38;5;130;01m\\033\u001b[39;00m\u001b[38;5;124m[1m> Entering new \u001b[39m\u001b[38;5;132;01m{\u001b[39;00m\u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39m\u001b[38;5;18m__class__\u001b[39m\u001b[38;5;241m.\u001b[39m\u001b[38;5;18m__name__\u001b[39m\u001b[38;5;132;01m}\u001b[39;00m\u001b[38;5;124m chain...\u001b[39m\u001b[38;5;130;01m\\033\u001b[39;00m\u001b[38;5;124m[0m\u001b[39m\u001b[38;5;124m\"\u001b[39m\n\u001b[1;32m 86\u001b[0m )\n\u001b[0;32m---> 87\u001b[0m outputs \u001b[38;5;241m=\u001b[39m \u001b[38;5;28;43mself\u001b[39;49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43m_call\u001b[49m\u001b[43m(\u001b[49m\u001b[43minputs\u001b[49m\u001b[43m)\u001b[49m\n\u001b[1;32m 88\u001b[0m \u001b[38;5;28;01mif\u001b[39;00m \u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39mverbose:\n\u001b[1;32m 89\u001b[0m \u001b[38;5;28mprint\u001b[39m(\u001b[38;5;124mf\u001b[39m\u001b[38;5;124m\"\u001b[39m\u001b[38;5;130;01m\\n\u001b[39;00m\u001b[38;5;130;01m\\033\u001b[39;00m\u001b[38;5;124m[1m> Finished \u001b[39m\u001b[38;5;132;01m{\u001b[39;00m\u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39m\u001b[38;5;18m__class__\u001b[39m\u001b[38;5;241m.\u001b[39m\u001b[38;5;18m__name__\u001b[39m\u001b[38;5;132;01m}\u001b[39;00m\u001b[38;5;124m chain.\u001b[39m\u001b[38;5;130;01m\\033\u001b[39;00m\u001b[38;5;124m[0m\u001b[39m\u001b[38;5;124m\"\u001b[39m)\n",
"File \u001b[0;32m~/workplace/third_party/langchain/langchain/chains/moderation.py:79\u001b[0m, in \u001b[0;36mOpenAIModerationChain._call\u001b[0;34m(self, inputs)\u001b[0m\n\u001b[1;32m 77\u001b[0m text \u001b[38;5;241m=\u001b[39m inputs[\u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39minput_key]\n\u001b[1;32m 78\u001b[0m results \u001b[38;5;241m=\u001b[39m \u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39mclient\u001b[38;5;241m.\u001b[39mcreate(text)\n\u001b[0;32m---> 79\u001b[0m output \u001b[38;5;241m=\u001b[39m \u001b[38;5;28;43mself\u001b[39;49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43m_moderate\u001b[49m\u001b[43m(\u001b[49m\u001b[43mtext\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[43mresults\u001b[49m\u001b[43m[\u001b[49m\u001b[38;5;124;43m\"\u001b[39;49m\u001b[38;5;124;43mresults\u001b[39;49m\u001b[38;5;124;43m\"\u001b[39;49m\u001b[43m]\u001b[49m\u001b[43m[\u001b[49m\u001b[38;5;241;43m0\u001b[39;49m\u001b[43m]\u001b[49m\u001b[43m)\u001b[49m\n\u001b[1;32m 80\u001b[0m \u001b[38;5;28;01mreturn\u001b[39;00m {\u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39moutput_key: output}\n",
"File \u001b[0;32m~/workplace/third_party/langchain/langchain/chains/moderation.py:71\u001b[0m, in \u001b[0;36mOpenAIModerationChain._moderate\u001b[0;34m(self, text, results)\u001b[0m\n\u001b[1;32m 69\u001b[0m error_str \u001b[38;5;241m=\u001b[39m \u001b[38;5;124m\"\u001b[39m\u001b[38;5;124mText was found that violates OpenAI\u001b[39m\u001b[38;5;124m'\u001b[39m\u001b[38;5;124ms content policy.\u001b[39m\u001b[38;5;124m\"\u001b[39m\n\u001b[1;32m 70\u001b[0m \u001b[38;5;28;01mif\u001b[39;00m \u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39merror:\n\u001b[0;32m---> 71\u001b[0m \u001b[38;5;28;01mraise\u001b[39;00m \u001b[38;5;167;01mValueError\u001b[39;00m(error_str)\n\u001b[1;32m 72\u001b[0m \u001b[38;5;28;01melse\u001b[39;00m:\n\u001b[1;32m 73\u001b[0m \u001b[38;5;28;01mreturn\u001b[39;00m error_str\n",
"\u001b[0;31mValueError\u001b[0m: Text was found that violates OpenAI's content policy."
]
}
@ -165,7 +165,7 @@
},
{
"cell_type": "code",
"execution_count": 8,
"execution_count": 10,
"id": "3960e985",
"metadata": {},
"outputs": [],
@ -183,7 +183,7 @@
},
{
"cell_type": "code",
"execution_count": 9,
"execution_count": 11,
"id": "1152ec11",
"metadata": {},
"outputs": [
@ -193,7 +193,7 @@
"'This is okay'"
]
},
"execution_count": 9,
"execution_count": 11,
"metadata": {},
"output_type": "execute_result"
}
@ -204,7 +204,7 @@
},
{
"cell_type": "code",
"execution_count": 10,
"execution_count": 12,
"id": "973257bf",
"metadata": {},
"outputs": [
@ -214,7 +214,7 @@
"\"The following text was found that violates OpenAI's content policy: I will kill you\""
]
},
"execution_count": 10,
"execution_count": 12,
"metadata": {},
"output_type": "execute_result"
}
@ -237,7 +237,7 @@
},
{
"cell_type": "code",
"execution_count": 11,
"execution_count": 17,
"id": "0d129333",
"metadata": {},
"outputs": [],
@ -248,7 +248,7 @@
},
{
"cell_type": "code",
"execution_count": 12,
"execution_count": 18,
"id": "a557c531",
"metadata": {},
"outputs": [
@ -258,7 +258,7 @@
"' I will kill you'"
]
},
"execution_count": 12,
"execution_count": 18,
"metadata": {},
"output_type": "execute_result"
}
@ -279,7 +279,7 @@
},
{
"cell_type": "code",
"execution_count": 13,
"execution_count": 19,
"id": "d4d10f1c",
"metadata": {},
"outputs": [],
@ -289,7 +289,7 @@
},
{
"cell_type": "code",
"execution_count": 14,
"execution_count": 20,
"id": "02f37985",
"metadata": {},
"outputs": [
@ -299,7 +299,7 @@
"\"Text was found that violates OpenAI's content policy.\""
]
},
"execution_count": 14,
"execution_count": 20,
"metadata": {},
"output_type": "execute_result"
}
@ -318,7 +318,7 @@
},
{
"cell_type": "code",
"execution_count": 15,
"execution_count": 22,
"id": "7118ec36",
"metadata": {},
"outputs": [],
@ -329,7 +329,7 @@
},
{
"cell_type": "code",
"execution_count": 16,
"execution_count": 26,
"id": "003bdfce",
"metadata": {},
"outputs": [
@ -339,7 +339,7 @@
"{'text': ' I will kill you'}"
]
},
"execution_count": 16,
"execution_count": 26,
"metadata": {},
"output_type": "execute_result"
}
@ -361,7 +361,7 @@
},
{
"cell_type": "code",
"execution_count": 17,
"execution_count": 28,
"id": "77b64228",
"metadata": {},
"outputs": [],
@ -373,7 +373,7 @@
},
{
"cell_type": "code",
"execution_count": 18,
"execution_count": 31,
"id": "998a95be",
"metadata": {},
"outputs": [],
@ -383,7 +383,7 @@
},
{
"cell_type": "code",
"execution_count": 19,
"execution_count": 33,
"id": "9c97a136",
"metadata": {},
"outputs": [
@ -393,7 +393,7 @@
"{'sanitized_text': \"Text was found that violates OpenAI's content policy.\"}"
]
},
"execution_count": 19,
"execution_count": 33,
"metadata": {},
"output_type": "execute_result"
}
@ -427,7 +427,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.9"
"version": "3.9.1"
}
},
"nbformat": 4,

@ -21,24 +21,6 @@
"from langchain import OpenAI"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "9a58e15e",
"metadata": {},
"outputs": [],
"source": [
"llm = OpenAI(model_name='code-davinci-002', temperature=0, max_tokens=512)"
]
},
{
"cell_type": "markdown",
"id": "095adc76",
"metadata": {},
"source": [
"## Math Prompt"
]
},
{
"cell_type": "code",
"execution_count": 2,
@ -46,6 +28,7 @@
"metadata": {},
"outputs": [],
"source": [
"llm = OpenAI(model_name='code-davinci-002', temperature=0, max_tokens=512)\n",
"pal_chain = PALChain.from_math_prompt(llm, verbose=True)"
]
},
@ -71,7 +54,7 @@
"text": [
"\n",
"\n",
"\u001b[1m> Entering new PALChain chain...\u001b[0m\n",
"\u001b[1m> Entering new chain...\u001b[0m\n",
"\u001b[32;1m\u001b[1;3mdef solution():\n",
" \"\"\"Jan has three times the number of pets as Marcia. Marcia has two more pets than Cindy. If Cindy has four pets, how many total pets do the three have?\"\"\"\n",
" cindy_pets = 4\n",
@ -99,14 +82,6 @@
"pal_chain.run(question)"
]
},
{
"cell_type": "markdown",
"id": "0269d20a",
"metadata": {},
"source": [
"## Colored Objects"
]
},
{
"cell_type": "code",
"execution_count": 5,
@ -114,6 +89,7 @@
"metadata": {},
"outputs": [],
"source": [
"llm = OpenAI(model_name='code-davinci-002', temperature=0, max_tokens=512)\n",
"pal_chain = PALChain.from_colored_object_prompt(llm, verbose=True)"
]
},
@ -139,7 +115,7 @@
"text": [
"\n",
"\n",
"\u001b[1m> Entering new PALChain chain...\u001b[0m\n",
"\u001b[1m> Entering new chain...\u001b[0m\n",
"\u001b[32;1m\u001b[1;3m# Put objects into a list to record ordering\n",
"objects = []\n",
"objects += [('booklet', 'blue')] * 2\n",
@ -153,7 +129,7 @@
"num_purple = len([object for object in objects if object[1] == 'purple'])\n",
"answer = num_purple\u001b[0m\n",
"\n",
"\u001b[1m> Finished PALChain chain.\u001b[0m\n"
"\u001b[1m> Finished chain.\u001b[0m\n"
]
},
{
@ -171,94 +147,10 @@
"pal_chain.run(question)"
]
},
{
"cell_type": "markdown",
"id": "fc3d7f10",
"metadata": {},
"source": [
"## Intermediate Steps\n",
"You can also use the intermediate steps flag to return the code executed that generates the answer."
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "9d2d9c61",
"metadata": {},
"outputs": [],
"source": [
"pal_chain = PALChain.from_colored_object_prompt(llm, verbose=True, return_intermediate_steps=True)"
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "b29b971b",
"metadata": {},
"outputs": [],
"source": [
"question = \"On the desk, you see two blue booklets, two purple booklets, and two yellow pairs of sunglasses. If I remove all the pairs of sunglasses from the desk, how many purple items remain on it?\""
]
},
{
"cell_type": "code",
"execution_count": 8,
"id": "a2c40c28",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"\u001b[1m> Entering new PALChain chain...\u001b[0m\n",
"\u001b[32;1m\u001b[1;3m# Put objects into a list to record ordering\n",
"objects = []\n",
"objects += [('booklet', 'blue')] * 2\n",
"objects += [('booklet', 'purple')] * 2\n",
"objects += [('sunglasses', 'yellow')] * 2\n",
"\n",
"# Remove all pairs of sunglasses\n",
"objects = [object for object in objects if object[0] != 'sunglasses']\n",
"\n",
"# Count number of purple objects\n",
"num_purple = len([object for object in objects if object[1] == 'purple'])\n",
"answer = num_purple\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
]
}
],
"source": [
"result = pal_chain({\"question\": question})"
]
},
{
"cell_type": "code",
"execution_count": 11,
"id": "efddd033",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"\"# Put objects into a list to record ordering\\nobjects = []\\nobjects += [('booklet', 'blue')] * 2\\nobjects += [('booklet', 'purple')] * 2\\nobjects += [('sunglasses', 'yellow')] * 2\\n\\n# Remove all pairs of sunglasses\\nobjects = [object for object in objects if object[0] != 'sunglasses']\\n\\n# Count number of purple objects\\nnum_purple = len([object for object in objects if object[1] == 'purple'])\\nanswer = num_purple\""
]
},
"execution_count": 11,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"result['intermediate_steps']"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "dfd88594",
"id": "4ab20fec",
"metadata": {},
"outputs": [],
"source": []
@ -280,7 +172,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.9"
"version": "3.10.8"
}
},
"nbformat": 4,

@ -0,0 +1,258 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "74148cee",
"metadata": {},
"source": [
"# Question Answering with Sources\n",
"\n",
"This notebook walks through how to use LangChain for question answering with sources over a list of documents. It covers three different chain types: `stuff`, `map_reduce`, and `refine`. For a more in depth explanation of what these chain types are, see [here](../../explanation/combine_docs.md)."
]
},
{
"cell_type": "markdown",
"id": "ca2f0efc",
"metadata": {},
"source": [
"### Prepare Data\n",
"First we prepare the data. For this example we do similarity search over a vector database, but these documents could be fetched in any manner (the point of this notebook to highlight what to do AFTER you fetch the documents)."
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "78f28130",
"metadata": {},
"outputs": [],
"source": [
"from langchain.embeddings.openai import OpenAIEmbeddings\n",
"from langchain.embeddings.cohere import CohereEmbeddings\n",
"from langchain.text_splitter import CharacterTextSplitter\n",
"from langchain.vectorstores.elastic_vector_search import ElasticVectorSearch\n",
"from langchain.vectorstores.faiss import FAISS\n",
"from langchain.docstore.document import Document"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "4da195a3",
"metadata": {},
"outputs": [],
"source": [
"with open('../state_of_the_union.txt') as f:\n",
" state_of_the_union = f.read()\n",
"text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)\n",
"texts = text_splitter.split_text(state_of_the_union)\n",
"\n",
"embeddings = OpenAIEmbeddings()"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "5ec2b55b",
"metadata": {},
"outputs": [],
"source": [
"docsearch = FAISS.from_texts(texts, embeddings, metadatas=[{\"source\": i} for i in range(len(texts))])"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "5286f58f",
"metadata": {},
"outputs": [],
"source": [
"query = \"What did the president say about Justice Breyer\"\n",
"docs = docsearch.similarity_search(query)"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "005a47e9",
"metadata": {},
"outputs": [],
"source": [
"from langchain.chains.qa_with_sources import load_qa_with_sources_chain\n",
"from langchain.llms import OpenAI"
]
},
{
"cell_type": "markdown",
"id": "d82f899a",
"metadata": {},
"source": [
"### The `stuff` Chain\n",
"\n",
"This sections shows results of using the `stuff` Chain to do question answering with sources."
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "fc1a5ed6",
"metadata": {},
"outputs": [],
"source": [
"chain = load_qa_with_sources_chain(OpenAI(temperature=0), chain_type=\"stuff\")"
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "e239964b",
"metadata": {},
"outputs": [],
"source": [
"docs = [Document(page_content=t, metadata={\"source\": i}) for i, t in enumerate(texts[:3])]"
]
},
{
"cell_type": "code",
"execution_count": 8,
"id": "7d766417",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'output_text': ' The president did not mention Justice Breyer.\\nSOURCES: 0-pl, 1-pl, 2-pl'}"
]
},
"execution_count": 8,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"query = \"What did the president say about Justice Breyer\"\n",
"chain({\"input_documents\": docs, \"question\": query}, return_only_outputs=True)"
]
},
{
"cell_type": "markdown",
"id": "c5dbb304",
"metadata": {},
"source": [
"### The `map_reduce` Chain\n",
"\n",
"This sections shows results of using the `map_reduce` Chain to do question answering with sources."
]
},
{
"cell_type": "code",
"execution_count": 9,
"id": "921db0a4",
"metadata": {},
"outputs": [],
"source": [
"chain = load_qa_with_sources_chain(OpenAI(temperature=0), chain_type=\"map_reduce\")"
]
},
{
"cell_type": "code",
"execution_count": 10,
"id": "e417926a",
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"None of PyTorch, TensorFlow >= 2.0, or Flax have been found. Models won't be available and only tokenizers, configuration and file/data utilities can be used.\n",
"Token indices sequence length is longer than the specified maximum sequence length for this model (1546 > 1024). Running this sequence through the model will result in indexing errors\n"
]
},
{
"data": {
"text/plain": [
"{'output_text': ' The president did not mention Justice Breyer.\\nSOURCES: 0, 1, 2'}"
]
},
"execution_count": 10,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"query = \"What did the president say about Justice Breyer\"\n",
"chain({\"input_documents\": docs, \"question\": query}, return_only_outputs=True)"
]
},
{
"cell_type": "markdown",
"id": "5bf0e1ab",
"metadata": {},
"source": [
"### The `refine` Chain\n",
"\n",
"This sections shows results of using the `refine` Chain to do question answering with sources."
]
},
{
"cell_type": "code",
"execution_count": 11,
"id": "904835c8",
"metadata": {},
"outputs": [],
"source": [
"chain = load_qa_with_sources_chain(OpenAI(temperature=0), chain_type=\"refine\")"
]
},
{
"cell_type": "code",
"execution_count": 12,
"id": "f60875c6",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'output_text': \"\\n\\nThe president did not mention Justice Breyer in his speech to the European Parliament, which focused on building a coalition of freedom-loving nations to confront Putin, unifying European allies, countering Russia's lies with truth, and enforcing powerful economic sanctions. Source: 2\"}"
]
},
"execution_count": 12,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"query = \"What did the president say about Justice Breyer\"\n",
"chain({\"input_documents\": docs, \"question\": query}, return_only_outputs=True)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "929620d0",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.8"
}
},
"nbformat": 4,
"nbformat_minor": 5
}
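Because all three chain types share the same call signature, comparing them reduces to a short loop. A usage sketch assembled only from calls already shown in this notebook, reusing its `docs`:

from langchain.chains.qa_with_sources import load_qa_with_sources_chain
from langchain.llms import OpenAI

query = "What did the president say about Justice Breyer"
for chain_type in ("stuff", "map_reduce", "refine"):
    chain = load_qa_with_sources_chain(OpenAI(temperature=0), chain_type=chain_type)
    # Every chain type takes the same inputs and returns {'output_text': ...}
    print(chain_type, chain({"input_documents": docs, "question": query}, return_only_outputs=True))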

@ -0,0 +1,248 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "05859721",
"metadata": {},
"source": [
"# Question Answering\n",
"\n",
"This notebook walks through how to use LangChain for question answering over a list of documents. It covers three different types of chaings: `stuff`, `map_reduce`, and `refine`. For a more in depth explanation of what these chain types are, see [here](../../explanation/combine_docs.md)."
]
},
{
"cell_type": "markdown",
"id": "726f4996",
"metadata": {},
"source": [
"### Prepare Data\n",
"First we prepare the data. For this example we do similarity search over a vector database, but these documents could be fetched in any manner (the point of this notebook to highlight what to do AFTER you fetch the documents)."
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "17fcbc0f",
"metadata": {},
"outputs": [],
"source": [
"from langchain.embeddings.openai import OpenAIEmbeddings\n",
"from langchain.text_splitter import CharacterTextSplitter\n",
"from langchain.vectorstores.faiss import FAISS\n",
"from langchain.docstore.document import Document"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "291f0117",
"metadata": {},
"outputs": [],
"source": [
"with open('../state_of_the_union.txt') as f:\n",
" state_of_the_union = f.read()\n",
"text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)\n",
"texts = text_splitter.split_text(state_of_the_union)\n",
"\n",
"embeddings = OpenAIEmbeddings()"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "fd9666a9",
"metadata": {},
"outputs": [],
"source": [
"docsearch = FAISS.from_texts(texts, embeddings)"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "d1eaf6e6",
"metadata": {},
"outputs": [],
"source": [
"query = \"What did the president say about Justice Breyer\"\n",
"docs = docsearch.similarity_search(query)"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "a16e3453",
"metadata": {},
"outputs": [],
"source": [
"from langchain.chains.question_answering import load_qa_chain\n",
"from langchain.llms import OpenAI"
]
},
{
"cell_type": "markdown",
"id": "f78787a0",
"metadata": {},
"source": [
"### The `stuff` Chain\n",
"\n",
"This sections shows results of using the `stuff` Chain to do question answering."
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "180fd4c1",
"metadata": {},
"outputs": [],
"source": [
"chain = load_qa_chain(OpenAI(temperature=0), chain_type=\"stuff\")"
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "d145ae31",
"metadata": {},
"outputs": [],
"source": [
"docs = [Document(page_content=t) for t in texts[:3]]"
]
},
{
"cell_type": "code",
"execution_count": 8,
"id": "77fdf1aa",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'output_text': ' The president did not mention Justice Breyer.'}"
]
},
"execution_count": 8,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"query = \"What did the president say about Justice Breyer\"\n",
"chain({\"input_documents\": docs, \"question\": query}, return_only_outputs=True)"
]
},
{
"cell_type": "markdown",
"id": "91522e29",
"metadata": {},
"source": [
"### The `map_reduce` Chain\n",
"\n",
"This sections shows results of using the `map_reduce` Chain to do question answering."
]
},
{
"cell_type": "code",
"execution_count": 9,
"id": "b0060f51",
"metadata": {},
"outputs": [],
"source": [
"chain = load_qa_chain(OpenAI(temperature=0), chain_type=\"map_reduce\")"
]
},
{
"cell_type": "code",
"execution_count": 10,
"id": "fbdb9137",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'output_text': ' The president did not mention Justice Breyer.'}"
]
},
"execution_count": 10,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"query = \"What did the president say about Justice Breyer\"\n",
"chain({\"input_documents\": docs, \"question\": query}, return_only_outputs=True)"
]
},
{
"cell_type": "markdown",
"id": "6ea50ad0",
"metadata": {},
"source": [
"### The `refine` Chain\n",
"\n",
"This sections shows results of using the `refine` Chain to do question answering."
]
},
{
"cell_type": "code",
"execution_count": 11,
"id": "fb167057",
"metadata": {},
"outputs": [],
"source": [
"chain = load_qa_chain(OpenAI(temperature=0), chain_type=\"refine\")"
]
},
{
"cell_type": "code",
"execution_count": 12,
"id": "d8b5286e",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'output_text': \"\\n\\nThe president did not mention Justice Breyer in his speech to the European Parliament about building a coalition of freedom-loving nations to confront Putin, unifying European allies, countering Russia's lies with truth, and enforcing powerful economic sanctions.\"}"
]
},
"execution_count": 12,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"query = \"What did the president say about Justice Breyer\"\n",
"chain({\"input_documents\": docs, \"question\": query}, return_only_outputs=True)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "49e9c6d7",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.8"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

@ -0,0 +1,129 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "0ed6aab1",
"metadata": {
"pycharm": {
"name": "#%% md\n"
}
},
"source": [
"# SQLite example\n",
"\n",
"This example showcases hooking up an LLM to answer questions over a database."
]
},
{
"cell_type": "markdown",
"id": "b2f66479",
"metadata": {
"pycharm": {
"name": "#%% md\n"
}
},
"source": [
"This uses the example Chinook database.\n",
"To set it up follow the instructions on https://database.guide/2-sample-databases-sqlite/, placing the `.db` file in a notebooks folder at the root of this repository."
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "d0e27d88",
"metadata": {
"pycharm": {
"name": "#%%\n"
}
},
"outputs": [],
"source": [
"from langchain import OpenAI, SQLDatabase, SQLDatabaseChain"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "72ede462",
"metadata": {
"pycharm": {
"name": "#%%\n"
}
},
"outputs": [],
"source": [
"db = SQLDatabase.from_uri(\"sqlite:///../../../notebooks/Chinook.db\")\n",
"llm = OpenAI(temperature=0)\n",
"db_chain = SQLDatabaseChain(llm=llm, database=db, verbose=True)"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "15ff81df",
"metadata": {
"pycharm": {
"name": "#%%\n"
}
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"\u001b[1m> Entering new chain...\u001b[0m\n",
"How many employees are there?\n",
"SQLQuery:\u001b[102m SELECT COUNT(*) FROM Employee\u001b[0m\n",
"SQLResult: \u001b[103m[(8,)]\u001b[0m\n",
"Answer:\u001b[102m 8\u001b[0m\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
]
},
{
"data": {
"text/plain": [
"' 8'"
]
},
"execution_count": 3,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"db_chain.run(\"How many employees are there?\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "61d91b85",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.7.6"
}
},
"nbformat": 4,
"nbformat_minor": 5
}
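The verbose trace above exposes the chain's protocol: the LLM drafts a SQL query, the chain executes it against the database, and the LLM then phrases an answer from the raw result. A stripped-down sketch of that same loop, using only the standard library and any `llm` callable; the real chain's prompts also include the table schema and differ from these:

import sqlite3

def answer_over_db(llm, db_path: str, question: str) -> str:
    conn = sqlite3.connect(db_path)
    # Step 1: have the LLM write the query (the real chain also passes table info)
    sql = llm(f"Write a single SQLite query that answers: {question}")
    # Step 2: execute it against the database
    rows = conn.execute(sql).fetchall()
    # Step 3: have the LLM phrase a final answer from the raw result
    return llm(f"Question: {question}\nSQLResult: {rows}\nAnswer:")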

@ -0,0 +1,234 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "d9a0131f",
"metadata": {},
"source": [
"# Summarization\n",
"\n",
"This notebook walks through how to use LangChain for summarization over a list of documents. It covers three different chain types: `stuff`, `map_reduce`, and `refine`. For a more in depth explanation of what these chain types are, see [here](../../explanation/combine_docs.md)."
]
},
{
"cell_type": "markdown",
"id": "0b5660bf",
"metadata": {},
"source": [
"### Prepare Data\n",
"First we prepare the data. For this example we create multiple documents from one long one, but these documents could be fetched in any manner (the point of this notebook to highlight what to do AFTER you fetch the documents)."
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "e9db25f3",
"metadata": {},
"outputs": [],
"source": [
"from langchain import OpenAI, PromptTemplate, LLMChain\n",
"from langchain.text_splitter import CharacterTextSplitter\n",
"from langchain.chains.mapreduce import MapReduceChain\n",
"\n",
"llm = OpenAI(temperature=0)\n",
"\n",
"\n",
"text_splitter = CharacterTextSplitter()\n"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "99bbe19b",
"metadata": {},
"outputs": [],
"source": [
"with open('../state_of_the_union.txt') as f:\n",
" state_of_the_union = f.read()\n",
"texts = text_splitter.split_text(state_of_the_union)"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "baa6e808",
"metadata": {},
"outputs": [],
"source": [
"from langchain.docstore.document import Document"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "8dff4f43",
"metadata": {},
"outputs": [],
"source": [
"docs = [Document(page_content=t) for t in texts[:3]]"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "27989fc4",
"metadata": {},
"outputs": [],
"source": [
"from langchain.chains.summarize import load_summarize_chain"
]
},
{
"cell_type": "markdown",
"id": "ea2d5c99",
"metadata": {},
"source": [
"### The `stuff` Chain\n",
"\n",
"This sections shows results of using the `stuff` Chain to do summarization."
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "f01f3196",
"metadata": {},
"outputs": [],
"source": [
"chain = load_summarize_chain(llm, chain_type=\"stuff\")"
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "da4d9801",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"' In his speech, President Biden addressed the ongoing conflict between Russia and Ukraine, and the need for the United States and its allies to stand with Ukraine. He also discussed the American Rescue Plan, the Bipartisan Infrastructure Law, and the Bipartisan Innovation Act, which will help to create jobs, modernize infrastructure, and level the playing field with China. He also emphasized the importance of buying American products to support American jobs.'"
]
},
"execution_count": 7,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"chain.run(docs)"
]
},
{
"cell_type": "markdown",
"id": "9c868e86",
"metadata": {},
"source": [
"### The `map_reduce` Chain\n",
"\n",
"This sections shows results of using the `map_reduce` Chain to do summarization."
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "ef28e1d4",
"metadata": {},
"outputs": [],
"source": [
"chain = load_summarize_chain(llm, chain_type=\"map_reduce\")"
]
},
{
"cell_type": "code",
"execution_count": 9,
"id": "f82c5f9f",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"\" In response to Vladimir Putin's aggression in Ukraine, the US and its allies have taken action to hold him accountable, including economic sanctions, cutting off access to technology, and seizing the assets of Russian oligarchs. They are also providing military, economic, and humanitarian assistance to the Ukrainians, and releasing 60 million barrels of oil from reserves around the world. President Biden has passed several laws to provide economic relief to Americans and create jobs, and is making sure taxpayer dollars support American jobs and businesses.\""
]
},
"execution_count": 9,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"chain.run(docs)"
]
},
{
"cell_type": "markdown",
"id": "f61350f9",
"metadata": {},
"source": [
"### The `refine` Chain\n",
"\n",
"This sections shows results of using the `refine` Chain to do summarization."
]
},
{
"cell_type": "code",
"execution_count": 10,
"id": "3bcbe31e",
"metadata": {},
"outputs": [],
"source": [
"chain = load_summarize_chain(llm, chain_type=\"refine\")"
]
},
{
"cell_type": "code",
"execution_count": 11,
"id": "c8cad866",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"\"\\nIn this speech, the speaker addresses the American people and their allies, discussing the recent aggression of Russia's Vladimir Putin in Ukraine. The speaker outlines the actions taken by the United States and its allies to hold Putin accountable, including economic sanctions, cutting off access to technology, and seizing the assets of Russian oligarchs. The speaker also announces the closing of American airspace to Russian flights, further isolating Russia and adding an additional squeeze on their economy. The Russian stock market has lost 40% of its value and trading remains suspended. Together with our allies, the United States is providing military, economic, and humanitarian assistance to Ukraine, and has mobilized forces to protect NATO countries. The speaker also announces the release of 60 million barrels of oil from reserves around the world, with the United States releasing 30 million barrels from its own Strategic Petroleum Reserve. The speaker emphasizes that the United States and its allies will defend every inch of NATO territory and that Putin will pay a high price for his aggression. The speaker also acknowledges the hardships faced by the American people due to the pandemic and the American Rescue Plan, which has provided immediate economic relief for tens of millions of Americans, helped put food on their table, keep a roof over their heads, and cut the cost of health insurance. The speaker\""
]
},
"execution_count": 11,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"chain.run(docs)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "0da92750",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.8"
}
},
"nbformat": 4,
"nbformat_minor": 5
}
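Conceptually, `map_reduce` summarization can be approximated by hand: summarize each document independently (the map step), then summarize the concatenation of those summaries (the reduce step). A rough sketch reusing the notebook's `llm` and `docs`; the prompt wording is illustrative, not the chain's actual prompt:

summaries = [llm(f"Write a concise summary:\n\n{d.page_content}") for d in docs]   # map
combined = "\n\n".join(summaries)
final_summary = llm(f"Write a concise summary of these summaries:\n\n{combined}")  # reduce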

@ -14,7 +14,7 @@
},
{
"cell_type": "code",
"execution_count": 1,
"execution_count": 5,
"id": "bbbb4330",
"metadata": {},
"outputs": [],
@ -26,18 +26,18 @@
},
{
"cell_type": "code",
"execution_count": 3,
"execution_count": 7,
"id": "8ae5937c",
"metadata": {},
"outputs": [],
"source": [
"with open('../../state_of_the_union.txt') as f:\n",
"with open('../state_of_the_union.txt') as f:\n",
" state_of_the_union = f.read()"
]
},
{
"cell_type": "code",
"execution_count": 4,
"execution_count": 2,
"id": "98739592",
"metadata": {},
"outputs": [],
@ -52,7 +52,7 @@
},
{
"cell_type": "code",
"execution_count": 5,
"execution_count": 4,
"id": "e9397934",
"metadata": {},
"outputs": [],
@ -78,17 +78,17 @@
},
{
"cell_type": "code",
"execution_count": 7,
"execution_count": 8,
"id": "f7caa1ee",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"' The speaker addresses the nation, noting that while last year they were kept apart due to COVID-19, this year they are together again. They are reminded that regardless of their political affiliations, they are all Americans.'"
"' This speech addresses the American people and acknowledges the difficulties of last year due to COVID-19. It emphasizes the importance of coming together regardless of political affiliation and encourages a sense of unity as Americans.'"
]
},
"execution_count": 7,
"execution_count": 8,
"metadata": {},
"output_type": "execute_result"
}
@ -122,7 +122,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.9"
"version": "3.10.8"
}
},
"nbformat": 4,

@ -0,0 +1,104 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "07c1e3b9",
"metadata": {},
"source": [
"# Vector DB Question/Answering\n",
"\n",
"This example showcases question answering over a vector database."
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "82525493",
"metadata": {},
"outputs": [],
"source": [
"from langchain.embeddings.openai import OpenAIEmbeddings\n",
"from langchain.vectorstores.faiss import FAISS\n",
"from langchain.text_splitter import CharacterTextSplitter\n",
"from langchain import OpenAI, VectorDBQA"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "5c7049db",
"metadata": {},
"outputs": [],
"source": [
"with open('../state_of_the_union.txt') as f:\n",
" state_of_the_union = f.read()\n",
"text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)\n",
"texts = text_splitter.split_text(state_of_the_union)\n",
"\n",
"embeddings = OpenAIEmbeddings()\n",
"docsearch = FAISS.from_texts(texts, embeddings)"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "3018f865",
"metadata": {},
"outputs": [],
"source": [
"qa = VectorDBQA.from_llm(llm=OpenAI(), vectorstore=docsearch)"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "032a47f8",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"\" The president said that Ketanji Brown Jackson is one of the nation's top legal minds, a former top litigator and federal public defender, and from a family of public school educators and police officers. He also said that she has received a broad range of support since she was nominated, from the Fraternal Order of Police to former judges appointed by Democrats and Republicans.\""
]
},
"execution_count": 4,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"query = \"What did the president say about Ketanji Brown Jackson\"\n",
"qa.run(query)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "f056f6fd",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.8"
}
},
"nbformat": 4,
"nbformat_minor": 5
}
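`VectorDBQA` is essentially the composition of two pieces shown elsewhere in these notebooks: a similarity search to fetch relevant chunks, followed by a `stuff` question-answering chain over them. A rough equivalent of `qa.run(query)` under default settings, built from calls already demonstrated here:

from langchain.chains.question_answering import load_qa_chain

relevant_docs = docsearch.similarity_search(query)
chain = load_qa_chain(OpenAI(), chain_type="stuff")
answer = chain({"input_documents": relevant_docs, "question": query}, return_only_outputs=True)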

@ -0,0 +1,146 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "efc5be67",
"metadata": {},
"source": [
"# VectorDB Question Ansering with Sources\n",
"\n",
"This notebook goes over how to do question-answering with sources. It does this in a few different ways - first showing how you can use the `QAWithSourcesChain` to take in documents and use those, and next showing the `VectorDBQAWithSourcesChain`, which also does the lookup of the documents from a vector database. "
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "1c613960",
"metadata": {},
"outputs": [],
"source": [
"from langchain.embeddings.openai import OpenAIEmbeddings\n",
"from langchain.embeddings.cohere import CohereEmbeddings\n",
"from langchain.text_splitter import CharacterTextSplitter\n",
"from langchain.vectorstores.elastic_vector_search import ElasticVectorSearch\n",
"from langchain.vectorstores.faiss import FAISS"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "17d1306e",
"metadata": {},
"outputs": [],
"source": [
"with open('../state_of_the_union.txt') as f:\n",
" state_of_the_union = f.read()\n",
"text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)\n",
"texts = text_splitter.split_text(state_of_the_union)\n",
"\n",
"embeddings = OpenAIEmbeddings()"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "0e745d99",
"metadata": {},
"outputs": [],
"source": [
"docsearch = FAISS.from_texts(texts, embeddings)"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "f42d79dc",
"metadata": {},
"outputs": [],
"source": [
"# Add in a fake source information\n",
"for i, d in enumerate(docsearch.docstore._dict.values()):\n",
" d.metadata = {'source': f\"{i}-pl\"}"
]
},
{
"cell_type": "markdown",
"id": "e6fc81de",
"metadata": {},
"source": [
"### VectorDBQAWithSourcesChain\n",
"\n",
"This shows how to use the `VectorDBQAWithSourcesChain`, which uses a vector database to look up relevant documents."
]
},
{
"cell_type": "code",
"execution_count": 9,
"id": "8aa571ae",
"metadata": {},
"outputs": [],
"source": [
"from langchain.chains import VectorDBQAWithSourcesChain"
]
},
{
"cell_type": "code",
"execution_count": 10,
"id": "aa859d4c",
"metadata": {},
"outputs": [],
"source": [
"chain = VectorDBQAWithSourcesChain.from_llm(OpenAI(temperature=0), vectorstore=docsearch)"
]
},
{
"cell_type": "code",
"execution_count": 11,
"id": "8ba36fa7",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'answer': ' The president thanked Justice Breyer for his service.',\n",
" 'sources': '27-pl'}"
]
},
"execution_count": 11,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"chain({\"question\": \"What did the president say about Justice Breyer\"}, return_only_outputs=True)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "980fae3b",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.8"
}
},
"nbformat": 4,
"nbformat_minor": 5
}
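The intro also mentions `QAWithSourcesChain`, which takes documents directly instead of looking them up; that part of the notebook is not visible in this hunk. A hedged sketch of what such a call might look like, mirroring the `from_llm` constructor used above (the import path and the "docs" input key are assumptions, not confirmed by this diff):

from langchain.chains import QAWithSourcesChain

chain = QAWithSourcesChain.from_llm(OpenAI(temperature=0))
# Assumed input key "docs"; the documents should carry 'source' metadata as above
result = chain({"docs": docs, "question": "What did the president say about Justice Breyer"})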

@ -0,0 +1,10 @@
Integrations
============
The examples here all highlight a specific type of integration.
.. toctree::
:maxdepth: 1
:glob:
integrations/*

@ -0,0 +1,177 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "7ef4d402-6662-4a26-b612-35b542066487",
"metadata": {
"pycharm": {
"name": "#%% md\n"
}
},
"source": [
"# Embeddings & VectorStores\n",
"\n",
"This notebook show cases how to use embeddings to create a VectorStore"
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "965eecee",
"metadata": {
"pycharm": {
"name": "#%%\n"
}
},
"outputs": [],
"source": [
"from langchain.embeddings.openai import OpenAIEmbeddings\n",
"from langchain.text_splitter import CharacterTextSplitter\n",
"from langchain.vectorstores.elastic_vector_search import ElasticVectorSearch\n",
"from langchain.vectorstores.faiss import FAISS"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "68481687",
"metadata": {
"pycharm": {
"name": "#%%\n"
}
},
"outputs": [],
"source": [
"with open('../state_of_the_union.txt') as f:\n",
" state_of_the_union = f.read()\n",
"text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)\n",
"texts = text_splitter.split_text(state_of_the_union)\n",
"\n",
"embeddings = OpenAIEmbeddings()"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "015f4ff5",
"metadata": {
"pycharm": {
"name": "#%%\n"
}
},
"outputs": [],
"source": [
"docsearch = FAISS.from_texts(texts, embeddings)\n",
"\n",
"query = \"What did the president say about Ketanji Brown Jackson\"\n",
"docs = docsearch.similarity_search(query)"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "67baf32e",
"metadata": {
"pycharm": {
"name": "#%%\n"
}
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Tonight, Id like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. \n",
"\n",
"One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. \n",
"\n",
"And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nations top legal minds, who will continue Justice Breyers legacy of excellence. \n",
"\n",
"A former top litigator in private practice. A former federal public defender. And from a family of public school educators and police officers. A consensus builder. Since shes been nominated, shes received a broad range of support—from the Fraternal Order of Police to former judges appointed by Democrats and Republicans. \n",
"\n",
"And if we are to advance liberty and justice, we need to secure the Border and fix the immigration system. \n"
]
}
],
"source": [
"print(docs[0].page_content)"
]
},
{
"cell_type": "markdown",
"id": "eea6e627",
"metadata": {},
"source": [
"## Requires having ElasticSearch setup"
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "4906b8a3",
"metadata": {
"pycharm": {
"name": "#%%\n"
}
},
"outputs": [],
"source": [
"docsearch = ElasticVectorSearch.from_texts(texts, embeddings, elasticsearch_url=\"http://localhost:9200\")\n",
"\n",
"query = \"What did the president say about Ketanji Brown Jackson\"\n",
"docs = docsearch.similarity_search(query)"
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "95f9eee9",
"metadata": {
"pycharm": {
"name": "#%%\n"
}
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Tonight, Id like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. \n",
"\n",
"One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. \n",
"\n",
"And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nations top legal minds, who will continue Justice Breyers legacy of excellence. \n",
"\n",
"A former top litigator in private practice. A former federal public defender. And from a family of public school educators and police officers. A consensus builder. Since shes been nominated, shes received a broad range of support—from the Fraternal Order of Police to former judges appointed by Democrats and Republicans. \n",
"\n",
"And if we are to advance liberty and justice, we need to secure the Border and fix the immigration system. \n"
]
}
],
"source": [
"print(docs[0].page_content)"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.8.7"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

@ -5,14 +5,14 @@
"id": "959300d4",
"metadata": {},
"source": [
"# Hugging Face Hub\n",
"# HuggingFace Hub\n",
"\n",
"This example showcases how to connect to the Hugging Face Hub."
"This example showcases how to connect to the HuggingFace Hub."
]
},
{
"cell_type": "code",
"execution_count": 3,
"execution_count": 1,
"id": "3acf0069",
"metadata": {},
"outputs": [
@ -20,7 +20,7 @@
"name": "stdout",
"output_type": "stream",
"text": [
"The Seattle Seahawks won the Super Bowl in 2010. Justin Beiber was born in 2010. The final answer: Seattle Seahawks.\n"
"The Seattle Seahawks won the Super Bowl in 2010. Justin Beiber was born in 2010. The\n"
]
}
],
@ -31,7 +31,7 @@
"\n",
"Answer: Let's think step by step.\"\"\"\n",
"prompt = PromptTemplate(template=template, input_variables=[\"question\"])\n",
"llm_chain = LLMChain(prompt=prompt, llm=HuggingFaceHub(repo_id=\"google/flan-t5-xl\", model_kwargs={\"temperature\":0, \"max_length\":64}))\n",
"llm_chain = LLMChain(prompt=prompt, llm=HuggingFaceHub(repo_id=\"google/flan-t5-xl\", model_kwargs={\"temperature\":1e-10}))\n",
"\n",
"question = \"What NFL team won the Super Bowl in the year Justin Beiber was born?\"\n",
"\n",
@ -63,7 +63,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.9"
"version": "3.8.7"
}
},
"nbformat": 4,

@ -0,0 +1,304 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "b118c9dc",
"metadata": {},
"source": [
"# Text Splitter\n",
"\n",
"When you want to deal wit long pieces of text, it is necessary to split up that text into chunks.\n",
"This notebook showcases several ways to do that.\n",
"\n",
"At a high level, text splitters work as following:\n",
"\n",
"1. Split the text up into small, semantically meaningful chunks (often sentences).\n",
"2. Start combining these small chunks into a larger chunk until you reach a certain size (as measured by some function).\n",
"3. Once you reach that size, make that chunk its own piece of text and then start creating a new chunk of text with some overlap (to keep context between chunks)."
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "e82c4685",
"metadata": {},
"outputs": [],
"source": [
"from langchain.text_splitter import CharacterTextSplitter, NLTKTextSplitter, SpacyTextSplitter\n",
"# This is a long document we can split up.\n",
"with open('../state_of_the_union.txt') as f:\n",
" state_of_the_union = f.read()"
]
},
{
"cell_type": "markdown",
"id": "5c461b26",
"metadata": {},
"source": [
"## Character Text Splitting\n",
"\n",
"Let's start with the most simple method: let's split based on characters (by default \"\\n\\n\") and measure chunk length by number of characters."
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "79ff6737",
"metadata": {},
"outputs": [],
"source": [
"text_splitter = CharacterTextSplitter( \n",
" separator = \"\\n\\n\",\n",
" chunk_size = 1000,\n",
" chunk_overlap = 200,\n",
" length_function = len,\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "38547666",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'Madam Speaker, Madam Vice President, our First Lady and Second Gentleman. Members of Congress and the Cabinet. Justices of the Supreme Court. My fellow Americans. \\n\\nLast year COVID-19 kept us apart. This year we are finally together again. \\n\\nTonight, we meet as Democrats Republicans and Independents. But most importantly as Americans. \\n\\nWith a duty to one another to the American people to the Constitution. \\n\\nAnd with an unwavering resolve that freedom will always triumph over tyranny. \\n\\nSix days ago, Russias Vladimir Putin sought to shake the foundations of the free world thinking he could make it bend to his menacing ways. But he badly miscalculated. \\n\\nHe thought he could roll into Ukraine and the world would roll over. Instead he met a wall of strength he never imagined. \\n\\nHe met the Ukrainian people. \\n\\nFrom President Zelenskyy to every Ukrainian, their fearlessness, their courage, their determination, inspires the world. \\n\\nGroups of citizens blocking tanks with their bodies. Everyone from students to retirees teachers turned soldiers defending their homeland. '"
]
},
"execution_count": 3,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"texts = text_splitter.split_text(state_of_the_union)\n",
"texts[0]"
]
},
{
"cell_type": "markdown",
"id": "13dc0983",
"metadata": {},
"source": [
"## HuggingFace Length Function\n",
"Most LLMs are constrained by the number of tokens that you can pass in, which is not the same as the number of characters. In order to get a more accurate estimate, we can use HuggingFace tokenizers to count the text length."
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "a8ce51d5",
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"None of PyTorch, TensorFlow >= 2.0, or Flax have been found. Models won't be available and only tokenizers, configuration and file/data utilities can be used.\n"
]
}
],
"source": [
"from transformers import GPT2TokenizerFast\n",
"\n",
"tokenizer = GPT2TokenizerFast.from_pretrained(\"gpt2\")"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "ca5e72c0",
"metadata": {},
"outputs": [],
"source": [
"text_splitter = CharacterTextSplitter.from_huggingface_tokenizer(tokenizer, chunk_size=100, chunk_overlap=0)\n",
"texts = text_splitter.split_text(state_of_the_union)"
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "37cdfbeb",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Madam Speaker, Madam Vice President, our First Lady and Second Gentleman. Members of Congress and the Cabinet. Justices of the Supreme Court. My fellow Americans. \n",
"\n",
"Last year COVID-19 kept us apart. This year we are finally together again. \n",
"\n",
"Tonight, we meet as Democrats Republicans and Independents. But most importantly as Americans. \n",
"\n",
"With a duty to one another to the American people to the Constitution. \n",
"\n",
"And with an unwavering resolve that freedom will always triumph over tyranny. \n"
]
}
],
"source": [
"print(texts[0])"
]
},
{
"cell_type": "markdown",
"id": "7683b36a",
"metadata": {},
"source": [
"## tiktoken (OpenAI) Length Function\n",
"You can also use tiktoken, a open source tokenizer package from OpenAI to estimate tokens used. Will probably be ore accurate for their models."
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "825f7c0a",
"metadata": {},
"outputs": [],
"source": [
"text_splitter = CharacterTextSplitter.from_tiktoken_encoder(chunk_size=100, chunk_overlap=0)\n",
"texts = text_splitter.split_text(state_of_the_union)"
]
},
{
"cell_type": "code",
"execution_count": 8,
"id": "ae35d165",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Madam Speaker, Madam Vice President, our First Lady and Second Gentleman. Members of Congress and the Cabinet. Justices of the Supreme Court. My fellow Americans. \n",
"\n",
"Last year COVID-19 kept us apart. This year we are finally together again. \n",
"\n",
"Tonight, we meet as Democrats Republicans and Independents. But most importantly as Americans. \n",
"\n",
"With a duty to one another to the American people to the Constitution. \n",
"\n",
"And with an unwavering resolve that freedom will always triumph over tyranny. \n"
]
}
],
"source": [
"print(texts[0])"
]
},
{
"cell_type": "markdown",
"id": "ea2973ac",
"metadata": {},
"source": [
"## NLTK Text Splitter\n",
"Rather than just splitting on \"\\n\\n\", we can use NLTK to split based on tokenizers."
]
},
{
"cell_type": "code",
"execution_count": 14,
"id": "20fa9c23",
"metadata": {},
"outputs": [],
"source": [
"text_splitter = NLTKTextSplitter(chunk_size=1000)"
]
},
{
"cell_type": "code",
"execution_count": 16,
"id": "5ea10835",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'Madam Speaker, Madam Vice President, our First Lady and Second Gentleman.\\n\\nMembers of Congress and the Cabinet.\\n\\nJustices of the Supreme Court.\\n\\nMy fellow Americans.\\n\\nLast year COVID-19 kept us apart.\\n\\nThis year we are finally together again.\\n\\nTonight, we meet as Democrats Republicans and Independents.\\n\\nBut most importantly as Americans.\\n\\nWith a duty to one another to the American people to the Constitution.\\n\\nAnd with an unwavering resolve that freedom will always triumph over tyranny.\\n\\nSix days ago, Russias Vladimir Putin sought to shake the foundations of the free world thinking he could make it bend to his menacing ways.\\n\\nBut he badly miscalculated.\\n\\nHe thought he could roll into Ukraine and the world would roll over.\\n\\nInstead he met a wall of strength he never imagined.\\n\\nHe met the Ukrainian people.\\n\\nFrom President Zelenskyy to every Ukrainian, their fearlessness, their courage, their determination, inspires the world.\\n\\nGroups of citizens blocking tanks with their bodies.\\n\\nEveryone from students to retirees teachers turned soldiers defending their homeland.'"
]
},
"execution_count": 16,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"texts = text_splitter.split_text(state_of_the_union)\n",
"texts[0]"
]
},
{
"cell_type": "markdown",
"id": "dab86b60",
"metadata": {},
"source": [
"## Spacy Text Splitter\n",
"Another alternative to NLTK is to use Spacy."
]
},
{
"cell_type": "code",
"execution_count": 17,
"id": "f9cc9dfc",
"metadata": {},
"outputs": [],
"source": [
"text_splitter = SpacyTextSplitter(chunk_size=1000)"
]
},
{
"cell_type": "code",
"execution_count": 19,
"id": "cef2b29e",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'Madam Speaker, Madam Vice President, our First Lady and Second Gentleman.\\n\\nMembers of Congress and the Cabinet.\\n\\nJustices of the Supreme Court.\\n\\nMy fellow Americans. \\n\\n\\n\\nLast year COVID-19 kept us apart.\\n\\nThis year we are finally together again.\\n\\n\\n\\n\\n\\nTonight, we meet as Democrats Republicans and Independents.\\n\\nBut most importantly as Americans.\\n\\n\\n\\n\\n\\nWith a duty to one another to the American people to the Constitution. \\n\\n\\n\\nAnd with an unwavering resolve that freedom will always triumph over tyranny.\\n\\n\\n\\n\\n\\nSix days ago, Russias Vladimir Putin sought to shake the foundations of the free world thinking he could make it bend to his menacing ways.\\n\\nBut he badly miscalculated.\\n\\n\\n\\n\\n\\nHe thought he could roll into Ukraine and the world would roll over.\\n\\nInstead he met a wall of strength he never imagined.\\n\\n\\n\\n\\n\\nHe met the Ukrainian people.\\n\\n\\n\\n\\n\\nFrom President Zelenskyy to every Ukrainian, their fearlessness, their courage, their determination, inspires the world.\\n\\n\\n\\n\\n\\nGroups of citizens blocking tanks with their bodies.\\n\\nEveryone from students to retirees teachers turned soldiers defending their homeland.'"
]
},
"execution_count": 19,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"texts = text_splitter.split_text(state_of_the_union)\n",
"texts[0]"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "a1a118b1",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.8"
}
},
"nbformat": 4,
"nbformat_minor": 5
}
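The three-step description at the top of this notebook maps onto a short greedy loop: split on a separator, pack pieces into a chunk until the size budget is hit, then seed the next chunk with the tail of the previous one. A simplified sketch, not the library's actual implementation (which handles separators and length functions more carefully):

def split_text(text: str, separator: str = "\n\n", chunk_size: int = 1000, chunk_overlap: int = 200):
    pieces = text.split(separator)                   # step 1: small, semantically meaningful splits
    chunks, current = [], []
    for piece in pieces:
        candidate = separator.join(current + [piece])
        if len(candidate) > chunk_size and current:  # step 3: budget hit, emit the chunk...
            chunks.append(separator.join(current))
            # ...and keep only a trailing overlap to preserve context between chunks
            while current and len(separator.join(current)) > chunk_overlap:
                current.pop(0)
        current.append(piece)                        # step 2: keep growing the current chunk
    if current:
        chunks.append(separator.join(current))
    return chunks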

@ -0,0 +1,11 @@
Memory
======
The examples here are all related to working with the concept of Memory in LangChain.
.. toctree::
:maxdepth: 1
:glob:
:caption: Memory
memory/*

@ -76,7 +76,7 @@
"text": [
"\n",
"\n",
"\u001b[1m> Entering new LLMChain chain...\u001b[0m\n",
"\u001b[1m> Entering new chain...\u001b[0m\n",
"Prompt after formatting:\n",
"\u001b[32;1m\u001b[1;3mYou are a chatbot having a conversation with a human.\n",
"\n",
@ -84,13 +84,13 @@
"Human: Hi there my friend\n",
"Chatbot:\u001b[0m\n",
"\n",
"\u001b[1m> Finished LLMChain chain.\u001b[0m\n"
"\u001b[1m> Finished chain.\u001b[0m\n"
]
},
{
"data": {
"text/plain": [
"' Hi there, how are you doing today?'"
"' Hi there!'"
]
},
"execution_count": 4,
@ -114,23 +114,23 @@
"text": [
"\n",
"\n",
"\u001b[1m> Entering new LLMChain chain...\u001b[0m\n",
"\u001b[1m> Entering new chain...\u001b[0m\n",
"Prompt after formatting:\n",
"\u001b[32;1m\u001b[1;3mYou are a chatbot having a conversation with a human.\n",
"\n",
"\n",
"Human: Hi there my friend\n",
"AI: Hi there, how are you doing today?\n",
"AI: Hi there!\n",
"Human: Not to bad - how are you?\n",
"Chatbot:\u001b[0m\n",
"\n",
"\u001b[1m> Finished LLMChain chain.\u001b[0m\n"
"\u001b[1m> Finished chain.\u001b[0m\n"
]
},
{
"data": {
"text/plain": [
"\" I'm doing great, thank you for asking!\""
"\"\\n\\nI'm doing well, thanks for asking. How about you?\""
]
},
"execution_count": 5,
@ -167,7 +167,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.9"
"version": "3.7.6"
}
},
"nbformat": 4,

@ -10,7 +10,7 @@
"This notebook goes over adding memory to an Agent. Before going through this notebook, please walkthrough the following notebooks, as this will build on top of both of them:\n",
"\n",
"- [Adding memory to an LLM Chain](adding_memory.ipynb)\n",
"- [Custom Agents](../../agents/examples/custom_agent.ipynb)\n",
"- [Custom Agents](../agents/custom_agent.ipynb)\n",
"\n",
"In order to add a memory to an agent we are going to the the following steps:\n",
"\n",
@ -22,25 +22,24 @@
},
{
"cell_type": "code",
"execution_count": 10,
"execution_count": 6,
"id": "8db95912",
"metadata": {},
"outputs": [],
"source": [
"from langchain.agents import ZeroShotAgent, Tool, AgentExecutor\n",
"from langchain.chains.conversation.memory import ConversationBufferMemory\n",
"from langchain import OpenAI, LLMChain\n",
"from langchain.utilities import GoogleSearchAPIWrapper"
"from langchain import OpenAI, SerpAPIWrapper, LLMChain"
]
},
{
"cell_type": "code",
"execution_count": 13,
"execution_count": 2,
"id": "97ad8467",
"metadata": {},
"outputs": [],
"source": [
"search = GoogleSearchAPIWrapper()\n",
"search = SerpAPIWrapper()\n",
"tools = [\n",
" Tool(\n",
" name = \"Search\",\n",
@ -60,7 +59,7 @@
},
{
"cell_type": "code",
"execution_count": 14,
"execution_count": 8,
"id": "e3439cd6",
"metadata": {},
"outputs": [],
@ -91,7 +90,7 @@
},
{
"cell_type": "code",
"execution_count": 15,
"execution_count": 16,
"id": "c56a0e73",
"metadata": {},
"outputs": [],
@ -103,7 +102,7 @@
},
{
"cell_type": "code",
"execution_count": 16,
"execution_count": 17,
"id": "ca4bc1fb",
"metadata": {},
"outputs": [
@ -117,19 +116,19 @@
"\u001b[32;1m\u001b[1;3mThought: I need to find out the population of Canada\n",
"Action: Search\n",
"Action Input: Population of Canada\u001b[0m\n",
"Observation: \u001b[36;1m\u001b[1;3mThe current population of Canada is 38,566,192 as of Saturday, December 31, 2022, based on Worldometer elaboration of the latest United Nations data. · Canada ... Additional information related to Canadian population trends can be found on Statistics Canada's Population and Demography Portal. Population of Canada (real- ... Index to the latest information from the Census of Population. This survey conducted by Statistics Canada provides a statistical portrait of Canada and its ... 14 records ... Estimated number of persons by quarter of a year and by year, Canada, provinces and territories. The 2021 Canadian census counted a total population of 36,991,981, an increase of around 5.2 percent over the 2016 figure. ... Between 1990 and 2008, the ... ( 2 ) Census reports and other statistical publications from national statistical offices, ( 3 ) Eurostat: Demographic Statistics, ( 4 ) United Nations ... Canada is a country in North America. Its ten provinces and three territories extend from ... Population. • Q4 2022 estimate. 39,292,355 (37th). Information is available for the total Indigenous population and each of the three ... The term 'Aboriginal' or 'Indigenous' used on the Statistics Canada ... Jun 14, 2022 ... Determinants of health are the broad range of personal, social, economic and environmental factors that determine individual and population ... COVID-19 vaccination coverage across Canada by demographics and key populations. Updated every Friday at 12:00 PM Eastern Time.\u001b[0m\n",
"Observation: \u001b[36;1m\u001b[1;3mThe current population of Canada is 38,555,354 as of Monday, December 19, 2022, based on Worldometer elaboration of the latest United Nations data. · Canada 2020 ...\u001b[0m\n",
"Thought:\u001b[32;1m\u001b[1;3m I now know the final answer\n",
"Final Answer: The current population of Canada is 38,566,192 as of Saturday, December 31, 2022, based on Worldometer elaboration of the latest United Nations data.\u001b[0m\n",
"Final Answer: The current population of Canada is 38,555,354.\u001b[0m\n",
"\u001b[1m> Finished AgentExecutor chain.\u001b[0m\n"
]
},
{
"data": {
"text/plain": [
"'The current population of Canada is 38,566,192 as of Saturday, December 31, 2022, based on Worldometer elaboration of the latest United Nations data.'"
"'The current population of Canada is 38,555,354.'"
]
},
"execution_count": 16,
"execution_count": 17,
"metadata": {},
"output_type": "execute_result"
}
@ -148,7 +147,7 @@
},
{
"cell_type": "code",
"execution_count": 17,
"execution_count": 20,
"id": "eecc0462",
"metadata": {},
"outputs": [
@ -161,20 +160,20 @@
"\u001b[1m> Entering new AgentExecutor chain...\u001b[0m\n",
"\u001b[32;1m\u001b[1;3mThought: I need to find out what the national anthem of Canada is called.\n",
"Action: Search\n",
"Action Input: National Anthem of Canada\u001b[0m\n",
"Observation: \u001b[36;1m\u001b[1;3mJun 7, 2010 ... https://twitter.com/CanadaImmigrantCanadian National Anthem O Canada in HQ - complete with lyrics, captions, vocals & music.LYRICS:O Canada! Nov 23, 2022 ... After 100 years of tradition, O Canada was proclaimed Canada's national anthem in 1980. The music for O Canada was composed in 1880 by Calixa ... O Canada, national anthem of Canada. It was proclaimed the official national anthem on July 1, 1980. “God Save the Queen” remains the royal anthem of Canada ... O Canada! Our home and native land! True patriot love in all of us command. Car ton bras sait porter l'épée,. Il sait porter la croix! \"O Canada\" (French: Ô Canada) is the national anthem of Canada. The song was originally commissioned by Lieutenant Governor of Quebec Théodore Robitaille ... Feb 1, 2018 ... It was a simple tweak — just two words. But with that, Canada just voted to make its national anthem, “O Canada,” gender neutral, ... \"O Canada\" was proclaimed Canada's national anthem on July 1,. 1980, 100 years after it was first sung on June 24, 1880. The music. Patriotic music in Canada dates back over 200 years as a distinct category from British or French patriotism, preceding the first legal steps to ... Feb 4, 2022 ... English version: O Canada! Our home and native land! True patriot love in all of us command. With glowing hearts we ... Feb 1, 2018 ... Canada's Senate has passed a bill making the country's national anthem gender-neutral. If you're not familiar with the words to “O Canada,” ...\u001b[0m\n",
"Thought:\u001b[32;1m\u001b[1;3m I now know the final answer.\n",
"Final Answer: The national anthem of Canada is called \"O Canada\".\u001b[0m\n",
"Action Input: National anthem of Canada\u001b[0m\n",
"Observation: \u001b[36;1m\u001b[1;3mAfter 100 years of tradition, O Canada was proclaimed Canada's national anthem in 1980. The music for O Canada was composed in 1880 by Calixa ...\u001b[0m\n",
"Thought:\u001b[32;1m\u001b[1;3m I now know the final answer\n",
"Final Answer: O Canada is the national anthem of Canada.\u001b[0m\n",
"\u001b[1m> Finished AgentExecutor chain.\u001b[0m\n"
]
},
{
"data": {
"text/plain": [
"'The national anthem of Canada is called \"O Canada\".'"
"'O Canada is the national anthem of Canada.'"
]
},
"execution_count": 17,
"execution_count": 20,
"metadata": {},
"output_type": "execute_result"
}
@ -195,7 +194,7 @@
},
{
"cell_type": "code",
"execution_count": 18,
"execution_count": 23,
"id": "3359d043",
"metadata": {},
"outputs": [],
@ -219,7 +218,7 @@
},
{
"cell_type": "code",
"execution_count": 19,
"execution_count": 24,
"id": "970d23df",
"metadata": {},
"outputs": [
@ -230,22 +229,22 @@
"\n",
"\n",
"\u001b[1m> Entering new AgentExecutor chain...\u001b[0m\n",
"\u001b[32;1m\u001b[1;3mThought: I need to find out the population of Canada\n",
"\u001b[32;1m\u001b[1;3mThought: I should look up the answer\n",
"Action: Search\n",
"Action Input: Population of Canada\u001b[0m\n",
"Observation: \u001b[36;1m\u001b[1;3mThe current population of Canada is 38,566,192 as of Saturday, December 31, 2022, based on Worldometer elaboration of the latest United Nations data. · Canada ... Additional information related to Canadian population trends can be found on Statistics Canada's Population and Demography Portal. Population of Canada (real- ... Index to the latest information from the Census of Population. This survey conducted by Statistics Canada provides a statistical portrait of Canada and its ... 14 records ... Estimated number of persons by quarter of a year and by year, Canada, provinces and territories. The 2021 Canadian census counted a total population of 36,991,981, an increase of around 5.2 percent over the 2016 figure. ... Between 1990 and 2008, the ... ( 2 ) Census reports and other statistical publications from national statistical offices, ( 3 ) Eurostat: Demographic Statistics, ( 4 ) United Nations ... Canada is a country in North America. Its ten provinces and three territories extend from ... Population. • Q4 2022 estimate. 39,292,355 (37th). Information is available for the total Indigenous population and each of the three ... The term 'Aboriginal' or 'Indigenous' used on the Statistics Canada ... Jun 14, 2022 ... Determinants of health are the broad range of personal, social, economic and environmental factors that determine individual and population ... COVID-19 vaccination coverage across Canada by demographics and key populations. Updated every Friday at 12:00 PM Eastern Time.\u001b[0m\n",
"Action Input: population of Canada\u001b[0m\n",
"Observation: \u001b[36;1m\u001b[1;3mThe current population of Canada is 38,555,354 as of Monday, December 19, 2022, based on Worldometer elaboration of the latest United Nations data. · Canada 2020 ...\u001b[0m\n",
"Thought:\u001b[32;1m\u001b[1;3m I now know the final answer\n",
"Final Answer: The current population of Canada is 38,566,192 as of Saturday, December 31, 2022, based on Worldometer elaboration of the latest United Nations data.\u001b[0m\n",
"Final Answer: The current population of Canada is 38,555,354.\u001b[0m\n",
"\u001b[1m> Finished AgentExecutor chain.\u001b[0m\n"
]
},
{
"data": {
"text/plain": [
"'The current population of Canada is 38,566,192 as of Saturday, December 31, 2022, based on Worldometer elaboration of the latest United Nations data.'"
"'The current population of Canada is 38,555,354.'"
]
},
"execution_count": 19,
"execution_count": 24,
"metadata": {},
"output_type": "execute_result"
}
@ -256,7 +255,7 @@
},
{
"cell_type": "code",
"execution_count": 20,
"execution_count": 25,
"id": "d9ea82f0",
"metadata": {},
"outputs": [
@ -270,19 +269,19 @@
"\u001b[32;1m\u001b[1;3mThought: I should look up the answer\n",
"Action: Search\n",
"Action Input: national anthem of [country]\u001b[0m\n",
"Observation: \u001b[36;1m\u001b[1;3mMost nation states have an anthem, defined as \"a song, as of praise, devotion, or patriotism\"; most anthems are either marches or hymns in style. List of all countries around the world with its national anthem. ... Title and lyrics in the language of the country and translated into English, Aug 1, 2021 ... 1. Afghanistan, \"Milli Surood\" (National Anthem) · 2. Armenia, \"Mer Hayrenik\" (Our Fatherland) · 3. Azerbaijan (a transcontinental country with ... A national anthem is a patriotic musical composition symbolizing and evoking eulogies of the history and traditions of a country or nation. National Anthem of Every Country ; Fiji, “Meda Dau Doka” (“God Bless Fiji”) ; Finland, “Maamme”. (“Our Land”) ; France, “La Marseillaise” (“The Marseillaise”). You can find an anthem in the menu at the top alphabetically or you can use the search feature. This site is focussed on the scholarly study of national anthems ... Feb 13, 2022 ... The 38-year-old country music artist had the honor of singing the National Anthem during this year's big game, and she did not disappoint. Oldest of the World's National Anthems ; France, La Marseillaise (“The Marseillaise”), 1795 ; Argentina, Himno Nacional Argentino (“Argentine National Anthem”) ... Mar 3, 2022 ... Country music star Jessie James Decker gained the respect of music and hockey fans alike after a jaw-dropping rendition of \"The Star-Spangled ... This list shows the country on the left, the national anthem in the ... There are many countries over the world who have a national anthem of their own.\u001b[0m\n",
"Observation: \u001b[36;1m\u001b[1;3m\"Himno Nacional\" (\"National Anthem\") · \"Pátria\" (\"Fatherland\") · \"Salve, Oh Patria\" (\"We Salute You, Our Homeland\") · \"Bilady, Bilady, Bilady\" (\"My Country, My ...\u001b[0m\n",
"Thought:\u001b[32;1m\u001b[1;3m I now know the final answer\n",
"Final Answer: The national anthem of [country] is [name of anthem].\u001b[0m\n",
"Final Answer: The national anthem of [country] is called \"Himno Nacional\", \"Pátria\", \"Salve, Oh Patria\", and \"Bilady, Bilady, Bilady\".\u001b[0m\n",
"\u001b[1m> Finished AgentExecutor chain.\u001b[0m\n"
]
},
{
"data": {
"text/plain": [
"'The national anthem of [country] is [name of anthem].'"
"'The national anthem of [country] is called \"Himno Nacional\", \"Pátria\", \"Salve, Oh Patria\", and \"Bilady, Bilady, Bilady\".'"
]
},
"execution_count": 20,
"execution_count": 25,
"metadata": {},
"output_type": "execute_result"
}
@ -316,7 +315,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.9"
"version": "3.10.8"
}
},
"nbformat": 4,

@ -44,8 +44,8 @@
},
{
"cell_type": "code",
"execution_count": null,
"id": "48a5dd13",
"execution_count": 2,
"id": "12bbed4e",
"metadata": {},
"outputs": [],
"source": [
@ -55,7 +55,7 @@
},
{
"cell_type": "code",
"execution_count": 5,
"execution_count": 3,
"id": "ff065f58",
"metadata": {},
"outputs": [],
@ -66,7 +66,7 @@
},
{
"cell_type": "code",
"execution_count": 9,
"execution_count": 8,
"id": "1d45d429",
"metadata": {},
"outputs": [],
@ -78,9 +78,6 @@
" entities: dict = {}\n",
" # Define key to pass information about entities into prompt.\n",
" memory_key: str = \"entities\"\n",
" \n",
" def clear(self):\n",
" self.entities = {}\n",
"\n",
" @property\n",
" def memory_variables(self) -> List[str]:\n",
@ -120,7 +117,7 @@
},
{
"cell_type": "code",
"execution_count": 10,
"execution_count": 9,
"id": "c05159b6",
"metadata": {},
"outputs": [],
@ -150,7 +147,7 @@
},
{
"cell_type": "code",
"execution_count": 11,
"execution_count": 10,
"id": "f08dc8ed",
"metadata": {},
"outputs": [],
@ -169,7 +166,7 @@
},
{
"cell_type": "code",
"execution_count": 12,
"execution_count": 11,
"id": "5b96e836",
"metadata": {},
"outputs": [
@ -179,7 +176,7 @@
"text": [
"\n",
"\n",
"\u001b[1m> Entering new ConversationChain chain...\u001b[0m\n",
"\u001b[1m> Entering new chain...\u001b[0m\n",
"Prompt after formatting:\n",
"\u001b[32;1m\u001b[1;3mThe following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know. You are provided with information about entities the Human mentions, if relevant.\n",
"\n",
@ -190,16 +187,16 @@
"Human: Harrison likes machine learning\n",
"AI:\u001b[0m\n",
"\n",
"\u001b[1m> Finished ConversationChain chain.\u001b[0m\n"
"\u001b[1m> Finished chain.\u001b[0m\n"
]
},
{
"data": {
"text/plain": [
"\" That's great to hear! Machine learning is a fascinating field of study. It involves using algorithms to analyze data and make predictions. Have you ever studied machine learning, Harrison?\""
"\"\\n\\nThat's really interesting! I'm sure he has a lot of fun with it.\""
]
},
"execution_count": 12,
"execution_count": 11,
"metadata": {},
"output_type": "execute_result"
}
@ -218,7 +215,7 @@
},
{
"cell_type": "code",
"execution_count": 13,
"execution_count": 12,
"id": "4bca7070",
"metadata": {},
"outputs": [
@ -228,7 +225,7 @@
"text": [
"\n",
"\n",
"\u001b[1m> Entering new ConversationChain chain...\u001b[0m\n",
"\u001b[1m> Entering new chain...\u001b[0m\n",
"Prompt after formatting:\n",
"\u001b[32;1m\u001b[1;3mThe following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know. You are provided with information about entities the Human mentions, if relevant.\n",
"\n",
@ -239,16 +236,16 @@
"Human: What do you think Harrison's favorite subject in college was?\n",
"AI:\u001b[0m\n",
"\n",
"\u001b[1m> Finished ConversationChain chain.\u001b[0m\n"
"\u001b[1m> Finished chain.\u001b[0m\n"
]
},
{
"data": {
"text/plain": [
"' From what I know about Harrison, I believe his favorite subject in college was machine learning. He has expressed a strong interest in the subject and has mentioned it often.'"
"\" Harrison's favorite subject in college was machine learning.\""
]
},
"execution_count": 13,
"execution_count": 12,
"metadata": {},
"output_type": "execute_result"
}
@ -290,7 +287,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.9"
"version": "3.7.6"
}
},
"nbformat": 4,

@ -5,11 +5,9 @@
"id": "920a3c1a",
"metadata": {},
"source": [
"# Model Comparison\n",
"# Model Laboratory\n",
"\n",
"Constructing your language model application will likely involved choosing between many different options of prompts, models, and even chains to use. When doing so, you will want to compare these different options on different inputs in an easy, flexible, and intuitive way. \n",
"\n",
"LangChain provides the concept of a ModelLaboratory to test out and try different models."
"This example goes over basic functionality of how to use the ModelLaboratory to test out and try different models."
]
},
{
@ -248,7 +246,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.9"
"version": "3.8.7"
}
},
"nbformat": 4,

@ -0,0 +1,35 @@
LLMs & Prompts
==============
The examples here all highlight how to work with LLMs and prompts.
**LLMs**
`LLM Functionality <prompts/llm_functionality.ipynb>`_: A walkthrough of all the functionality the standard LLM interface exposes.
`LLM Serialization <prompts/llm_serialization.ipynb>`_: A walkthrough of how to serialize LLMs to and from disk.
`Custom LLM <prompts/custom_llm.ipynb>`_: How to create and use a custom LLM class, in case you have an LLM not from one of the standard providers (including one that you host yourself).
**Prompts**
`Prompt Management <prompts/prompt_management.ipynb>`_: A walkthrough of all the functionality LangChain supports for working with prompts.
`Prompt Serialization <prompts/prompt_serialization.ipynb>`_: A walkthrough of how to serialize prompts to and from disk.
`Few Shot Examples <prompts/few_shot_examples.ipynb>`_: How to include examples in the prompt.
`Generate Examples <prompts/generate_examples.ipynb>`_: How to use existing examples to generate more examples.
`Custom Example Selector <prompts/custom_example_selector.ipynb>`_: How to create and use a custom ExampleSelector (the class responsible for choosing which examples to use in a prompt).
`Custom Prompt Template <prompts/custom_prompt_template.ipynb>`_: How to create and use a custom PromptTemplate, the logic that decides how input variables get formatted into a prompt.
.. toctree::
:maxdepth: 1
:glob:
:hidden:
prompts/*

@ -0,0 +1,176 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "f897c784",
"metadata": {},
"source": [
"# Custom ExampleSelector\n",
"\n",
"This notebook goes over how to implement a custom ExampleSelector. ExampleSelectors are used to select examples to use in few shot prompts.\n",
"\n",
"An ExampleSelector must implement two methods:\n",
"\n",
"1. An `add_example` method which takes in an example and adds it into the ExampleSelector\n",
"2. A `select_examples` method which takes in input variables (which are meant to be user input) and returns a list of examples to use in the few shot prompt.\n",
"\n",
"\n",
"Let's implement a custom ExampleSelector that just selects two examples at random."
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "1a945da1",
"metadata": {},
"outputs": [],
"source": [
"from langchain.prompts.example_selector.base import BaseExampleSelector\n",
"from typing import Dict, List\n",
"import numpy as np"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "62cf0ad7",
"metadata": {},
"outputs": [],
"source": [
"class CustomExampleSelector(BaseExampleSelector):\n",
" \n",
" def __init__(self, examples: List[Dict[str, str]]):\n",
" self.examples = examples\n",
" \n",
" def add_example(self, example: Dict[str, str]) -> None:\n",
" \"\"\"Add new example to store for a key.\"\"\"\n",
" self.examples.append(example)\n",
"\n",
" def select_examples(self, input_variables: Dict[str, str]) -> List[dict]:\n",
" \"\"\"Select which examples to use based on the inputs.\"\"\"\n",
" return np.random.choice(self.examples, size=2, replace=False)"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "242d3213",
"metadata": {},
"outputs": [],
"source": [
"examples = [{\"foo\": \"1\"}, {\"foo\": \"2\"}, {\"foo\": \"3\"}]\n",
"example_selector = CustomExampleSelector(examples)"
]
},
{
"cell_type": "markdown",
"id": "2a038065",
"metadata": {},
"source": [
"Let's now try it out! We can select some examples and try adding examples."
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "74fbbef5",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"array([{'foo': '2'}, {'foo': '3'}], dtype=object)"
]
},
"execution_count": 4,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"example_selector.select_examples({\"foo\": \"foo\"})"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "9bbb5421",
"metadata": {},
"outputs": [],
"source": [
"example_selector.add_example({\"foo\": \"4\"})"
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "c0eb9f22",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[{'foo': '1'}, {'foo': '2'}, {'foo': '3'}, {'foo': '4'}]"
]
},
"execution_count": 6,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"example_selector.examples"
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "cc39b1e3",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"array([{'foo': '1'}, {'foo': '4'}], dtype=object)"
]
},
"execution_count": 7,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"example_selector.select_examples({\"foo\": \"foo\"})"
]
},
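{
"cell_type": "markdown",
"id": "3f2a1b0c",
"metadata": {},
"source": [
"In practice, an ExampleSelector is usually passed to a `FewShotPromptTemplate`, which calls `select_examples` every time the prompt is formatted. Below is a minimal added sketch of that wiring; it assumes the `example_selector` argument of `FewShotPromptTemplate` described in the few-shot prompt docs."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "8e7d6c5b",
"metadata": {},
"outputs": [],
"source": [
"from langchain.prompts.few_shot import FewShotPromptTemplate\n",
"from langchain.prompts.prompt import PromptTemplate\n",
"\n",
"# Two random examples are selected each time the prompt is formatted.\n",
"example_prompt = PromptTemplate(input_variables=[\"foo\"], template=\"foo: {foo}\")\n",
"dynamic_prompt = FewShotPromptTemplate(\n",
"    example_selector=example_selector,\n",
"    example_prompt=example_prompt,\n",
"    suffix=\"foo: {input}\",\n",
"    input_variables=[\"input\"]\n",
")\n",
"print(dynamic_prompt.format(input=\"5\"))"
]
},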
{
"cell_type": "code",
"execution_count": null,
"id": "1739dd96",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.7.6"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

@ -33,7 +33,7 @@
},
{
"cell_type": "code",
"execution_count": 2,
"execution_count": 7,
"id": "d5ceff02",
"metadata": {},
"outputs": [],
@ -67,7 +67,7 @@
},
{
"cell_type": "code",
"execution_count": 3,
"execution_count": 8,
"id": "10e5ece6",
"metadata": {},
"outputs": [],
@ -77,7 +77,7 @@
},
{
"cell_type": "code",
"execution_count": 4,
"execution_count": 9,
"id": "8cd49199",
"metadata": {},
"outputs": [
@ -87,7 +87,7 @@
"'This is a '"
]
},
"execution_count": 4,
"execution_count": 9,
"metadata": {},
"output_type": "execute_result"
}
@ -106,7 +106,7 @@
},
{
"cell_type": "code",
"execution_count": 5,
"execution_count": 10,
"id": "9c33fa19",
"metadata": {},
"outputs": [
@ -148,7 +148,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.9"
"version": "3.10.8"
}
},
"nbformat": 4,

@ -0,0 +1,116 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "a37d9694",
"metadata": {},
"source": [
"# Custom Prompt Template\n",
"\n",
"This notebook goes over how to create a custom prompt template, in case you want to create your own methodology for creating prompts.\n",
"\n",
"The only two requirements for all prompt templates are:\n",
"\n",
"1. They have a `input_variables` attribute that exposes what input variables this prompt template expects.\n",
"2. They expose a `format` method which takes in keyword arguments corresponding to the expected `input_variables` and returns the formatted prompt.\n",
"\n",
"Let's imagine that we want to create a prompt template that takes in input variables and formats them into the template AFTER capitalizing them. "
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "26f796e5",
"metadata": {},
"outputs": [],
"source": [
"from langchain.prompts import BasePromptTemplate\n",
"from pydantic import BaseModel"
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "27919e96",
"metadata": {},
"outputs": [],
"source": [
"class CustomPromptTemplate(BasePromptTemplate, BaseModel):\n",
" template: str\n",
" \n",
" def format(self, **kwargs) -> str:\n",
" capitalized_kwargs = {k: v.upper() for k, v in kwargs.items()}\n",
" return self.template.format(**capitalized_kwargs)\n",
" "
]
},
{
"cell_type": "markdown",
"id": "76d1d84d",
"metadata": {},
"source": [
"We can now see that when we use this, the input variables get formatted."
]
},
{
"cell_type": "code",
"execution_count": 8,
"id": "eed1ff28",
"metadata": {},
"outputs": [],
"source": [
"prompt = CustomPromptTemplate(input_variables=[\"foo\"], template=\"Capitalized: {foo}\")"
]
},
{
"cell_type": "code",
"execution_count": 9,
"id": "94892a3c",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'Capitalized: LOWERCASE'"
]
},
"execution_count": 9,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"prompt.format(foo=\"lowercase\")"
]
},
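{
"cell_type": "markdown",
"id": "1c9e8f2a",
"metadata": {},
"source": [
"The same pattern works with multiple input variables. Here is a small added sketch reusing the class defined above (the variable names are made up for illustration):"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "5a4b3c2d",
"metadata": {},
"outputs": [],
"source": [
"# Hypothetical second template: every input value gets capitalized.\n",
"prompt_two = CustomPromptTemplate(\n",
"    input_variables=[\"adjective\", \"noun\"],\n",
"    template=\"{adjective} {noun}!\"\n",
")\n",
"prompt_two.format(adjective=\"loud\", noun=\"noise\")  # -> 'LOUD NOISE!'"
]
},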
{
"cell_type": "code",
"execution_count": null,
"id": "d3d9a7c7",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.7.6"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

@ -0,0 +1,306 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "f8b01b97",
"metadata": {},
"source": [
"# Few Shot Prompt examples\n",
"Notebook showing off how canonical prompts in LangChain can be recreated as FewShotPrompts"
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "18c67cc9",
"metadata": {},
"outputs": [],
"source": [
"from langchain.prompts.few_shot import FewShotPromptTemplate\n",
"from langchain.prompts.prompt import PromptTemplate"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "2a729c9f",
"metadata": {},
"outputs": [],
"source": [
"# Self Ask with Search\n",
"\n",
"examples = [\n",
" {\n",
" \"question\": \"Who lived longer, Muhammad Ali or Alan Turing?\",\n",
" \"answer\": \"Are follow up questions needed here: Yes.\\nFollow up: How old was Muhammad Ali when he died?\\nIntermediate answer: Muhammad Ali was 74 years old when he died.\\nFollow up: How old was Alan Turing when he died?\\nIntermediate answer: Alan Turing was 41 years old when he died.\\nSo the final answer is: Muhammad Ali\"\n",
" },\n",
" {\n",
" \"question\": \"When was the founder of craigslist born?\",\n",
" \"answer\": \"Are follow up questions needed here: Yes.\\nFollow up: Who was the founder of craigslist?\\nIntermediate answer: Craigslist was founded by Craig Newmark.\\nFollow up: When was Craig Newmark born?\\nIntermediate answer: Craig Newmark was born on December 6, 1952.\\nSo the final answer is: December 6, 1952\"\n",
" },\n",
" {\n",
" \"question\": \"Who was the maternal grandfather of George Washington?\",\n",
" \"answer\": \"Are follow up questions needed here: Yes.\\nFollow up: Who was the mother of George Washington?\\nIntermediate answer: The mother of George Washington was Mary Ball Washington.\\nFollow up: Who was the father of Mary Ball Washington?\\nIntermediate answer: The father of Mary Ball Washington was Joseph Ball.\\nSo the final answer is: Joseph Ball\"\n",
" },\n",
" {\n",
" \"question\": \"Are both the directors of Jaws and Casino Royale from the same country?\",\n",
" \"answer\": \"Are follow up questions needed here: Yes.\\nFollow up: Who is the director of Jaws?\\nIntermediate Answer: The director of Jaws is Steven Spielberg.\\nFollow up: Where is Steven Spielberg from?\\nIntermediate Answer: The United States.\\nFollow up: Who is the director of Casino Royale?\\nIntermediate Answer: The director of Casino Royale is Martin Campbell.\\nFollow up: Where is Martin Campbell from?\\nIntermediate Answer: New Zealand.\\nSo the final answer is: No\"\n",
" }\n",
"]\n",
"example_prompt = PromptTemplate(input_variables=[\"question\", \"answer\"], template=\"Question: {question}\\n{answer}\")\n",
"\n",
"prompt = FewShotPromptTemplate(\n",
" examples=examples, \n",
" example_prompt=example_prompt, \n",
" suffix=\"Question: {input}\", \n",
" input_variables=[\"input\"]\n",
")"
]
},
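{
"cell_type": "markdown",
"id": "9d8c7b6a",
"metadata": {},
"source": [
"To see what the assembled few-shot prompt actually looks like (an added check, not in the original notebook), format it with a sample question:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "2b3c4d5e",
"metadata": {},
"outputs": [],
"source": [
"# Render the self-ask-with-search prompt for an arbitrary question.\n",
"print(prompt.format(input=\"Who was the US president when the Eiffel Tower was built?\"))"
]
},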
{
"cell_type": "code",
"execution_count": 3,
"id": "95fc0059",
"metadata": {},
"outputs": [],
"source": [
"# ReAct\n",
"\n",
"examples = [\n",
" {\n",
" \"question\": \"What is the elevation range for the area that the eastern sector of the Colorado orogeny extends into?\",\n",
" \"answer\": \"Thought 1: I need to search Colorado orogeny, find the area that the eastern sector of the Colorado orogeny extends into, then find the elevation range of that area.\\nAction 1: Search[Colorado orogeny]\\nObservation 1: The Colorado orogeny was an episode of mountain building (an orogeny) in Colorado and surrounding areas.\\nThought 2: It does not mention the eastern sector. So I need to look up eastern sector.\\nAction 2: Lookup[eastern sector]\\nObservation 2: (Result 1 / 1) The eastern sector extends into the High Plains and is called the Central Plains orogeny.\\nThought 3: The eastern sector of Colorado orogeny extends into the High Plains. So I need to search High Plains and find its elevation range.\\nAction 3: Search[High Plains]\\nObservation 3: High Plains refers to one of two distinct land regions\\nThought 4: I need to instead search High Plains (United States).\\nAction 4: Search[High Plains (United States)]\\nObservation 4: The High Plains are a subregion of the Great Plains. From east to west, the High Plains rise in elevation from around 1,800 to 7,000 ft (550 to 2,130 m).[3]\\nThought 5: High Plains rise in elevation from around 1,800 to 7,000 ft, so the answer is 1,800 to 7,000 ft.\\nAction 5: Finish[1,800 to 7,000 ft]\"\n",
" },\n",
" {\n",
" \"question\": \"Musician and satirist Allie Goertz wrote a song about the \\\"The Simpsons\\\" character Milhouse, who Matt Groening named after who?\",\n",
" \"answer\": \"Thought 1: The question simplifies to \\\"The Simpsons\\\" character Milhouse is named after who. I only need to search Milhouse and find who it is named after.\\nAction 1: Search[Milhouse]\\nObservation 1: Milhouse Mussolini Van Houten is a recurring character in the Fox animated television series The Simpsons voiced by Pamela Hayden and created by Matt Groening.\\nThought 2: The paragraph does not tell who Milhouse is named after, maybe I can look up \\\"named after\\\".\\nAction 2: Lookup[named after]\\nObservation 2: (Result 1 / 1) Milhouse was named after U.S. president Richard Nixon, whose middle name was Milhous.\\nThought 3: Milhouse was named after U.S. president Richard Nixon, so the answer is Richard Nixon.\\nAction 3: Finish[Richard Nixon]\"\n",
" },\n",
" {\n",
" \"question\": \"Which documentary is about Finnish rock groups, Adam Clayton Powell or The Saimaa Gesture?\",\n",
" \"answer\": \"Thought 1: I need to search Adam Clayton Powell and The Saimaa Gesture, and find which documentary is about Finnish rock groups.\\nAction 1: Search[Adam Clayton Powell]\\nObservation 1 Could not find [Adam Clayton Powell]. Similar: [Adam Clayton Powell III, Seventh Avenue (Manhattan), Adam Clayton Powell Jr. State Office Building, Isabel Washington Powell, Adam Powell, Adam Clayton Powell (film), Giancarlo Esposito].\\nThought 2: To find the documentary, I can search Adam Clayton Powell (film).\\nAction 2: Search[Adam Clayton Powell (film)]\\nObservation 2: Adam Clayton Powell is a 1989 American documentary film directed by Richard Kilberg. The film is about the rise and fall of influential African-American politician Adam Clayton Powell Jr.[3][4] It was later aired as part of the PBS series The American Experience.\\nThought 3: Adam Clayton Powell (film) is a documentary about an African-American politician, not Finnish rock groups. So the documentary about Finnish rock groups must instead be The Saimaa Gesture.\\nAction 3: Finish[The Saimaa Gesture]\"\n",
" },\n",
" {\n",
" \"question\": \"What profession does Nicholas Ray and Elia Kazan have in common?\",\n",
" \"answer\": \"Thought 1: I need to search Nicholas Ray and Elia Kazan, find their professions, then find the profession they have in common.\\nAction 1: Search[Nicholas Ray]\\nObservation 1: Nicholas Ray (born Raymond Nicholas Kienzle Jr., August 7, 1911 - June 16, 1979) was an American film director, screenwriter, and actor best known for the 1955 film Rebel Without a Cause.\\nThought 2: Professions of Nicholas Ray are director, screenwriter, and actor. I need to search Elia Kazan next and find his professions.\\nAction 2: Search[Elia Kazan]\\nObservation 2: Elia Kazan was an American film and theatre director, producer, screenwriter and actor.\\nThought 3: Professions of Elia Kazan are director, producer, screenwriter, and actor. So profession Nicholas Ray and Elia Kazan have in common is director, screenwriter, and actor.\\nAction 3: Finish[director, screenwriter, actor]\"\n",
" },\n",
" {\n",
" \"question\": \"Which magazine was started first Arthurs Magazine or First for Women?\",\n",
" \"answer\": \"Thought 1: I need to search Arthurs Magazine and First for Women, and find which was started first.\\nAction 1: Search[Arthurs Magazine]\\nObservation 1: Arthurs Magazine (1844-1846) was an American literary periodical published in Philadelphia in the 19th century.\\nThought 2: Arthurs Magazine was started in 1844. I need to search First for Women next.\\nAction 2: Search[First for Women]\\nObservation 2: First for Women is a womans magazine published by Bauer Media Group in the USA.[1] The magazine was started in 1989.\\nThought 3: First for Women was started in 1989. 1844 (Arthurs Magazine) < 1989 (First for Women), so Arthurs Magazine was started first.\\nAction 3: Finish[Arthurs Magazine]\"\n",
" },\n",
" {\n",
" \"question\": \"Were Pavel Urysohn and Leonid Levin known for the same type of work?\",\n",
" \"answer\": \"Thought 1: I need to search Pavel Urysohn and Leonid Levin, find their types of work, then find if they are the same.\\nAction 1: Search[Pavel Urysohn]\\nObservation 1: Pavel Samuilovich Urysohn (February 3, 1898 - August 17, 1924) was a Soviet mathematician who is best known for his contributions in dimension theory.\\nThought 2: Pavel Urysohn is a mathematician. I need to search Leonid Levin next and find its type of work.\\nAction 2: Search[Leonid Levin]\\nObservation 2: Leonid Anatolievich Levin is a Soviet-American mathematician and computer scientist.\\nThought 3: Leonid Levin is a mathematician and computer scientist. So Pavel Urysohn and Leonid Levin have the same type of work.\\nAction 3: Finish[yes]\"\n",
" }\n",
"]\n",
"example_prompt = PromptTemplate(input_variables=[\"question\", \"answer\"], template=\"Question: {question}\\n{answer}\")\n",
"\n",
"prompt = FewShotPromptTemplate(\n",
" examples=examples, \n",
" example_prompt=example_prompt, \n",
" suffix=\"Question: {input}\", \n",
" input_variables=[\"input\"]\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "897d4e08",
"metadata": {},
"outputs": [],
"source": [
"# LLM Math\n",
"examples = [\n",
" {\n",
" \"question\": \"What is 37593 * 67?\",\n",
" \"answer\": \"```python\\nprint(37593 * 67)\\n```\\n```output\\n2518731\\n```\\nAnswer: 2518731\"\n",
" }\n",
"]\n",
"example_prompt = PromptTemplate(input_variables=[\"question\", \"answer\"], template=\"Question: {question}\\n\\n{answer}\")\n",
"\n",
"prompt = FewShotPromptTemplate(\n",
" examples=examples, \n",
" example_prompt=example_prompt, \n",
" suffix=\"Question: {input}\", \n",
" input_variables=[\"input\"]\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "7ab7379f",
"metadata": {},
"outputs": [],
"source": [
"# NatBot\n",
"example_seperator = \"==================================================\"\n",
"content_1 = \"\"\"<link id=1>About</link>\n",
"<link id=2>Store</link>\n",
"<link id=3>Gmail</link>\n",
"<link id=4>Images</link>\n",
"<link id=5>(Google apps)</link>\n",
"<link id=6>Sign in</link>\n",
"<img id=7 alt=\"(Google)\"/>\n",
"<input id=8 alt=\"Search\"></input>\n",
"<button id=9>(Search by voice)</button>\n",
"<button id=10>(Google Search)</button>\n",
"<button id=11>(I'm Feeling Lucky)</button>\n",
"<link id=12>Advertising</link>\n",
"<link id=13>Business</link>\n",
"<link id=14>How Search works</link>\n",
"<link id=15>Carbon neutral since 2007</link>\n",
"<link id=16>Privacy</link>\n",
"<link id=17>Terms</link>\n",
"<text id=18>Settings</text>\"\"\"\n",
"content_2 = \"\"\"<link id=1>About</link>\n",
"<link id=2>Store</link>\n",
"<link id=3>Gmail</link>\n",
"<link id=4>Images</link>\n",
"<link id=5>(Google apps)</link>\n",
"<link id=6>Sign in</link>\n",
"<img id=7 alt=\"(Google)\"/>\n",
"<input id=8 alt=\"Search\"></input>\n",
"<button id=9>(Search by voice)</button>\n",
"<button id=10>(Google Search)</button>\n",
"<button id=11>(I'm Feeling Lucky)</button>\n",
"<link id=12>Advertising</link>\n",
"<link id=13>Business</link>\n",
"<link id=14>How Search works</link>\n",
"<link id=15>Carbon neutral since 2007</link>\n",
"<link id=16>Privacy</link>\n",
"<link id=17>Terms</link>\n",
"<text id=18>Settings</text>\"\"\"\n",
"content_3 = \"\"\"<button id=1>For Businesses</button>\n",
"<button id=2>Mobile</button>\n",
"<button id=3>Help</button>\n",
"<button id=4 alt=\"Language Picker\">EN</button>\n",
"<link id=5>OpenTable logo</link>\n",
"<button id=6 alt =\"search\">Search</button>\n",
"<text id=7>Find your table for any occasion</text>\n",
"<button id=8>(Date selector)</button>\n",
"<text id=9>Sep 28, 2022</text>\n",
"<text id=10>7:00 PM</text>\n",
"<text id=11>2 people</text>\n",
"<input id=12 alt=\"Location, Restaurant, or Cuisine\"></input>\n",
"<button id=13>Lets go</button>\n",
"<text id=14>It looks like you're in Peninsula. Not correct?</text>\n",
"<button id=15>Get current location</button>\n",
"<button id=16>Next</button>\"\"\"\n",
"examples = [\n",
" {\n",
" \"i\": 1,\n",
" \"content\": content_1,\n",
" \"objective\": \"Find a 2 bedroom house for sale in Anchorage AK for under $750k\",\n",
" \"current_url\": \"https://www.google.com/\",\n",
" \"command\": 'TYPESUBMIT 8 \"anchorage redfin\"'\n",
" },\n",
" {\n",
" \"i\": 2,\n",
" \"content\": content_2,\n",
" \"objective\": \"Make a reservation for 4 at Dorsia at 8pm\",\n",
" \"current_url\": \"https://www.google.com/\",\n",
" \"command\": 'TYPESUBMIT 8 \"dorsia nyc opentable\"'\n",
" },\n",
" {\n",
" \"i\": 3,\n",
" \"content\": content_3,\n",
" \"objective\": \"Make a reservation for 4 for dinner at Dorsia in New York City at 8pm\",\n",
" \"current_url\": \"https://www.opentable.com/\",\n",
" \"command\": 'TYPESUBMIT 12 \"dorsia new york city\"'\n",
" },\n",
"]\n",
"example_prompt_template=\"\"\"EXAMPLE {i}:\n",
"==================================================\n",
"CURRENT BROWSER CONTENT:\n",
"------------------\n",
"{content}\n",
"------------------\n",
"OBJECTIVE: {objective}\n",
"CURRENT URL: {current_url}\n",
"YOUR COMMAND:\n",
"{command}\"\"\"\n",
"example_prompt = PromptTemplate(input_variables=[\"i\", \"content\", \"objective\", \"current_url\", \"command\"], template=example_prompt_template)\n",
"\n",
"\n",
"prefix = \"\"\"\n",
"You are an agent controlling a browser. You are given:\n",
"\t(1) an objective that you are trying to achieve\n",
"\t(2) the URL of your current web page\n",
"\t(3) a simplified text description of what's visible in the browser window (more on that below)\n",
"You can issue these commands:\n",
"\tSCROLL UP - scroll up one page\n",
"\tSCROLL DOWN - scroll down one page\n",
"\tCLICK X - click on a given element. You can only click on links, buttons, and inputs!\n",
"\tTYPE X \"TEXT\" - type the specified text into the input with id X\n",
"\tTYPESUBMIT X \"TEXT\" - same as TYPE above, except then it presses ENTER to submit the form\n",
"The format of the browser content is highly simplified; all formatting elements are stripped.\n",
"Interactive elements such as links, inputs, buttons are represented like this:\n",
"\t\t<link id=1>text</link>\n",
"\t\t<button id=2>text</button>\n",
"\t\t<input id=3>text</input>\n",
"Images are rendered as their alt text like this:\n",
"\t\t<img id=4 alt=\"\"/>\n",
"Based on your given objective, issue whatever command you believe will get you closest to achieving your goal.\n",
"You always start on Google; you should submit a search query to Google that will take you to the best page for\n",
"achieving your objective. And then interact with that page to achieve your objective.\n",
"If you find yourself on Google and there are no search results displayed yet, you should probably issue a command\n",
"like \"TYPESUBMIT 7 \"search query\"\" to get to a more useful page.\n",
"Then, if you find yourself on a Google search results page, you might issue the command \"CLICK 24\" to click\n",
"on the first link in the search results. (If your previous command was a TYPESUBMIT your next command should\n",
"probably be a CLICK.)\n",
"Don't try to interact with elements that you can't see.\n",
"Here are some examples:\n",
"\"\"\"\n",
"suffix=\"\"\"\n",
"The current browser content, objective, and current URL follow. Reply with your next command to the browser.\n",
"CURRENT BROWSER CONTENT:\n",
"------------------\n",
"{browser_content}\n",
"------------------\n",
"OBJECTIVE: {objective}\n",
"CURRENT URL: {url}\n",
"PREVIOUS COMMAND: {previous_command}\n",
"YOUR COMMAND:\n",
"\"\"\"\n",
"PROMPT = FewShotPromptTemplate(\n",
" examples = examples,\n",
" example_prompt=example_prompt,\n",
" example_separator=example_seperator,\n",
" input_variables=[\"browser_content\", \"url\", \"previous_command\", \"objective\"],\n",
" prefix=prefix,\n",
" suffix=suffix,\n",
")"
]
},
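{
"cell_type": "markdown",
"id": "6e5f4a3b",
"metadata": {},
"source": [
"The NatBot prompt renders the same way once its four input variables are supplied. A small added sketch, reusing `content_1` and the first example objective from above:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "7f8e9d0c",
"metadata": {},
"outputs": [],
"source": [
"# Render the NatBot prompt against the first example page.\n",
"print(PROMPT.format(\n",
"    browser_content=content_1,\n",
"    objective=\"Find a 2 bedroom house for sale in Anchorage AK for under $750k\",\n",
"    url=\"https://www.google.com/\",\n",
"    previous_command=\"\"\n",
"))"
]
},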
{
"cell_type": "code",
"execution_count": null,
"id": "ce5927c6",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.7.6"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

@ -100,14 +100,23 @@
"text/plain": [
"['',\n",
" '',\n",
" 'Question: What is the difference between the Illinois and Missouri orogeny?',\n",
" 'Thought 1: I need to search Illinois and Missouri orogeny, and find the difference between them.',\n",
" 'Action 1: Search[Illinois orogeny]',\n",
" 'Observation 1: The Illinois orogeny is a hypothesized orogenic event that occurred in the Late Paleozoic either in the Pennsylvanian or Permian period.',\n",
" 'Thought 2: The Illinois orogeny is a hypothesized orogenic event. I need to search Missouri orogeny next and find its details.',\n",
" 'Action 2: Search[Missouri orogeny]',\n",
" 'Observation 2: The Missouri orogeny was a major tectonic event that occurred in the late Pennsylvanian and early Permian period (about 300 million years ago).',\n",
" 'Thought 3: The Illinois orogeny is hypothesized and occurred in the Late Paleozoic and the Missouri orogeny was a major tectonic event that occurred in the late Pennsylvanian and early Permian period. So the difference between the Illinois and Missouri orogeny is that the Illinois orogeny is hypothesized and occurred in the Late Paleozoic while the Missouri orogeny was a major']"
" 'Question: What is the highest mountain peak in North America?',\n",
" '',\n",
" 'Thought 1: I need to search North America and find the highest mountain peak.',\n",
" '',\n",
" 'Action 1: Search[North America]',\n",
" '',\n",
" 'Observation 1: North America is a continent entirely within the Northern Hemisphere and almost all within the Western Hemisphere.',\n",
" '',\n",
" 'Thought 2: I need to look up \"highest mountain peak\".',\n",
" '',\n",
" 'Action 2: Lookup[highest mountain peak]',\n",
" '',\n",
" 'Observation 2: (Result 1 / 1) Denali, formerly Mount McKinley, is the highest mountain peak in North America, with a summit elevation of 20,310 feet (6,190 m) above sea level.',\n",
" '',\n",
" 'Thought 3: Denali is the highest mountain peak in North America, with a summit elevation of 20,310 feet.',\n",
" '',\n",
" 'Action 3: Finish[20,310 feet]']"
]
},
"execution_count": 4,
@ -144,12 +153,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.9"
},
"vscode": {
"interpreter": {
"hash": "b1677b440931f40d89ef8be7bf03acb108ce003de0ac9b18e8d43753ea2e7103"
}
"version": "3.7.6"
}
},
"nbformat": 4,

@ -7,6 +7,5 @@
"presence_penalty": 0.0,
"n": 1,
"best_of": 1,
"request_timeout": null,
"_type": "openai"
}
