Compare commits

...

153 Commits

Author SHA1 Message Date
blob42
84d7ad397d langchain-docker readme 2023-03-03 22:55:44 +01:00
blob42
de551d62a8 linting in docker and parallel make jobs
- linting can be run in docker in parallel with `make -j4 docker.lint`
2023-03-03 22:55:44 +01:00
blob42
d8fd0e790c enable test + lint on docker 2023-03-03 22:55:44 +01:00
blob42
97c2b31cc5 added all extra dependencies to dev image + customized builds
- downgraded to python 3.10 to accommodate installing all dependencies
- by default installs all dev + extra dependencies
- option to install only dev dependencies by customizing .env file
2023-03-03 22:55:44 +01:00
blob42
f1dc03d0cc docker development image and helper makefile
separate makefile and build env:

- separate makefile for docker
- only show docker commands when docker detected in system
- only rebuild container on change
- use an unprivileged user

builder image and base dev image:

- fully isolated environment inside container.
- all venv installed inside container shell and available as commands.
    - ex: `docker run IMG jupyter notebook` to launch notebook.
- pure python based container without poetry.
- custom motd to add a message displayed to users when they connect to
container.
- print environment versions (git, package, python) on login
- display help message when starting container
2023-03-03 22:55:44 +01:00
Harrison Chase
f76e9eaab1 bump version (#1342) 2023-03-03 22:55:44 +01:00
Harrison Chase
db2e9c2b0d partial variables (#1308) 2023-03-03 22:55:44 +01:00
Tim Asp
d22651d82a Add new iFixit document loader (#1333)
iFixit is a Wikipedia-like site that has a huge amount of open content
on how to fix things, questions/answers for common troubleshooting, and
"things"-related content that is more technical in nature. All content
is licensed under CC-BY-SA-NC 3.0.

Adding docs from iFixit as context for user questions like "I dropped my
phone in water, what do I do?" or "My macbook pro is making a whining
noise, what's wrong with it?" can yield significantly better responses
than context-free responses from LLMs.
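A minimal usage sketch (the `IFixitLoader` class name and the device URL are assumptions for illustration, not taken from this PR):

```python
from langchain.document_loaders import IFixitLoader  # assumed class name

# load a single iFixit page as Documents (hypothetical URL)
loader = IFixitLoader("https://www.ifixit.com/Device/Standard_iPad")
docs = loader.load()
```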
2023-03-03 22:55:44 +01:00
Matt Robinson
c46478d70e feat: document loader for image files (#1330)
### Summary

Adds a document loader for image files such as `.jpg` and `.png` files.

### Testing

Run the following using the example document from the [`unstructured`
repo](https://github.com/Unstructured-IO/unstructured/tree/main/example-docs).

```python
from langchain.document_loaders.image import UnstructuredImageLoader

loader = UnstructuredImageLoader("layout-parser-paper-fast.jpg")
loader.load()
```
2023-03-03 22:55:44 +01:00
Eugene Yurtsev
e3fcc72879 Documentation: Minor typo fixes (#1327)
Fixing a few minor typos in the documentation (and likely introducing
other
ones in the process).
2023-03-03 22:55:44 +01:00
blob42
2fdb1d842b refactoring into submodules 2023-03-03 22:55:15 +01:00
blob42
c30ef7dbc4 drop network capabilities by default, example on using networking 2023-03-03 21:59:22 +01:00
blob42
8a7871ece3 add exec_attached: attach to running container and exec cmd 2023-03-03 21:22:45 +01:00
blob42
201ecdc9ee fix run and exec_run default commands, actually use gVisor
- run and exec_run need a separate default command. Run usually executes
  a script while exec_run simulates an interactive session. The image
  templates and run funcs have been upgraded to handle both
  types of commands.

- test: make docker tests run when docker is installed and docker lib
  available.
  - test that runsc runtime is used by default when gVisor is installed.
    (manually removing gVisor skips the test)
2023-03-02 22:33:17 +01:00
blob42
149fe0055e exec_run fixes to keep stdin open 2023-03-02 20:39:48 +01:00
blob42
096b82f2a1 update notebook for utility 2023-03-02 20:32:10 +01:00
blob42
87b5a84cfb update tests and docstrings 2023-03-02 19:33:48 +01:00
blob42
ed97aa65af exec_run: add timeout and delay params
- use `delay` to wait for sent payload to finish
- use `timeout` to control how long to wait for output
2023-03-02 19:11:58 +01:00
blob42
c9e6baf60d image templates, enhanced wrapper building with custom parameters
- quickly run or exec_run commands with sane defaults
- wip image templates with parameters for common docker images
- shell escaping logic
- capture stdout+stderr for exec commands
- added minimal testing
2023-03-02 04:23:59 +01:00
blob42
7cde1cbfc3 docker: attach to container's stdin
- wip image helper for optimized params with common images
- gVisor runtime checker
- make tests skipped if docker is not installed
2023-02-27 18:31:06 +01:00
blob42
17213209e0 stream stdin and stdout to container through docker API's socket 2023-02-27 18:31:06 +01:00
blob42
895f862662 docker wrapper tool for untrusted execution 2023-02-27 18:31:06 +01:00
Harrison Chase
f61858163d
bump version to 0.0.95 (#1324) 2023-02-27 07:45:54 -08:00
Harrison Chase
0824d65a5c
Harrison/indexing pipeline (#1317) 2023-02-27 00:31:36 -08:00
Akshay
a0bf856c70
Update agent_vectorstore.ipynb (#1318)
nitpicking, but just thought I'd fix this typo which I found when going
through the How-to 😄 (unless it was intentional). Also, it's amazing that
you added ReAct to LangChain!
2023-02-26 23:22:35 -08:00
Harrison Chase
166cda2cc6
Harrison/deeplake (#1316)
Co-authored-by: Davit Buniatyan <d@activeloop.ai>
2023-02-26 22:35:04 -08:00
Harrison Chase
aaad6cc954
Harrison/atlas db (#1315)
Co-authored-by: Brandon Duderstadt <brandonduderstadt@gmail.com>
2023-02-26 22:11:38 -08:00
Marc Puig
3989c793fd
Making it possible to use "certainty" as a parameter for the weaviate similarity_search (#1218)
Checks whether the weaviate similarity_search kwargs contain "certainty" and
uses it accordingly. The minimal level of certainty must be a float, and
it is computed from the normalized distance.
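A minimal sketch of the new behavior (the client and store construction are assumed; only the `certainty` kwarg is what this PR adds):

```python
import weaviate
from langchain.vectorstores import Weaviate

client = weaviate.Client("http://localhost:8080")  # assumed local instance
vectorstore = Weaviate(client, "Paragraph", "content")

# certainty is a float in [0, 1]; higher values demand closer matches
docs = vectorstore.similarity_search("query", k=4, certainty=0.7)
```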
2023-02-26 17:55:28 -08:00
Alexander Hoyle
42b892c21b
Avoid IntegrityError for SQLiteCache updates (#1286)
While using a `SQLiteCache`, if there are duplicate `(prompt, llm, idx)`
tuples passed to
[`update_cache()`](c5dd491a21/langchain/llms/base.py (L39)),
then an `IntegrityError` is thrown. This can happen when there are
duplicated prompts within the same batch.

This PR changes the SQLAlchemy `session.add()` to a `session.merge()` in
`cache.py`, [following the solution from this SO
thread](https://stackoverflow.com/questions/10322514/dealing-with-duplicate-primary-keys-on-insert-in-sqlalchemy-declarative-style).
I believe this fixes #983, but I am not entirely sure, since that issue
also involves async.

Here's a minimal example of the error:
```python
from pathlib import Path

import langchain
from langchain.cache import SQLiteCache

llm = langchain.OpenAI(model_name="text-ada-001", openai_api_key=Path("/.openai_api_key").read_text().strip())
langchain.llm_cache = SQLiteCache("test_cache.db")
llm.generate(['a'] * 5)
```
```
>   IntegrityError: (sqlite3.IntegrityError) UNIQUE constraint failed: full_llm_cache.prompt, full_llm_cache.llm, full_llm_cache.idx
    [SQL: INSERT INTO full_llm_cache (prompt, llm, idx, response) VALUES (?, ?, ?, ?)]
    [parameters: ('a', "[('_type', 'openai'), ('best_of', 1), ('frequency_penalty', 0), ('logit_bias', {}), ('max_tokens', 256), ('model_name', 'text-ada-001'), ('n', 1), ('presence_penalty', 0), ('request_timeout', None), ('stop', None), ('temperature', 0.7), ('top_p', 1)]", 0, '\n\nA is for air.\n\nA is for atmosphere.')]
    (Background on this error at: https://sqlalche.me/e/14/gkpj)
```

After the change, we now have the following:
```python
class Output:
    def __init__(self, text):
        self.text = text

# make dummy data
cache = SQLiteCache("test_cache_2.db")
cache.update(prompt="prompt_0", llm_string="llm_0", return_val=[Output("text_0")])
cache.engine.execute("SELECT * FROM full_llm_cache").fetchall()

# output
>   [('prompt_0', 'llm_0', 0, 'text_0')]
```

```python
#  update data, before change this would have thrown an `IntegrityError`
cache.update(prompt="prompt_0", llm_string="llm_0", return_val=[Output("text_0_new")])
cache.engine.execute("SELECT * FROM full_llm_cache").fetchall()

# output
>   [('prompt_0', 'llm_0', 0, 'text_0_new')]
```
2023-02-26 17:54:43 -08:00
Harrison Chase
81abcae91a
Harrison/banana fix (#1311)
Co-authored-by: Erik Dunteman <44653944+erik-dunteman@users.noreply.github.com>
2023-02-26 17:53:57 -08:00
Casey A. Fitzpatrick
648b3b3909
Fix use case sentence for bash util doc (#1295)
Thanks for all your hard work!

I noticed a small typo in the bash util doc so here's a quick update.
Additionally, my formatter caught some spacing in the `.md` as well.
Happy to revert that if it's an issue.

The main change is just
```
- A common use case this is for letting it interact with your local file system. 

+ A common use case for this is letting the LLM interact with your local file system.
```

## Testing

`make docs_build` succeeds locally and the changes show as expected ✌️ 
<img width="704" alt="image"
src="https://user-images.githubusercontent.com/17773666/221376160-e99e59a6-b318-49d1-a1d7-89f5c17cdab4.png">
2023-02-26 17:41:03 -08:00
Ingo Kleiber
fd9975dad7
add CoNLL-U document loader (#1297)
I've added a simple
[CoNLL-U](https://universaldependencies.org/format.html) document
loader. CoNLL-U is a common format for NLP tasks and is used, for
example, in the Universal Dependencies treebank corpora. The loader
reads a single file in standard CoNLL-U format and returns a document.
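A minimal usage sketch (the `CoNLLULoader` class name and file path are assumptions for illustration):

```python
from langchain.document_loaders import CoNLLULoader  # assumed class name

loader = CoNLLULoader("example.conllu")  # hypothetical file in CoNLL-U format
document = loader.load()
```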
2023-02-26 17:27:00 -08:00
Harrison Chase
d29f74114e
copy paste loader (#1302) 2023-02-26 17:26:37 -08:00
Harrison Chase
ce441edd9c
improve docs (#1309) 2023-02-26 11:25:16 -08:00
Harrison Chase
6f30d68581
add example of using agent with vectorstores (#1285) 2023-02-25 13:27:24 -08:00
Harrison Chase
002da6edc0
ruff ruff (#1203) 2023-02-25 08:59:52 -08:00
Harrison Chase
0963096491
fix imports (#1288) 2023-02-25 08:48:02 -08:00
Harrison Chase
c5dd491a21
bump version to 0094 (#1280) 2023-02-24 08:26:34 -08:00
Matt Robinson
2f15c11b87
feat: document loader for MS Word documents (#1282)
### Summary

Adds a document loader for MS Word Documents. Works with both `.docx`
and `.doc` files as long as the user has installed
`unstructured>=0.4.11`.

### Testing

The following workflow tests the loader for both `.doc` and `.docx` files
using example docs from the `unstructured` repo.

#### `.docx`

```python
from langchain.document_loaders import UnstructuredWordDocumentLoader

filename = "../unstructured/example-docs/fake.docx"
loader = UnstructuredWordDocumentLoader(filename)
loader.load()
```

#### `.doc`

```python
from langchain.document_loaders import UnstructuredWordDocumentLoader

filename = "../unstructured/example-docs/fake.doc"
loader = UnstructuredWordDocumentLoader(filename)
loader.load()
```
2023-02-24 08:26:19 -08:00
Harrison Chase
96db6ed073
cleanup (#1274) 2023-02-24 07:38:24 -08:00
Harrison Chase
7e8f832cd6
Harrison/cohere params (#1278)
Co-authored-by: Stefano Faraggi <40745694+stepp1@users.noreply.github.com>
2023-02-24 07:37:58 -08:00
Harrison Chase
a8e88e1874
Harrison/logprobs (#1279)
Co-authored-by: Prateek Shah <97124740+prateekspanning@users.noreply.github.com>
2023-02-24 07:37:45 -08:00
Harrison Chase
42167a1e24
Harrison/fb loader (#1277)
Co-authored-by: Vairo Di Pasquale <vairo.dp@gmail.com>
2023-02-24 07:22:48 -08:00
Harrison Chase
bb53d9722d
Harrison/errors (#1276)
Co-authored-by: Kevin Huo <5000881+kwhuo68@users.noreply.github.com>
2023-02-24 07:13:47 -08:00
Klein Tahiraj
8a0751dadd
adding .ipynb loader and documentation Fixes #1248 (#1252)
`NotebookLoader.load()` loads the `.ipynb` notebook file into a
`Document` object (see the usage sketch after the parameter list).

**Parameters**:

* `include_outputs` (bool): whether to include cell outputs in the
resulting document (default is False).
* `max_output_length` (int): the maximum number of characters to include
from each cell output (default is 10).
* `remove_newline` (bool): whether to remove newline characters from the
cell sources and outputs (default is False).
* `traceback` (bool): whether to include full traceback (default is
False).
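A minimal usage sketch combining the parameters above (the notebook path is hypothetical):

```python
from langchain.document_loaders import NotebookLoader

loader = NotebookLoader(
    "example_notebook.ipynb",  # hypothetical path
    include_outputs=True,
    max_output_length=20,
    remove_newline=True,
)
docs = loader.load()
```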
2023-02-24 07:10:35 -08:00
Harrison Chase
4b5d427421
Harrison/source docs (#1275)
Co-authored-by: Tushar Dhadiwal <tushardhadiwal@users.noreply.github.com>
2023-02-24 07:09:10 -08:00
Enrico Shippole
9becdeaadf
Add Writer, Banana, Modal, StochasticAI (#1270)
Add LLM wrappers and examples for Banana, Writer, Modal, Stochastic AI

Added rigid json format for Banana and Modal
2023-02-24 06:58:58 -08:00
5457d48416
searx: add query_suffix parameter (#1259)
- allows building tools and dynamically injecting an extra search suffix into
  the query. Example:
  `search.run("python library", query_suffix="site:github.com")`
  resulting query: `python library site:github.com`

Co-authored-by: blob42 <spike@w530>
2023-02-23 16:00:40 -08:00
Harrison Chase
9381005098
fix bug with length function (#1257) 2023-02-23 16:00:15 -08:00
Matt Robinson
10e73a3723
docs: remove nltk download steps (#1253)
### Summary

Updates the docs to remove the `nltk` download steps from
`unstructured`. As of `unstructured` `0.4.14`, this is handled
automatically in the relevant modules within `unstructured`.
2023-02-23 12:34:44 -08:00
Justin Torre
5bc6dc076e
added caching and properties docs (#1255) 2023-02-23 11:03:04 -08:00
Harrison Chase
6d37d089e9
bump version to 0093 (#1251) 2023-02-23 08:00:42 -08:00
Iskren Ivov Chernev
8e3cd3e0dd
Add DeepInfra LLM support (#1232)
DeepInfra is an Inference-as-a-Service provider. Add a simple wrapper
using HTTPS requests.
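A minimal sketch of how the wrapper might be used (the class name, `model_id`, and token parameter are assumptions):

```python
from langchain.llms import DeepInfra  # assumed wrapper name

# hypothetical model id and token
llm = DeepInfra(model_id="google/flan-t5-xl", deepinfra_api_token="...")
print(llm("What is the capital of France?"))
```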
2023-02-23 07:37:15 -08:00
Dmitri Melikyan
b7765a95a0
docs: add Graphsignal ecosystem page (#1228)
Adds a Graphsignal ecosystem page
2023-02-23 07:33:00 -08:00
Satoru Sakamoto
d480330fae
fix to specific language transcript (#1231)
Currently the YouTube loader only seems to support English audio.
Changed it to load videos in the specified language.
2023-02-23 07:32:46 -08:00
Harrison Chase
6085fe18d4
add ifttt tool (#1244) 2023-02-22 22:29:43 -08:00
Jon Luo
8a35811556
Don't instruct LLM to use the LIMIT clause, which is incompatible with SQL Server (#1242)
The current prompt specifically instructs the LLM to use the `LIMIT`
clause. This will cause issues with MS SQL Server, which uses `SELECT
TOP` instead of `LIMIT`. The generated SQL will use `LIMIT`; the
instruction to "always limit... using the LIMIT clause" seems to
override the "create a syntactically correct mssql query to run"
portion. Reported here:
https://github.com/hwchase17/langchain/issues/1103#issuecomment-1441144224

I don't have access to a SQL Server instance to test, but removing that
part of the prompt in OpenAI Playground results in the correct `SELECT
TOP` syntax, whereas keeping it in results in the `LIMIT` clause, even
when instructing it to generate syntactically correct mssql. It's also
still correctly using `LIMIT` in my MariaDB database. I think in this
case we can assume that the model will select the appropriate method
based on the dialect specified.

In general, it would be nice to be able to test a suite of SQL dialects
for things like dialect-specific syntax and other issues we've run into
in the past, but I'm not quite sure how to best approach that yet.
2023-02-22 22:21:26 -08:00
Harrison Chase
71709ad5d5
Update key_concepts.md (#1209) (#1237)
Link for easier navigation (it's not immediately clear where to find
more info on SimpleSequentialChain, which is 3 clicks away).

---------

Co-authored-by: Larry Fisherman <l4rryfisherman@protonmail.com>
2023-02-22 13:30:53 -08:00
Dennis Antela Martinez
53c67e04d4
add aleph alpha llm (#1207)
Integrate Aleph Alpha's client into LangChain to provide access to the
Luminous models. More info on the latest benchmarks here:
https://www.aleph-alpha.com/luminous-performance-benchmarks
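A minimal sketch of how the wrapper might be used (the class name and parameters are assumptions):

```python
from langchain.llms import AlephAlpha  # assumed wrapper name

llm = AlephAlpha(model="luminous-base", maximum_tokens=64, aleph_alpha_api_key="...")
print(llm("Q: What is the capital of France?\nA:"))
```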
2023-02-22 10:37:36 -08:00
Klein Tahiraj
c6ab1bb3cb
Fixing typo in loading.py (#1235)
Just fixing a typo I found in loading.py
2023-02-22 10:36:14 -08:00
Ikko Eltociear Ashimine
334b553260
Update petals.md (#1225)
Huggingface -> Hugging Face
2023-02-22 10:34:16 -08:00
Jon Luo
ac1320aae8
fix sqlite internal tables breaking table_info (#1224)
With the current method used to get the SQL table info, sqlite internal
schema tables are being included and are not being handled correctly by
sqlalchemy because the columns have no types. This is easy to see with
the Chinook database:
```python
db = SQLDatabase.from_uri("sqlite:///Chinook.db")
print(db.table_info)
```
```python
...
sqlalchemy.exc.CompileError: (in table 'sqlite_sequence', column 'name'): Can't generate DDL for NullType(); did you forget to specify a type on this Column?
```

SQLAlchemy 2.0 [ignores these by
default](63d90b0f44/lib/sqlalchemy/dialects/sqlite/base.py (L856-L880)):

63d90b0f44/lib/sqlalchemy/dialects/sqlite/base.py (L2096-L2123)
2023-02-22 10:34:05 -08:00
djacobs7
4e28982d2b
Fix typo in constitutional_ai base.py (#1216)
Found a typo in the documentation code for the constitutional_ai module
2023-02-21 17:03:44 -08:00
Sason
cc7d2e5621
Correct typo in "Question Answering" How-To Guide (#1221) 2023-02-21 17:02:58 -08:00
424e71705d
searx: remove duplicate param (#1219)
Co-authored-by: blob42 <spike@w530>
2023-02-21 17:02:42 -08:00
Harrison Chase
4e43b0efe9
bump version 0092 (#1204) 2023-02-21 08:56:07 -08:00
Matt Robinson
3d5f56a8a1
docs: add quotes to unstructured[local-inference] install instructions (#1208)
### Summary

Corrects the install instruction for local inference to `pip install
"unstructured[local-inference]"`
2023-02-21 08:06:43 -08:00
Harrison Chase
047231840d
add docs for chroma persistence (#1202) 2023-02-20 23:04:17 -08:00
Harrison Chase
5bdb8dd6fe
Harrison/unstructured io (#1200) 2023-02-20 22:54:49 -08:00
Harrison Chase
d90a287d8f
Harrison/updating docs (#1196) 2023-02-20 22:54:26 -08:00
Harrison Chase
b7708bbec6
rfc: callback changes (#1165)
conceptually, no reason a tool should know what an "agent action" is

unless any objections, can change in all callback handlers
2023-02-20 22:54:15 -08:00
Harrison Chase
fb83cd4ff4
catch networkx error (#1201) 2023-02-20 21:43:02 -08:00
Harrison Chase
44c8d8a9ac
move serpapi wrapper (#1199)
Co-authored-by: Tim Asp <707699+timothyasp@users.noreply.github.com>
2023-02-20 21:15:45 -08:00
Konstantin Hebenstreit
af94f1dd97
HuggingFaceEndpoint: Correct Example for ImportError (#1176)
When I try to import the class HuggingFaceEndpoint I get an
ImportError: cannot import name 'HuggingFaceEndpoint' from 'langchain'
(langchain version 0.0.88).
These two imports work fine: `from langchain import HuggingFacePipeline`
and `from langchain import HuggingFaceHub`.

So I corrected the import statement in the example. There is probably a
better solution to this, but this fixes the Error for me.
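The corrected import, as a sketch (assuming the class lives in the `llms` submodule, per the fix described above):

```python
# instead of `from langchain import HuggingFaceEndpoint`:
from langchain.llms import HuggingFaceEndpoint
```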
2023-02-20 21:09:39 -08:00
Harrison Chase
0c84ce1082
Harrison/add documents (#1197)
Co-authored-by: OmriNach <32659330+OmriNach@users.noreply.github.com>
2023-02-20 21:02:28 -08:00
Francisco Ingham
0b6a650cb4
added ability to override default verbose and memory when load chain … (#1153)
It is useful to be able to specify `verbose` or `memory` while still
keeping the chain's overall structure.
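A minimal sketch of the override (the `load_chain` entry point is assumed and the serialized-chain path is hypothetical):

```python
from langchain.chains import load_chain

# verbose now overrides the value stored in the serialized chain
chain = load_chain("chain.json", verbose=True)
```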

---------

Co-authored-by: Francisco Ingham <>
2023-02-20 21:00:32 -08:00
Anton Troynikov
d2ef5d6167
Default Chroma collection name (#1198)
For persistence, it's convenient to have a default collection name which
gets used everywhere.
2023-02-20 20:59:34 -08:00
Dennis Antela Martinez
23243ae69c
add gitbook document loader (#1180)
Added a GitBook document loader. It lets you either (1) fetch text from
any single GitBook page, or (2) fetch all relative paths and return
their respective content as Documents.

I've modified the `scrape` method in the `WebBaseLoader` to accept
custom web paths if given, but happy to remove it and move that logic
into the `GitbookLoader` itself.
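A minimal usage sketch for both modes (the `load_all_paths` parameter name is an assumption):

```python
from langchain.document_loaders import GitbookLoader

# (1) fetch text from a single GitBook page
page_docs = GitbookLoader("https://docs.gitbook.com").load()

# (2) fetch all relative paths and return their content as Documents
all_docs = GitbookLoader("https://docs.gitbook.com", load_all_paths=True).load()
```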
2023-02-20 20:05:04 -08:00
William FH
13ba0177d0
Add a StdIn "Interaction" Tool (#1193)
Lets a chain prompt the user for more input as a part of its execution.
2023-02-20 18:40:02 -08:00
Naveen Tatikonda
0118706fd6
Add Support for OpenSearch Vector database (#1191)
### Description
This PR adds a wrapper which adds support for the OpenSearch vector
database. Using the opensearch-py client, we ingest the embeddings of the
given text into an OpenSearch cluster using the Bulk API. We can perform
`similarity_search` on the index using the 3 popular search methods
of the OpenSearch k-NN plugin (a usage sketch follows this list):

- `Approximate k-NN Search` use approximate nearest neighbor (ANN)
algorithms from the [nmslib](https://github.com/nmslib/nmslib),
[faiss](https://github.com/facebookresearch/faiss), and
[Lucene](https://lucene.apache.org/) libraries to power k-NN search.
- `Script Scoring` extends OpenSearch’s script scoring functionality to
execute a brute force, exact k-NN search.
- `Painless Scripting` adds the distance functions as painless
extensions that can be used in more complex combinations. Also, supports
brute force, exact k-NN search like Script Scoring.
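A minimal sketch of ingesting and searching (the cluster URL and texts are illustrative; `OpenSearchVectorSearch` is the assumed wrapper name):

```python
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores import OpenSearchVectorSearch

# ingest embeddings into an assumed local OpenSearch cluster
docsearch = OpenSearchVectorSearch.from_texts(
    ["hello world", "foo bar"],
    OpenAIEmbeddings(),
    opensearch_url="http://localhost:9200",
)
docs = docsearch.similarity_search("hello", k=1)
```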

### Issues Resolved 
https://github.com/hwchase17/langchain/issues/1054

---------

Signed-off-by: Naveen Tatikonda <navtat@amazon.com>
2023-02-20 18:39:34 -08:00
Andrew White
c5015d77e2
Allow k to be higher than doc size in max_marginal_relevance_search (#1187)
Fixes issue #1186. For some reason, #1117 didn't seem to fix it.
2023-02-20 16:39:13 -08:00
Zach Schillaci
159c560c95
Refactor some loops into list comprehensions (#1185) 2023-02-20 16:38:43 -08:00
Harrison Chase
926c121b98
Harrison/text splitter docs (#1188) 2023-02-20 15:14:03 -08:00
Harrison Chase
91446a5e9b
clean up text splitting docs (#1184) 2023-02-20 11:24:31 -08:00
Harrison Chase
a5a14405ad
bump version to 0091 (#1181) 2023-02-20 08:53:45 -08:00
Harrison Chase
5a954efdd7
update gallery with slack bot (#1177) 2023-02-20 08:21:00 -08:00
Harrison Chase
4766b20223
clean up loaders (#1178) 2023-02-20 08:20:48 -08:00
9962bda70b
searx_search: docs updates (#1175)
- fix notebook formatting, remove empty cells and add scrolling for long
text

---------

Co-authored-by: blob42 <spike@w530>
2023-02-20 06:46:44 -08:00
Harrison Chase
4f3fbd7267
improve docs for indexes (#1146) 2023-02-19 23:14:50 -08:00
Harrison Chase
28781a6213
Harrison/markdown splitter (#1169)
Co-authored-by: Michael Chen <flamingdescent@gmail.com>
Co-authored-by: Michael Chen <michaelchen@stripe.com>
2023-02-19 21:31:58 -08:00
Harrison Chase
37dd34bea5
fix path (#1168) 2023-02-19 21:28:49 -08:00
Nan Wang
e8f224fd3a
docs: add missing links to toc (#1163)
add missing links to toc

---------

Signed-off-by: Nan Wang <nan.wang@jina.ai>
2023-02-19 21:15:11 -08:00
Nick
afe884fb96
AI21 documentation incorrectly titled Cohere (#1167) 2023-02-19 21:14:59 -08:00
Ji
ed37fbaeff
for ChatVectorDBChain, add top_k_docs_for_context to allow control how many chunks of context will be retrieved (#1155)
Given that we allow the user to define the chunk size, I think it would be
useful for the user to define how many chunks of context will be retrieved.
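A minimal sketch of the new parameter (the LLM, embeddings, and vectorstore setup are assumed for illustration):

```python
from langchain.chains import ChatVectorDBChain
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.llms import OpenAI
from langchain.vectorstores import FAISS

vectorstore = FAISS.from_texts(["chunk one", "chunk two", "chunk three"], OpenAIEmbeddings())
qa = ChatVectorDBChain.from_llm(
    OpenAI(temperature=0),
    vectorstore,
    top_k_docs_for_context=2,  # retrieve 2 chunks instead of the default
)
result = qa({"question": "What is in chunk one?", "chat_history": []})
```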
2023-02-19 20:48:23 -08:00
Harrison Chase
955c89fccb
pass in prompts to vectordbqa (#1158) 2023-02-19 20:47:17 -08:00
Harrison Chase
65cc81c479
directory loader improvements (#1162) 2023-02-19 20:47:08 -08:00
Harrison Chase
05a05bcb04
bump version to 0.0.90 (#1157) 2023-02-19 12:53:55 -08:00
Harrison Chase
9d6d8f85da
Harrison/self hosted runhouse (#1154)
Co-authored-by: Donny Greenberg <dongreenberg2@gmail.com>
Co-authored-by: John Dagdelen <jdagdelen@users.noreply.github.com>
Co-authored-by: Harrison Chase <harrisonchase@Harrisons-MBP.attlocal.net>
Co-authored-by: Andrew White <white.d.andrew@gmail.com>
Co-authored-by: Peng Qu <82029664+pengqu123@users.noreply.github.com>
Co-authored-by: Matt Robinson <mthw.wm.robinson@gmail.com>
Co-authored-by: jeff <tangj1122@gmail.com>
Co-authored-by: Harrison Chase <harrisonchase@Harrisons-MacBook-Pro.local>
Co-authored-by: zanderchase <zander@unfold.ag>
Co-authored-by: Charles Frye <cfrye59@gmail.com>
Co-authored-by: zanderchase <zanderchase@gmail.com>
Co-authored-by: Shahriar Tajbakhsh <sh.tajbakhsh@gmail.com>
Co-authored-by: Stefan Keselj <skeselj@princeton.edu>
Co-authored-by: Francisco Ingham <fpingham@gmail.com>
Co-authored-by: Dhruv Anand <105786647+dhruv-anand-aintech@users.noreply.github.com>
Co-authored-by: cragwolfe <cragcw@gmail.com>
Co-authored-by: Anton Troynikov <atroyn@users.noreply.github.com>
Co-authored-by: William FH <13333726+hinthornw@users.noreply.github.com>
Co-authored-by: Oliver Klingefjord <oliver@klingefjord.com>
Co-authored-by: blob42 <contact@blob42.xyz>
Co-authored-by: blob42 <spike@w530>
Co-authored-by: Enrico Shippole <henryshippole@gmail.com>
Co-authored-by: Ibis Prevedello <ibiscp@gmail.com>
Co-authored-by: jped <jonathanped@gmail.com>
Co-authored-by: Justin Torre <justintorre75@gmail.com>
Co-authored-by: Ivan Vendrov <ivan@anthropic.com>
Co-authored-by: Sasmitha Manathunga <70096033+mmz-001@users.noreply.github.com>
Co-authored-by: Ankush Gola <9536492+agola11@users.noreply.github.com>
Co-authored-by: Matt Robinson <mrobinson@unstructuredai.io>
Co-authored-by: Jeff Huber <jeffchuber@gmail.com>
Co-authored-by: Akshay <64036106+akshayvkt@users.noreply.github.com>
Co-authored-by: Andrew Huang <jhuang16888@gmail.com>
Co-authored-by: rogerserper <124558887+rogerserper@users.noreply.github.com>
Co-authored-by: seanaedmiston <seane999@gmail.com>
Co-authored-by: Hasegawa Yuya <52068175+Hase-U@users.noreply.github.com>
Co-authored-by: Ivan Vendrov <ivendrov@gmail.com>
Co-authored-by: Chen Wu (吴尘) <henrychenwu@cmu.edu>
Co-authored-by: Dennis Antela Martinez <dennis.antela@gmail.com>
Co-authored-by: Maxime Vidal <max.vidal@hotmail.fr>
Co-authored-by: Rishabh Raizada <110235735+rishabh-ti@users.noreply.github.com>
2023-02-19 09:53:45 -08:00
CG80499
af8f5c1a49
Added constitutional chain. (#1147)
- Added self-critique constitutional chain based on this
[paper](https://www.anthropic.com/constitutional.pdf).
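A minimal sketch of assembling the chain (import paths, prompt, and principle text are assumptions for illustration):

```python
from langchain.chains import ConstitutionalChain, LLMChain
from langchain.chains.constitutional_ai.models import ConstitutionalPrinciple
from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate

llm = OpenAI(temperature=0)
qa_chain = LLMChain(
    llm=llm,
    prompt=PromptTemplate(input_variables=["question"], template="Q: {question}\nA:"),
)

# the chain critiques the base chain's output against each principle, then revises it
principle = ConstitutionalPrinciple(
    name="harmless",
    critique_request="Identify anything harmful or unethical in the answer.",
    revision_request="Rewrite the answer to remove harmful or unethical content.",
)
chain = ConstitutionalChain.from_llm(
    llm=llm, chain=qa_chain, constitutional_principles=[principle]
)
```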
2023-02-18 19:31:51 -08:00
Harrison Chase
a83ba44efa
Harrison/ver0089 (#1144) 2023-02-18 14:25:37 -08:00
Ankush Gola
7b5e160d28
Make Tools own model, add ToolKit Concept (#1095)
Follow-up of @hinthornw's PR:

- Migrate the Tool abstraction to a separate file (`BaseTool`).
- `Tool` implementation of `BaseTool` takes in function and coroutine to
more easily maintain backwards compatibility
- Add a Toolkit abstraction that can own the generation of tools around
a shared concept or state

---------

Co-authored-by: William FH <13333726+hinthornw@users.noreply.github.com>
Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
Co-authored-by: Francisco Ingham <fpingham@gmail.com>
Co-authored-by: Dhruv Anand <105786647+dhruv-anand-aintech@users.noreply.github.com>
Co-authored-by: cragwolfe <cragcw@gmail.com>
Co-authored-by: Anton Troynikov <atroyn@users.noreply.github.com>
Co-authored-by: Oliver Klingefjord <oliver@klingefjord.com>
Co-authored-by: William Fu-Hinthorn <whinthorn@Williams-MBP-3.attlocal.net>
Co-authored-by: Bruno Bornsztein <bruno.bornsztein@gmail.com>
2023-02-18 13:40:43 -08:00
Harrison Chase
45b5640fe5
fix sql (#1141) 2023-02-18 11:49:08 -08:00
Sam Hogan
85c1449a96
Fix typo in HyDE docs (#1142) 2023-02-18 11:48:46 -08:00
kekayan
9111f4ca8a
fix chatvectordbchain to use pinecone namespace (#1139)
In the similarity search, the Pinecone namespace is not used, which
makes the bot return _I don't know_ when the embeddings are stored in a
Pinecone namespace. Now we can query by passing the namespace
optionally.
```python
result = qa({"question": query, "chat_history": chat_history, "namespace": "01gshyhjcfgkq1q5wxjtm17gjh"})
```
2023-02-18 10:58:48 -08:00
Harrison Chase
fb3c73d194
add srt loader (#1140) 2023-02-18 10:58:39 -08:00
Francisco Ingham
3f29742adc
Sql alchemy commands used in table info (#1135)
This approach has several advantages:

* it improves the readability of the code
* removes incompatibilities between SQL dialects
* fixes a bug with `datetime` values in rows and `ast.literal_eval`

Huge thanks and credits to @jzluo for finding the weaknesses in the
current approach and for the thoughtful discussion on the best way to
implement this.

---------

Co-authored-by: Francisco Ingham <>
Co-authored-by: Jon Luo <20971593+jzluo@users.noreply.github.com>
2023-02-18 10:58:29 -08:00
Harrison Chase
483821ea3b
fix docs (#1133) 2023-02-18 08:13:54 -08:00
Harrison Chase
ee3590cb61
instruct embeddings docs (#1131) 2023-02-17 16:14:49 -08:00
Noah Gundotra
8c5fbab72d
[Integration Tests] Cast fake embeddings to ALL float values (#1102)
Pydantic validation breaks tests (for example, `test_qdrant.py`) because
fake embeddings contain an integer.

This PR casts the embeddings array to all floats.

Now the `qdrant` test passes: `poetry run pytest
tests/integration_tests/vectorstores/test_qdrant.py`
2023-02-17 15:18:09 -08:00
Harrison Chase
d5f3dfa1e1
Harrison/hn loader (#1130)
Co-authored-by: William X <william.y.xuan@gmail.com>
2023-02-17 15:15:02 -08:00
Tom Bocklisch
47c3221fda
Max marginal relevance search fails if there are not enough docs (#1117)
Implementation fails if there are not enough documents. Added the same
check as used for similarity search.

Current implementation raises
```  
File ".venv/lib/python3.9/site-packages/langchain/vectorstores/faiss.py", line 160, in max_marginal_relevance_search
    _id = self.index_to_docstore_id[i]
KeyError: -1
```
2023-02-17 15:12:31 -08:00
Harrison Chase
511d41114f
return source documents for chat vector db chain (#1128) 2023-02-17 13:40:52 -08:00
Jon Luo
c39ef70aa4
fix for database compatibility when getting table DDL (#1129)
#1081 introduced a method to get DDL (table definitions) in a manner
specific to sqlite3, thus breaking compatibility with other non-sqlite3
databases. This uses the sqlite3 command if the detected dialect is
sqlite, and otherwise uses the standard SQL `SHOW CREATE TABLE`. This
should fix #1103.
2023-02-17 13:39:44 -08:00
yakigac
1ed708391e
Fix a bug that shows "KeyError 'items'" (#1118)
Fix KeyError 'items' when no result is found.

## Problem

When no result is found for a query, Google search crashed with `KeyError
'items'`.

## Solution

I added a check for an empty response before accessing the 'items' key.
It will handle the case correctly.

## Other

my twitter: yakigac
(I don't mind even if you don't mention me for this PR. It's just that
last time my real name was shouted out :) )
2023-02-17 13:04:02 -08:00
Matt Robinson
2bee8d4941
feat: add support for .ppt files in UnstructuredPowerPointLoader (#1124)
###  Summary

Adds support for the older `.ppt` file format in the PowerPoint loader.

### Testing

The following should work on `unstructured==0.4.11` using the example
docs from the `unstructured` repo.

```python
from langchain.document_loaders import UnstructuredPowerPointLoader

filename = "../unstructured/example-docs/fake-power-point.pptx"
loader = UnstructuredPowerPointLoader(filename)
loader.load()

filename = "../unstructured/example-docs/fake-power-point.ppt"
loader = UnstructuredPowerPointLoader(filename)
loader.load()
```

Now downgrade `unstructured` to version `0.4.10`. The following should
work:

```python
from langchain.document_loaders import UnstructuredPowerPointLoader

filename = "../unstructured/example-docs/fake-power-point.pptx"
loader = UnstructuredPowerPointLoader(filename)
loader.load()
```

and the following should give you a `ValueError` and invite you to
upgrade `unstructured`.


```python
from langchain.document_loaders import UnstructuredPowerPointLoader

filename = "../unstructured/example-docs/fake-power-point.ppt"
loader = UnstructuredPowerPointLoader(filename)
loader.load()
```
2023-02-17 13:03:25 -08:00
Matt Robinson
b956070f08
docs: add an unstructured section to the ecosystem page (#1125)
### Summary

Adds an Unstructured section to the ecosystem page.
2023-02-17 13:02:23 -08:00
Hasegawa Yuya
383c67c1b2
Fix Issue #1100 (#1101)
https://github.com/hwchase17/langchain/issues/1100
When faiss data and doc.index were created by past versions, an error
occurs saying there is no such attribute. So I put a `hasattr` check in as a
simple solution.

However, increasing the number of such checks is not ideal, so I think
there is a better solution.


Also, the code for the batch process was left out, so I put it back in.
2023-02-17 00:53:16 -08:00
Harrison Chase
3f50feb280
fix telegram imports (#1110) 2023-02-17 00:53:01 -08:00
trigaten
6fafcd0a70
Strange behavior with LLM import requirements (#1104)
This import works fine:
```python
from langchain import Anthropic
```
This import does not:
```python
from langchain import AI21
```

```
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
ImportError: cannot import name 'AI21' from 'langchain' (/opt/anaconda3/envs/fed_nlp/lib/python3.9/site-packages/langchain/__init__.py)
```

I think there is a slight documentation inconsistency here:
https://langchain.readthedocs.io/en/latest/reference/modules/llms.html

This PR starts to solve that. Should all the import examples be
`from langchain.llms import X` instead of `from langchain import X`?
2023-02-16 23:13:34 -08:00
Kacper Łukawski
ab1a3cccac
Hotfix: Qdrant content retrieval (revert: #1088) (#1093)
PR #1088 introduced a bug in the Qdrant integration. This PR reverts those
changes and provides class attributes to ensure consistent payload keys.
In addition, an exception will be thrown if any of the texts is None
(which could have been the issue reported in #1087).
2023-02-16 12:46:06 -08:00
Harrison Chase
6322b6f657
bump version 0.0.88 (#1090) 2023-02-16 07:32:32 -08:00
Francisco Ingham
3462130e2d
Modify number of types of chains (#1089)
Changed number of types of chains to make it consistent with the rest of
the docs
2023-02-16 07:06:30 -08:00
Rishabh Raizada
5d11e5da40
Update qdrant.py (#1088)
Fixes #1087
2023-02-16 07:06:02 -08:00
Harrison Chase
7745505482
chat qa with sources (#1084) 2023-02-16 00:29:47 -08:00
Harrison Chase
badeeb37b0
fix stuff count (#1083) 2023-02-15 23:57:13 -08:00
Harrison Chase
971458c5de
docs for batch size (#1082) 2023-02-15 23:53:56 -08:00
Harrison Chase
5e10e19bfe
Harrison/align table (#1081)
Co-authored-by: Francisco Ingham <fpingham@gmail.com>
2023-02-15 23:53:37 -08:00
Harrison Chase
c60954d0f8
Harrison/telegram loader (#1080)
Co-authored-by: Maxime Vidal <max.vidal@hotmail.fr>
2023-02-15 23:24:32 -08:00
Dennis Antela Martinez
a1c296bc3c
docs: increase width (#1049)
This addresses #948.

I set the documentation max width to 2560px, but can be adjusted - see
screenshot below.

<img width="1741" alt="Screenshot 2023-02-14 at 13 05 57"
src="https://user-images.githubusercontent.com/23406704/218749076-ea51e90a-a220-4558-b4fe-5a95b39ebf15.png">
2023-02-15 23:07:01 -08:00
Harrison Chase
c96ac3e591
Harrison/semantic subset (#1079)
Co-authored-by: Chen Wu (吴尘) <henrychenwu@cmu.edu>
2023-02-15 23:06:48 -08:00
Harrison Chase
19c2797bed
add anthropic example (#1041)
Co-authored-by: Ivan Vendrov <ivendrov@gmail.com>
Co-authored-by: Sasmitha Manathunga <70096033+mmz-001@users.noreply.github.com>
2023-02-15 23:04:28 -08:00
3ecdea8be4
SearxNG meta search api helper (#854)
This is a work-in-progress PR to track my progress.

## TODO:

- [x]  Get results using the specified searx host
- [x]  Prioritize returning an  `answer`  or results otherwise
    - [ ] expose the field `infobox` when available
    - [ ] expose `score` of result to help agent's decision
- [ ] expose the `suggestions` field to agents so they could try new
queries if no results are found with the original query?

- [ ] Dynamic tool description for agents ?
- Searx offers many engines and a search syntax that agents can take
advantage of. It would be nice to generate a dynamic Tool description so
that it can be used many times as a tool but for different purposes.

- [x]  Limit number of results
- [ ]   Implement paging
- [x]  Mirror the usage of the Google Search tool
- [x] easy selection of search engines
- [x]  Documentation
    - [ ] update HowTo guide notebook on Search Tools
- [ ] Handle async 
- [ ]  Tests

###  Add examples / documentation on possible uses with
 - [ ]  getting factual answers with `!wiki` option and `infoboxes`
 - [ ]  getting `suggestions`
 - [ ]  getting `corrections`

---------

Co-authored-by: blob42 <spike@w530>
Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
2023-02-15 23:03:57 -08:00
Hasegawa Yuya
e08961ab25
Fixed openai embeddings to be safe by batching them based on token size calculation. (#991)
I modified the logic of the batch calculation for embedding according to
this cookbook:

https://github.com/openai/openai-cookbook/blob/main/examples/Embedding_long_inputs.ipynb
2023-02-15 23:02:32 -08:00
seanaedmiston
f0a258555b
Support similarity search by vector (in FAISS) (#961)
Alternate implementation to PR #960. Again, only FAISS is implemented. If
accepted, I can add this to other vectorstores, or leave them as
NotImplemented? Suggestions welcome...
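A minimal sketch of searching by a raw embedding vector (texts and query are illustrative):

```python
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores import FAISS

embeddings = OpenAIEmbeddings()
db = FAISS.from_texts(["hello world", "foo bar"], embeddings)

# pass an embedding vector directly instead of a query string
vector = embeddings.embed_query("hello")
docs = db.similarity_search_by_vector(vector, k=1)
```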
2023-02-15 22:50:00 -08:00
Jonathan Pedoeem
05ad399abe
Update PromptLayerOpenAI LLM to include support for ASYNC API (#1066)
This PR updates `PromptLayerOpenAI` to now support requests using the
[Async
API](https://langchain.readthedocs.io/en/latest/modules/llms/async_llm.html)
It also updates the documentation on Async API to let users know that
PromptLayerOpenAI also supports this.

`PromptLayerOpenAI` now redefines `_agenerate` in a similar way to how it
redefines `_generate`.
2023-02-15 22:48:09 -08:00
Harrison Chase
98186ef180
Harrison/evernote nb (#1078)
Co-authored-by: Akshay <64036106+akshayvkt@users.noreply.github.com>
2023-02-15 22:47:30 -08:00
rogerserper
e46cd3b7db
Google Search API integration with serper.dev (wrapper, tests, docs, … (#909)
Adds Google Search integration with [Serper](https://serper.dev), a
low-cost alternative to SerpAPI (10x cheaper + generous free tier).
Includes documentation, tests and examples. Hopefully I am not missing
anything.

Developers can sign up for a free account at
[serper.dev](https://serper.dev) and obtain an api key.

## Usage

```python
from langchain.utilities import GoogleSerperAPIWrapper
from langchain.llms.openai import OpenAI
from langchain.agents import initialize_agent, Tool

import os
os.environ["SERPER_API_KEY"] = ""
os.environ['OPENAI_API_KEY'] = ""

llm = OpenAI(temperature=0)
search = GoogleSerperAPIWrapper()
tools = [
    Tool(
        name="Intermediate Answer",
        func=search.run
    )
]

self_ask_with_search = initialize_agent(tools, llm, agent="self-ask-with-search", verbose=True)
self_ask_with_search.run("What is the hometown of the reigning men's U.S. Open champion?")
```

### Output
```
Entering new AgentExecutor chain...
 Yes.
Follow up: Who is the reigning men's U.S. Open champion?
Intermediate answer: Current champions Carlos Alcaraz, 2022 men's singles champion.
Follow up: Where is Carlos Alcaraz from?
Intermediate answer: El Palmar, Spain
So the final answer is: El Palmar, Spain

> Finished chain.

'El Palmar, Spain'
```
2023-02-15 22:47:17 -08:00
Harrison Chase
52753066ef
Harrison/handle stop tokens ai21 (#1077)
Co-authored-by: Andrew Huang <jhuang16888@gmail.com>
2023-02-15 22:44:55 -08:00
Akshay
d8ed286200
Update and rename everynote.py to evernote.py (#1060)
Updating this base file as well as the .ipynb file of the example on the
website:

https://github.com/hwchase17/langchain/compare/master...akshayvkt:langchain:patch-1

https://langchain.readthedocs.io/en/latest/modules/document_loaders/examples/everynote.html
2023-02-15 22:41:42 -08:00
Jeff Huber
34cba2da32
Fix typo in integration with Chroma (#1070)
We introduced a breaking change but missed this call. This PR fixes
`langchain` to work with upstream `chroma`.
2023-02-15 22:37:58 -08:00
Jonathan Pedoeem
05df480376
Update PromptLayerOpenAI LLM usage instructions in documentation (#1053)
This PR updates the usage instructions for PromptLayerOpenAI in
Langchain's documentation. The updated instructions provide more detail
and conform better to the style of other LLM integration documentation
pages.

No code changes were made in this PR, only improvements to the
documentation. This update will make it easier for users to understand
how to use `PromptLayerOpenAI`
2023-02-15 22:37:48 -08:00
Matt Robinson
3ea1e5af1e
feat: added element metadata to unstructured loader (#1068)
### Summary

Adds tracked metadata from `unstructured` elements to the document
metadata when `UnstructuredFileLoader` is used in `"elements"` mode.
Tracked metadata is available in `unstructured>=0.4.9`, but the code is
written for backward compatibility with older `unstructured` versions.

### Testing

Before running, make sure to upgrade to `unstructured==0.4.9`. In the
code snippet below, you should see `page_number`, `filename`, and
`category` in the metadata for each document. `docs[0]` should have
`page_number: 1` and `docs[-1]` should have `page_number: 2`. The example
document is `layout-parser-paper-fast.pdf` from the [`unstructured`
sample
docs](https://github.com/Unstructured-IO/unstructured/tree/main/example-docs).

```python
from langchain.document_loaders import UnstructuredFileLoader
loader = UnstructuredFileLoader(file_path="layout-parser-paper-fast.pdf", mode="elements")
docs = loader.load()
```
2023-02-15 22:36:18 -08:00
Harrison Chase
bac676c8e7
bump version (#1057) 2023-02-15 07:09:10 -08:00
Ankush Gola
d8ac274fc2
add to async chain notebook (#1056) 2023-02-14 18:20:38 -08:00
Ankush Gola
caa8e4742e
Enable streaming for OpenAI LLM (#986)
* Support a callback `on_llm_new_token` that users can implement when
`OpenAI.streaming` is set to `True`
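A minimal sketch of enabling streaming (the stdout handler and callback-manager names are assumptions about this era's API):

```python
from langchain.callbacks.base import CallbackManager
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler
from langchain.llms import OpenAI

llm = OpenAI(
    streaming=True,
    callback_manager=CallbackManager([StreamingStdOutCallbackHandler()]),
    temperature=0,
)
llm("Write me a haiku about streaming.")  # tokens print as they arrive
```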
2023-02-14 15:06:14 -08:00
Harrison Chase
f05f025e41
bump version to 0086 (#1050) 2023-02-14 07:14:40 -08:00
Sasmitha Manathunga
c67c5383fd
docs: fix typo in notebook (#1046) 2023-02-14 07:06:08 -08:00
Harrison Chase
88bebb4caa
Harrison/llm integrations (#1039)
Co-authored-by: jped <jonathanped@gmail.com>
Co-authored-by: Justin Torre <justintorre75@gmail.com>
Co-authored-by: Ivan Vendrov <ivan@anthropic.com>
2023-02-13 22:06:25 -08:00
Harrison Chase
ec727bf166
Align table info (#999) (#1034)
Currently the chain is getting the column names and types on the one
side and the example rows on the other. It is easier for the LLM to read
the table information if the column names and examples are shown together,
so that it can easily understand which columns the examples refer to. For
an instantiation of this, please refer to the changes in the
`sqlite.ipynb` notebook.

Also changed `eval` to `ast.literal_eval` when interpreting the results
from the sample row query, since it is better practice.

---------

Co-authored-by: Francisco Ingham <>

---------

Co-authored-by: Francisco Ingham <fpingham@gmail.com>
2023-02-13 21:48:41 -08:00
Harrison Chase
8c45f06d58
Harrison/standarize prompt loading (#1036)
Co-authored-by: Ibis Prevedello <ibiscp@gmail.com>
2023-02-13 21:48:09 -08:00
Enrico Shippole
f30dcc6359
Add GooseAI, CerebriumAI, Petals, ForefrontAI (#981)
Add GooseAI, CerebriumAI, Petals, ForefrontAI
2023-02-13 21:20:19 -08:00
Anton Troynikov
d43d430d86
Chroma persistence (#1028)
This PR adds persistence to the Chroma vector store.

Users can supply a `persist_directory` with any of the `Chroma` creation
methods. If supplied, the store will be automatically persisted at that
directory.

If a user creates a new `Chroma` instance with the same persistence
directory, it will get loaded up automatically. If they use `from_texts`
or `from_documents` in this way, the documents will be loaded into the
existing store.

There is the chance of some funky behavior if the user passes a
different embedding function from the one used to create the collection
- we will make this easier in future updates. For now, we log a warning.
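A minimal sketch of the behavior described above (texts and directory are illustrative):

```python
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores import Chroma

embeddings = OpenAIEmbeddings()

# supplying persist_directory makes the store persist automatically
db = Chroma.from_texts(["hello world"], embeddings, persist_directory="./chroma_db")
db.persist()

# a new instance pointed at the same directory loads the existing store
db2 = Chroma(persist_directory="./chroma_db", embedding_function=embeddings)
```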
2023-02-13 21:09:06 -08:00
Harrison Chase
012a6dfb16
Harrison/makefile (#1033)
Co-authored-by: blob42 <contact@blob42.xyz>
Co-authored-by: blob42 <spike@w530>
2023-02-13 21:08:47 -08:00
333 changed files with 21332 additions and 2698 deletions

144
.dockerignore Normal file

@ -0,0 +1,144 @@
.vscode/
.idea/
# Byte-compiled / optimized / DLL files
__pycache__/
*.py[cod]
*$py.class
# C extensions
*.so
# Distribution / packaging
.Python
build/
develop-eggs/
dist/
downloads/
eggs/
.eggs/
lib/
lib64/
parts/
sdist/
var/
wheels/
pip-wheel-metadata/
share/python-wheels/
*.egg-info/
.installed.cfg
*.egg
MANIFEST
# PyInstaller
# Usually these files are written by a python script from a template
# before PyInstaller builds the exe, so as to inject date/other infos into it.
*.manifest
*.spec
# Installer logs
pip-log.txt
pip-delete-this-directory.txt
# Unit test / coverage reports
htmlcov/
.tox/
.nox/
.coverage
.coverage.*
.cache
nosetests.xml
coverage.xml
*.cover
*.py,cover
.hypothesis/
.pytest_cache/
# Translations
*.mo
*.pot
# Django stuff:
*.log
local_settings.py
db.sqlite3
db.sqlite3-journal
# Flask stuff:
instance/
.webassets-cache
# Scrapy stuff:
.scrapy
# Sphinx documentation
docs/_build/
# PyBuilder
target/
# Jupyter Notebook
.ipynb_checkpoints
notebooks/
# IPython
profile_default/
ipython_config.py
# pyenv
.python-version
# pipenv
# According to pypa/pipenv#598, it is recommended to include Pipfile.lock in version control.
# However, in case of collaboration, if having platform-specific dependencies or dependencies
# having no cross-platform support, pipenv may install dependencies that don't work, or not
# install all needed dependencies.
#Pipfile.lock
# PEP 582; used by e.g. github.com/David-OConnor/pyflow
__pypackages__/
# Celery stuff
celerybeat-schedule
celerybeat.pid
# SageMath parsed files
*.sage.py
# Environments
.env
.venv
.venvs
env/
venv/
ENV/
env.bak/
venv.bak/
# Spyder project settings
.spyderproject
.spyproject
# Rope project settings
.ropeproject
# mkdocs documentation
/site
# mypy
.mypy_cache/
.dmypy.json
dmypy.json
# Pyre type checker
.pyre/
# macOS display setting files
.DS_Store
# docker
docker/
!docker/assets/
.dockerignore
docker.build


@ -31,4 +31,4 @@ jobs:
run: poetry install
- name: Run unit tests
run: |
make tests
make test

2
.gitignore vendored

@ -106,6 +106,7 @@ celerybeat.pid
# Environments
.env
!docker/.env
.venv
.venvs
env/
@ -134,3 +135,4 @@ dmypy.json
# macOS display setting files
.DS_Store
docker.build


@ -77,6 +77,8 @@ Now, you should be able to run the common tasks in the following section.
## ✅Common Tasks
Type `make` for a list of common tasks.
### Code Formatting
Formatting for this project is done via a combination of [Black](https://black.readthedocs.io/en/stable/) and [isort](https://pycqa.github.io/isort/).
@ -116,7 +118,7 @@ Unit tests cover modular logic that does not require calls to outside APIs.
To run unit tests:
```bash
make tests
make test
```
If you add new logic, please add a unit test.
@ -149,6 +151,10 @@ poetry run jupyter notebook
When you run `poetry install`, the `langchain` package is installed as editable in the virtualenv, so your new logic can be imported into the notebook.
## Using Docker
Refer to [DOCKER.md](docker/DOCKER.md) for more information.
## Documentation
### Contribute Documentation


@ -1,4 +1,9 @@
.PHONY: format lint tests tests_watch integration_tests
.PHONY: all clean format lint test tests test_watch integration_tests help
GIT_HASH ?= $(shell git rev-parse --short HEAD)
LANGCHAIN_VERSION := $(shell grep '^version' pyproject.toml | cut -d '=' -f2 | tr -d '"')
all: help
coverage:
poetry run pytest --cov \
@ -6,6 +11,8 @@ coverage:
--cov-report xml \
--cov-report term-missing:skip-covered
clean: docs_clean
docs_build:
cd docs && poetry run make html
@ -17,19 +24,50 @@ docs_linkcheck:
format:
poetry run black .
poetry run isort .
poetry run ruff --select I --fix .
lint:
poetry run mypy .
poetry run black . --check
poetry run isort . --check
poetry run flake8 .
poetry run ruff .
tests:
test:
poetry run pytest tests/unit_tests
tests_watch:
tests: test
test_watch:
poetry run ptw --now . -- tests/unit_tests
integration_tests:
poetry run pytest tests/integration_tests
help:
@echo '----'
@echo 'coverage - run unit tests and generate coverage report'
@echo 'docs_build - build the documentation'
@echo 'docs_clean - clean the documentation build artifacts'
@echo 'docs_linkcheck - run linkchecker on the documentation'
ifneq ($(shell command -v docker 2> /dev/null),)
@echo 'docker - build and run the docker dev image'
@echo 'docker.run - run the docker dev image'
@echo 'docker.jupyter - start a jupyter notebook inside container'
@echo 'docker.build - build the docker dev image'
@echo 'docker.force_build - force a rebuild'
@echo 'docker.test - run the unit tests in docker'
@echo 'docker.lint - run the linters in docker'
@echo 'docker.clean - remove the docker dev image'
endif
@echo 'format - run code formatters'
@echo 'lint - run linters'
@echo 'test - run unit tests'
@echo 'test_watch - run unit tests in watch mode'
@echo 'integration_tests - run integration tests'
# include the following makefile if the docker executable is available
ifeq ($(shell command -v docker 2> /dev/null),)
$(info Docker not found, skipping docker-related targets)
else
include docker/Makefile
endif


@ -1,11 +1,15 @@
# 🦜️🔗 LangChain
# 🦜️🔗 LangChain - Docker
⚡ Building applications with LLMs through composability ⚡
WIP: This is a fork of langchain focused on implementing a docker wrapper and
toolchain. The goal is to make it easy to use LLM chains running inside a
container, build custom docker based tools and let agents run arbitrary
untrusted code inside.
[![lint](https://github.com/hwchase17/langchain/actions/workflows/lint.yml/badge.svg)](https://github.com/hwchase17/langchain/actions/workflows/lint.yml) [![test](https://github.com/hwchase17/langchain/actions/workflows/test.yml/badge.svg)](https://github.com/hwchase17/langchain/actions/workflows/test.yml) [![linkcheck](https://github.com/hwchase17/langchain/actions/workflows/linkcheck.yml/badge.svg)](https://github.com/hwchase17/langchain/actions/workflows/linkcheck.yml) [![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT) [![Twitter](https://img.shields.io/twitter/url/https/twitter.com/langchainai.svg?style=social&label=Follow%20%40LangChainAI)](https://twitter.com/langchainai) [![](https://dcbadge.vercel.app/api/server/6adMQxSpJS?compact=true&style=flat)](https://discord.gg/6adMQxSpJS)
Currently exploring the following:
**Production Support:** As you move your LangChains into production, we'd love to offer more comprehensive support.
Please fill out [this form](https://forms.gle/57d8AmXBYp8PP8tZA) and we'll set up a dedicated support Slack channel.
- Docker wrapper for LLMs and chains
- Creating a toolchain for building docker based LLM tools.
- Building agents that can run arbitrary untrusted code inside a container.
## Quick Install

13
docker/.env Normal file

@ -0,0 +1,13 @@
# python env
PYTHON_VERSION=3.10
# -E flag is required
# comment the following line to only install dev dependencies
POETRY_EXTRA_PACKAGES="-E all"
# at least one group needed
POETRY_DEPENDENCIES="dev,test,lint,typing"
# langchain env. warning: these variables will be baked into the docker image !
OPENAI_API_KEY=${OPENAI_API_KEY:-}
SERPAPI_API_KEY=${SERPAPI_API_KEY:-}

53
docker/DOCKER.md Normal file

@ -0,0 +1,53 @@
# Using Docker
To quickly get started, run the command `make docker`.
If docker is installed, the Makefile will export extra targets in the format `docker.*` to build and run the docker image. Type `make` for a list of available tasks.
There is a basic `docker-compose.yml` in the docker directory.
## Building the development image
Using `make docker` will build the dev image if it does not exist, then drops
you inside the container with the langchain environment available in the shell.
### Customizing the image and installed dependencies
The image is built with a default python version and all extras and dev
dependencies. It can be customized by changing the variables in the [.env](/docker/.env)
file.
If you don't need all the `extra` dependencies a slimmer image can be obtained by
commenting out `POETRY_EXTRA_PACKAGES` in the [.env](docker/.env) file.
### Image caching
The Dockerfile is optimized to cache the poetry install step. A rebuild is triggered when there is a change to the source code.
## Example Usage
All commands from langchain's python environment are available by default in the container.
A few examples:
```bash
# run jupyter notebook
docker run --rm -it IMG jupyter notebook
# run ipython
docker run --rm -it IMG ipython
# start web server
docker run --rm -p 8888:8888 IMG python -m http.server 8888
```
## Testing / Linting
Tests and lints are run using your local source directory that is mounted on the volume /src.
Run unit tests in the container with `make docker.test`.
Run the linting and formatting checks with `make docker.lint`.
Note: this task can run in parallel using `make -j4 docker.lint`.

104
docker/Dockerfile Normal file

@ -0,0 +1,104 @@
# vim: ft=dockerfile
#
# see also: https://github.com/python-poetry/poetry/discussions/1879
# - with https://github.com/bneijt/poetry-lock-docker
# see https://github.com/thehale/docker-python-poetry
# see https://github.com/max-pfeiffer/uvicorn-poetry
# use by default the slim version of python
ARG PYTHON_IMAGE_TAG=slim
ARG PYTHON_VERSION=${PYTHON_VERSION:-3.11.2}
####################
# Base Environment
####################
FROM python:$PYTHON_VERSION-$PYTHON_IMAGE_TAG AS lchain-base
ARG UID=1000
ARG USERNAME=lchain
ENV USERNAME=$USERNAME
RUN groupadd -g ${UID} $USERNAME
RUN useradd -l -m -u ${UID} -g ${UID} $USERNAME
# used for mounting source code
RUN mkdir /src
VOLUME /src
#######################
## Poetry Builder Image
#######################
FROM lchain-base AS lchain-base-builder
ARG POETRY_EXTRA_PACKAGES=$POETRY_EXTRA_PACKAGES
ARG POETRY_DEPENDENCIES=$POETRY_DEPENDENCIES
ENV HOME=/root
ENV POETRY_HOME=/root/.poetry
ENV POETRY_VIRTUALENVS_IN_PROJECT=false
ENV POETRY_NO_INTERACTION=1
ENV CACHE_DIR=$HOME/.cache
ENV POETRY_CACHE_DIR=$CACHE_DIR/pypoetry
ENV PATH="$POETRY_HOME/bin:$PATH"
WORKDIR /root
RUN apt-get update && \
apt-get install -y \
build-essential \
git \
curl
SHELL ["/bin/bash", "-o", "pipefail", "-c"]
RUN mkdir -p $CACHE_DIR
## setup poetry
RUN curl -sSL -o $CACHE_DIR/pypoetry-installer.py https://install.python-poetry.org/
RUN python3 $CACHE_DIR/pypoetry-installer.py
# # Copy poetry files
COPY poetry.* pyproject.toml ./
RUN mkdir /pip-prefix
RUN poetry export $POETRY_EXTRA_PACKAGES --with $POETRY_DEPENDENCIES -f requirements.txt --output requirements.txt --without-hashes && \
pip install --no-cache-dir --disable-pip-version-check --prefix /pip-prefix -r requirements.txt
# add custom motd message
COPY docker/assets/etc/motd /tmp/motd
RUN cat /tmp/motd > /etc/motd
RUN printf "\n%s\n%s\n" "$(poetry version)" "$(python --version)" >> /etc/motd
###################
## Runtime Image
###################
FROM lchain-base AS lchain
#jupyter port
EXPOSE 8888
COPY docker/assets/entry.sh /entry
RUN chmod +x /entry
COPY --from=lchain-base-builder /etc/motd /etc/motd
COPY --from=lchain-base-builder /usr/bin/git /usr/bin/git
USER ${USERNAME:-lchain}
ENV HOME /home/$USERNAME
WORKDIR /home/$USERNAME
COPY --chown=lchain:lchain --from=lchain-base-builder /pip-prefix $HOME/.local/
COPY . .
SHELL ["/bin/bash", "-o", "pipefail", "-c"]
RUN pip install --no-deps --disable-pip-version-check --no-cache-dir -e .
ENTRYPOINT ["/entry"]

84
docker/Makefile Normal file

@ -0,0 +1,84 @@
# do not call this makefile directly; it is included in the main Makefile
.PHONY: docker docker.jupyter docker.run docker.force_build docker.clean \
docker.test docker.lint docker.lint.mypy docker.lint.black \
docker.lint.isort docker.lint.flake
# read python version from .env file ignoring comments
PYTHON_VERSION := $(shell grep PYTHON_VERSION docker/.env | cut -d '=' -f2)
POETRY_EXTRA_PACKAGES := $(shell grep '^[^#]*POETRY_EXTRA_PACKAGES' docker/.env | cut -d '=' -f2)
POETRY_DEPENDENCIES := $(shell grep 'POETRY_DEPENDENCIES' docker/.env | cut -d '=' -f2)
DOCKER_SRC := $(shell find docker -type f)
DOCKER_IMAGE_NAME = langchain/dev
# SRC is all files matched by the git ls-files command
SRC := $(shell git ls-files -- '*' ':!:docker/*')
# set DOCKER_BUILD_PROGRESS=plain to see detailed build progress
DOCKER_BUILD_PROGRESS ?= auto
# extra message to show when entering the docker container
DOCKER_MOTD := docker/assets/etc/motd
ROOTDIR := $(shell git rev-parse --show-toplevel)
DOCKER_LINT_CMD = docker run --rm -i -u lchain -v $(ROOTDIR):/src $(DOCKER_IMAGE_NAME):$(GIT_HASH)
docker: docker.run
docker.run: docker.build
@echo "Docker image: $(DOCKER_IMAGE_NAME):$(GIT_HASH)"
docker run --rm -it -u lchain -v $(ROOTDIR):/src $(DOCKER_IMAGE_NAME):$(GIT_HASH)
docker.jupyter: docker.build
docker run --rm -it -v $(ROOTDIR):/src $(DOCKER_IMAGE_NAME):$(GIT_HASH) jupyter notebook
docker.build: $(SRC) $(DOCKER_SRC) $(DOCKER_MOTD)
ifdef DOCKER_BUILDKIT
docker buildx build --build-arg PYTHON_VERSION=$(PYTHON_VERSION) \
--build-arg POETRY_EXTRA_PACKAGES=$(POETRY_EXTRA_PACKAGES) \
--build-arg POETRY_DEPENDENCIES=$(POETRY_DEPENDENCIES) \
--progress=$(DOCKER_BUILD_PROGRESS) \
$(BUILD_FLAGS) -f docker/Dockerfile -t $(DOCKER_IMAGE_NAME):$(GIT_HASH) .
else
docker build --build-arg PYTHON_VERSION=$(PYTHON_VERSION) \
--build-arg POETRY_EXTRA_PACKAGES=$(POETRY_EXTRA_PACKAGES) \
--build-arg POETRY_DEPENDENCIES=$(POETRY_DEPENDENCIES) \
$(BUILD_FLAGS) -f docker/Dockerfile -t $(DOCKER_IMAGE_NAME):$(GIT_HASH) .
endif
docker tag $(DOCKER_IMAGE_NAME):$(GIT_HASH) $(DOCKER_IMAGE_NAME):latest
@touch $@ # this prevents docker from rebuilding dependencies that have not
@ # changed. Remove the file `docker/docker.build` to force a rebuild.
docker.force_build: $(DOCKER_SRC)
@rm -f docker.build
@$(MAKE) docker.build BUILD_FLAGS=--no-cache
docker.clean:
docker rmi $(DOCKER_IMAGE_NAME):$(GIT_HASH) $(DOCKER_IMAGE_NAME):latest
docker.test: docker.build
docker run --rm -it -u lchain -v $(ROOTDIR):/src $(DOCKER_IMAGE_NAME):$(GIT_HASH) \
pytest /src/tests/unit_tests
# this assumes that the docker image has been built
docker.lint: docker.lint.mypy docker.lint.black docker.lint.isort \
docker.lint.flake
# these can run in parallel with -j[njobs]
docker.lint.mypy:
@$(DOCKER_LINT_CMD) mypy /src
@printf "\t%s\n" "mypy ... "
docker.lint.black:
@$(DOCKER_LINT_CMD) black /src --check
@printf "\t%s\n" "black ... "
docker.lint.isort:
@$(DOCKER_LINT_CMD) isort /src --check
@printf "\t%s\n" "isort ... "
docker.lint.flake:
@$(DOCKER_LINT_CMD) flake8 /src
@printf "\t%s\n" "flake8 ... "

10
docker/assets/entry.sh Normal file
View File

@ -0,0 +1,10 @@
#!/usr/bin/env bash
export PATH=$HOME/.local/bin:$PATH
if [ -z "$1" ]; then
    cat /etc/motd
    exec /bin/bash
fi
exec "$@"

8
docker/assets/etc/motd Normal file
View File

@ -0,0 +1,8 @@
All dependencies have been installed in the current shell. There is no
virtualenv and no need for `poetry` inside the container.
Running the command `make docker.run` at the root of the project will
build the container the first time. Subsequent runs will use the cached
image; a rebuild happens whenever the source code changes.
Your local source directory has been mounted to the /src directory.

17
docker/docker-compose.yml Normal file
View File

@ -0,0 +1,17 @@
version: "3.7"
services:
langchain:
hostname: langchain
image: langchain/dev:latest
build:
context: ../
dockerfile: docker/Dockerfile
args:
PYTHON_VERSION: ${PYTHON_VERSION}
POETRY_EXTRA_PACKAGES: ${POETRY_EXTRA_PACKAGES}
POETRY_DEPENDENCIES: ${POETRY_DEPENDENCIES}
restart: unless-stopped
ports:
- 127.0.0.1:8888:8888

BIN
docs/_static/HeliconeDashboard.png vendored Normal file (binary image, 235 KiB)

BIN
docs/_static/HeliconeKeys.png vendored Normal file (binary image, 148 KiB)

View File

@ -1,3 +1,13 @@
pre {
  white-space: break-spaces;
}

@media (min-width: 1200px) {
  .container,
  .container-lg,
  .container-md,
  .container-sm,
  .container-xl {
    max-width: 2560px !important;
  }
}

25
docs/ecosystem/atlas.md Normal file
View File

@ -0,0 +1,25 @@
# AtlasDB
This page covers how to use Nomic's Atlas ecosystem within LangChain.
It is broken into two parts: installation and setup, and then references to specific Atlas wrappers.
## Installation and Setup
- Install the Python package with `pip install nomic`
- Nomic is also included in LangChain's poetry extras: `poetry install -E all`
## Wrappers
### VectorStore
There exists a wrapper around the Atlas neural database, allowing you to use it as a vectorstore.
This vectorstore also gives you full access to the underlying AtlasProject object, which will allow you to use the full range of Atlas map interactions, such as bulk tagging and automatic topic modeling.
Please see [the Nomic docs](https://docs.nomic.ai/atlas_api.html) for more detailed information.
To import this vectorstore:
```python
from langchain.vectorstores import AtlasDB
```
For a more detailed walkthrough of the AtlasDB wrapper, see [this notebook](../modules/indexes/examples/vectorstores.ipynb)

View File

@ -0,0 +1,79 @@
# Banana
This page covers how to use the Banana ecosystem within LangChain.
It is broken into two parts: installation and setup, and then references to specific Banana wrappers.
## Installation and Setup
- Install with `pip3 install banana-dev`
- Get a Banana API key and set it as an environment variable (`BANANA_API_KEY`)
## Define your Banana Template
If you want to use an available language model template you can find one [here](https://app.banana.dev/templates/conceptofmind/serverless-template-palmyra-base).
This template uses the Palmyra-Base model by [Writer](https://writer.com/product/api/).
You can check out an example Banana repository [here](https://github.com/conceptofmind/serverless-template-palmyra-base).
## Build the Banana app
Banana apps must include the "output" key in the returned JSON.
The response structure is rigid:
```python
# Return the results as a dictionary
result = {'output': result}
```
An example inference function would be:
```python
def inference(model_inputs: dict) -> dict:
    global model
    global tokenizer

    # Parse out your arguments
    prompt = model_inputs.get('prompt', None)
    if prompt is None:
        return {'message': "No prompt provided"}

    # Run the model
    input_ids = tokenizer.encode(prompt, return_tensors='pt').cuda()
    output = model.generate(
        input_ids,
        max_length=100,
        do_sample=True,
        top_k=50,
        top_p=0.95,
        num_return_sequences=1,
        temperature=0.9,
        early_stopping=True,
        no_repeat_ngram_size=3,
        num_beams=5,
        length_penalty=1.5,
        repetition_penalty=1.5,
        bad_words_ids=[[tokenizer.encode(' ', add_prefix_space=True)[0]]]
    )
    result = tokenizer.decode(output[0], skip_special_tokens=True)

    # Return the results as a dictionary
    result = {'output': result}
    return result
```
You can find a full example of a Banana app [here](https://github.com/conceptofmind/serverless-template-palmyra-base/blob/main/app.py).
## Wrappers
### LLM
There exists a Banana LLM wrapper, which you can access with
```python
from langchain.llms import Banana
```
You need to provide a model key located in the dashboard:
```python
llm = Banana(model_key="YOUR_MODEL_KEY")
```

View File

@ -0,0 +1,17 @@
# CerebriumAI
This page covers how to use the CerebriumAI ecosystem within LangChain.
It is broken into two parts: installation and setup, and then references to specific CerebriumAI wrappers.
## Installation and Setup
- Install with `pip install cerebrium`
- Get a CerebriumAI API key and set it as an environment variable (`CEREBRIUMAI_API_KEY`)
## Wrappers
### LLM
There exists a CerebriumAI LLM wrapper, which you can access with
```python
from langchain.llms import CerebriumAI
```
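A minimal usage sketch; the endpoint URL below is a placeholder for the REST endpoint of a model you have already deployed on Cerebrium:
```python
import os

from langchain.llms import CerebriumAI

os.environ["CEREBRIUMAI_API_KEY"] = "YOUR_API_KEY"  # placeholder

# The endpoint URL is a placeholder for your deployed model
llm = CerebriumAI(endpoint_url="https://run.cerebrium.ai/YOUR-MODEL/predict")
print(llm("Tell me a joke"))
```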

View File

@ -17,4 +17,4 @@ To import this vectorstore:
from langchain.vectorstores import Chroma
```
For a more detailed walkthrough of the Chroma wrapper, see [this notebook](../modules/utils/combine_docs_examples/vectorstores.ipynb)
For a more detailed walkthrough of the Chroma wrapper, see [this notebook](../modules/indexes/examples/vectorstores.ipynb)

View File

@ -22,4 +22,4 @@ There exists an Cohere Embeddings wrapper, which you can access with
```python
from langchain.embeddings import CohereEmbeddings
```
For a more detailed walkthrough of this, see [this notebook](../modules/utils/combine_docs_examples/embeddings.ipynb)
For a more detailed walkthrough of this, see [this notebook](../modules/indexes/examples/embeddings.ipynb)

View File

@ -0,0 +1,17 @@
# DeepInfra
This page covers how to use the DeepInfra ecosystem within LangChain.
It is broken into two parts: installation and setup, and then references to specific DeepInfra wrappers.
## Installation and Setup
- Get your DeepInfra API key from [here](https://deepinfra.com/).
- Set it as an environment variable (`DEEPINFRA_API_TOKEN`)
## Wrappers
### LLM
There exists a DeepInfra LLM wrapper, which you can access with
```python
from langchain.llms import DeepInfra
```
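A minimal usage sketch; the model name is illustrative and can be any model hosted on DeepInfra:
```python
import os

from langchain.llms import DeepInfra

os.environ["DEEPINFRA_API_TOKEN"] = "YOUR_API_TOKEN"  # placeholder

# model_id is illustrative; any DeepInfra-hosted model should work
llm = DeepInfra(model_id="google/flan-t5-xl")
print(llm("What is the capital of France?"))
```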

View File

@ -0,0 +1,25 @@
# Deep Lake
This page covers how to use the Deep Lake ecosystem within LangChain.
It is broken into two parts: installation and setup, and then references to specific Deep Lake wrappers. For more information:
1. Here are the [whitepaper](https://www.deeplake.ai/whitepaper) and [academic paper](https://arxiv.org/pdf/2209.10785.pdf) for Deep Lake
2. Here is a set of additional resources available for review: [Deep Lake](https://github.com/activeloopai/deeplake), [Getting Started](https://docs.activeloop.ai/getting-started) and [Tutorials](https://docs.activeloop.ai/hub-tutorials)
## Installation and Setup
- Install the Python package with `pip install deeplake`
## Wrappers
### VectorStore
There exists a wrapper around Deep Lake, a data lake for Deep Learning applications, allowing you to use it as a vectorstore (for now), whether for semantic search or example selection.
To import this vectorstore:
```python
from langchain.vectorstores import DeepLake
```
For a more detailed walkthrough of the Deep Lake wrapper, see [this notebook](../modules/indexes/vectorstore_examples/deeplake.ipynb)
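A minimal sketch of loading texts into Deep Lake and querying them; `from_texts` comes from the common vectorstore interface, while the OpenAI embedding model and sample texts are assumptions:
```python
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores import DeepLake

# Sample texts; assumes OPENAI_API_KEY is set in the environment
texts = [
    "Deep Lake is a data lake for deep learning applications.",
    "LangChain provides a standard vectorstore interface.",
]
db = DeepLake.from_texts(texts, OpenAIEmbeddings())
docs = db.similarity_search("What is Deep Lake?")
print(docs[0].page_content)
```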

View File

@ -0,0 +1,16 @@
# ForefrontAI
This page covers how to use the ForefrontAI ecosystem within LangChain.
It is broken into two parts: installation and setup, and then references to specific ForefrontAI wrappers.
## Installation and Setup
- Get a ForefrontAI API key and set it as an environment variable (`FOREFRONTAI_API_KEY`)
## Wrappers
### LLM
There exists a ForefrontAI LLM wrapper, which you can access with
```python
from langchain.llms import ForefrontAI
```
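A minimal usage sketch; the endpoint URL is a placeholder for the model endpoint shown in your ForefrontAI dashboard:
```python
import os

from langchain.llms import ForefrontAI

os.environ["FOREFRONTAI_API_KEY"] = "YOUR_API_KEY"  # placeholder

# endpoint_url is a placeholder for your model's endpoint
llm = ForefrontAI(endpoint_url="https://YOUR-ENDPOINT.forefront.link")
print(llm("Tell me a joke"))
```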

View File

@ -0,0 +1,71 @@
# Google Serper Wrapper
This page covers how to use the [Serper](https://serper.dev) Google Search API within LangChain. Serper is a low-cost Google Search API that can be used to add answer box, knowledge graph, and organic results data from Google Search.
It is broken into two parts: setup, and then references to the specific Google Serper wrapper.
## Setup
- Go to [serper.dev](https://serper.dev) to sign up for a free account
- Get the api key and set it as an environment variable (`SERPER_API_KEY`)
## Wrappers
### Utility
There exists a GoogleSerperAPIWrapper utility which wraps this API. To import this utility:
```python
from langchain.utilities import GoogleSerperAPIWrapper
```
You can use it as part of a Self Ask chain:
```python
from langchain.utilities import GoogleSerperAPIWrapper
from langchain.llms.openai import OpenAI
from langchain.agents import initialize_agent, Tool
import os
os.environ["SERPER_API_KEY"] = ""
os.environ['OPENAI_API_KEY'] = ""
llm = OpenAI(temperature=0)
search = GoogleSerperAPIWrapper()
tools = [
    Tool(
        name="Intermediate Answer",
        func=search.run
    )
]
self_ask_with_search = initialize_agent(tools, llm, agent="self-ask-with-search", verbose=True)
self_ask_with_search.run("What is the hometown of the reigning men's U.S. Open champion?")
```
#### Output
```
Entering new AgentExecutor chain...
Yes.
Follow up: Who is the reigning men's U.S. Open champion?
Intermediate answer: Current champions Carlos Alcaraz, 2022 men's singles champion.
Follow up: Where is Carlos Alcaraz from?
Intermediate answer: El Palmar, Spain
So the final answer is: El Palmar, Spain
> Finished chain.
'El Palmar, Spain'
```
For a more detailed walkthrough of this wrapper, see [this notebook](../modules/utils/examples/google_serper.ipynb).
### Tool
You can also easily load this wrapper as a Tool (to use with an Agent).
You can do this with:
```python
from langchain.agents import load_tools
tools = load_tools(["google-serper"])
```
For more information on this, see [this page](../modules/agents/tools.md)

23
docs/ecosystem/gooseai.md Normal file
View File

@ -0,0 +1,23 @@
# GooseAI
This page covers how to use the GooseAI ecosystem within LangChain.
It is broken into two parts: installation and setup, and then references to specific GooseAI wrappers.
## Installation and Setup
- Install the Python SDK with `pip install openai`
- Get your GooseAI API key from [here](https://goose.ai/).
- Set the environment variable (`GOOSEAI_API_KEY`).
```python
import os
os.environ["GOOSEAI_API_KEY"] = "YOUR_API_KEY"
```
## Wrappers
### LLM
There exists a GooseAI LLM wrapper, which you can access with:
```python
from langchain.llms import GooseAI
```
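A minimal usage sketch; the wrapper picks up the `GOOSEAI_API_KEY` environment variable set above:
```python
from langchain.llms import GooseAI

# Uses the GOOSEAI_API_KEY environment variable for authentication
llm = GooseAI()
print(llm("Once upon a time"))
```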

View File

@ -0,0 +1,38 @@
# Graphsignal
This page covers how to use Graphsignal to trace and monitor LangChain.
## Installation and Setup
- Install the Python library with `pip install graphsignal`
- Create a free Graphsignal account [here](https://graphsignal.com)
- Get an API key and set it as an environment variable (`GRAPHSIGNAL_API_KEY`)
## Tracing and Monitoring
Graphsignal automatically instruments and starts tracing and monitoring chains. Traces, metrics and errors are then available in your [Graphsignal dashboard](https://app.graphsignal.com/). No prompts or other sensitive data are sent to Graphsignal cloud, only statistics and metadata.
Initialize the tracer by providing a deployment name:
```python
import graphsignal
graphsignal.configure(deployment='my-langchain-app-prod')
```
In order to trace full runs and see a breakdown by chains and tools, you can wrap the calling routine or use a decorator:
```python
with graphsignal.start_trace('my-chain'):
    chain.run("some initial text")
```
Optionally, enable profiling to record function-level statistics for each trace.
```python
with graphsignal.start_trace(
        'my-chain', options=graphsignal.TraceOptions(enable_profiling=True)):
    chain.run("some initial text")
```
See the [Quick Start](https://graphsignal.com/docs/guides/quick-start/) guide for complete setup instructions.

View File

@ -0,0 +1,53 @@
# Helicone
This page covers how to use [Helicone](https://helicone.ai) within LangChain.
## What is Helicone?
Helicone is an [open source](https://github.com/Helicone/helicone) observability platform that proxies your OpenAI traffic and provides you key insights into your spend, latency and usage.
![Helicone](../_static/HeliconeDashboard.png)
## Quick start
With your LangChain environment you can just add the following parameter.
```bash
export OPENAI_API_BASE="https://oai.hconeai.com/v1"
```
Now head over to [helicone.ai](https://helicone.ai/onboarding?step=2) to create your account, and add your OpenAI API key within our dashboard to view your logs.
![Helicone](../_static/HeliconeKeys.png)
## How to enable Helicone caching
```python
from langchain.llms import OpenAI
import openai
openai.api_base = "https://oai.hconeai.com/v1"
llm = OpenAI(temperature=0.9, headers={"Helicone-Cache-Enabled": "true"})
text = "What is a helicone?"
print(llm(text))
```
[Helicone caching docs](https://docs.helicone.ai/advanced-usage/caching)
## How to use Helicone custom properties
```python
from langchain.llms import OpenAI
import openai
openai.api_base = "https://oai.hconeai.com/v1"
llm = OpenAI(temperature=0.9, headers={
    "Helicone-Property-Session": "24",
    "Helicone-Property-Conversation": "support_issue_2",
    "Helicone-Property-App": "mobile",
})
text = "What is a helicone?"
print(llm(text))
```
[Helicone property docs](https://docs.helicone.ai/advanced-usage/custom-properties)

View File

@ -47,7 +47,7 @@ To use the wrapper for a model hosted on Hugging Face Hub:
```python
from langchain.embeddings import HuggingFaceHubEmbeddings
```
For a more detailed walkthrough of this, see [this notebook](../modules/utils/combine_docs_examples/embeddings.ipynb)
For a more detailed walkthrough of this, see [this notebook](../modules/indexes/examples/embeddings.ipynb)
### Tokenizer
@ -59,7 +59,7 @@ You can also use it to count tokens when splitting documents with
from langchain.text_splitter import CharacterTextSplitter
CharacterTextSplitter.from_huggingface_tokenizer(...)
```
For a more detailed walkthrough of this, see [this notebook](../modules/utils/combine_docs_examples/textsplitter.ipynb)
For a more detailed walkthrough of this, see [this notebook](../modules/indexes/examples/textsplitter.ipynb)
### Datasets

66
docs/ecosystem/modal.md Normal file
View File

@ -0,0 +1,66 @@
# Modal
This page covers how to use the Modal ecosystem within LangChain.
It is broken into two parts: installation and setup, and then references to specific Modal wrappers.
## Installation and Setup
- Install with `pip install modal-client`
- Run `modal token new`
## Define your Modal Functions and Webhooks
Your webhook's request model must include a `prompt` field. The response structure is rigid:
```python
class Item(BaseModel):
    prompt: str

@stub.webhook(method="POST")
def my_webhook(item: Item):
    return {"prompt": my_function.call(item.prompt)}
```
An example with GPT2:
```python
from pydantic import BaseModel

import modal

stub = modal.Stub("example-get-started")
volume = modal.SharedVolume().persist("gpt2_model_vol")
CACHE_PATH = "/root/model_cache"

@stub.function(
    gpu="any",
    image=modal.Image.debian_slim().pip_install(
        "tokenizers", "transformers", "torch", "accelerate"
    ),
    shared_volumes={CACHE_PATH: volume},
    retries=3,
)
def run_gpt2(text: str):
    from transformers import GPT2Tokenizer, GPT2LMHeadModel
    tokenizer = GPT2Tokenizer.from_pretrained('gpt2')
    model = GPT2LMHeadModel.from_pretrained('gpt2')
    encoded_input = tokenizer(text, return_tensors='pt').input_ids
    output = model.generate(encoded_input, max_length=50, do_sample=True)
    return tokenizer.decode(output[0], skip_special_tokens=True)

class Item(BaseModel):
    prompt: str

@stub.webhook(method="POST")
def get_text(item: Item):
    return {"prompt": run_gpt2.call(item.prompt)}
```
## Wrappers
### LLM
There exists a Modal LLM wrapper, which you can access with
```python
from langchain.llms import Modal
```
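A minimal usage sketch; the endpoint URL is a placeholder for the webhook URL that `modal deploy` prints for your app:
```python
from langchain.llms import Modal

# endpoint_url is a placeholder for your deployed webhook
llm = Modal(endpoint_url="https://YOUR-WORKSPACE--example-get-started-get-text.modal.run")
print(llm("What is the capital of France?"))
```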

View File

@ -31,7 +31,7 @@ There exists an OpenAI Embeddings wrapper, which you can access with
```python
from langchain.embeddings import OpenAIEmbeddings
```
For a more detailed walkthrough of this, see [this notebook](../modules/utils/combine_docs_examples/embeddings.ipynb)
For a more detailed walkthrough of this, see [this notebook](../modules/indexes/examples/embeddings.ipynb)
### Tokenizer
@ -44,7 +44,7 @@ You can also use it to count tokens when splitting documents with
from langchain.text_splitter import CharacterTextSplitter
CharacterTextSplitter.from_tiktoken_encoder(...)
```
For a more detailed walkthrough of this, see [this notebook](../modules/utils/combine_docs_examples/textsplitter.ipynb)
For a more detailed walkthrough of this, see [this notebook](../modules/indexes/examples/textsplitter.ipynb)
### Moderation
You can also access the OpenAI content moderation endpoint with

View File

@ -0,0 +1,21 @@
# OpenSearch
This page covers how to use the OpenSearch ecosystem within LangChain.
It is broken into two parts: installation and setup, and then references to specific OpenSearch wrappers.
## Installation and Setup
- Install the Python package with `pip install opensearch-py`
## Wrappers
### VectorStore
There exists a wrapper around OpenSearch vector databases, allowing you to use it as a vectorstore
for semantic search, using either approximate vector search powered by the Lucene, NMSLIB, and Faiss engines,
or brute-force vector search via painless scripting and script scoring functions.
To import this vectorstore:
```python
from langchain.vectorstores import OpenSearchVectorSearch
```
For a more detailed walkthrough of the OpenSearch wrapper, see [this notebook](../modules/indexes/vectorstore_examples/opensearch.ipynb)
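A minimal sketch of indexing texts and querying them; it assumes an OpenSearch instance running on localhost and an OpenAI API key in the environment:
```python
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores import OpenSearchVectorSearch

# Sample texts; the localhost URL assumes a locally running OpenSearch instance
texts = [
    "OpenSearch supports approximate k-NN vector search.",
    "LangChain provides a standard vectorstore interface.",
]
docsearch = OpenSearchVectorSearch.from_texts(
    texts, OpenAIEmbeddings(), opensearch_url="http://localhost:9200"
)
docs = docsearch.similarity_search("How does OpenSearch search vectors?")
print(docs[0].page_content)
```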

17
docs/ecosystem/petals.md Normal file
View File

@ -0,0 +1,17 @@
# Petals
This page covers how to use the Petals ecosystem within LangChain.
It is broken into two parts: installation and setup, and then references to specific Petals wrappers.
## Installation and Setup
- Install with `pip install petals`
- Get a Hugging Face API key and set it as an environment variable (`HUGGINGFACE_API_KEY`)
## Wrappers
### LLM
There exists a Petals LLM wrapper, which you can access with
```python
from langchain.llms import Petals
```
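A minimal usage sketch; the model name is illustrative and should be a model served by the public Petals swarm:
```python
import os

from langchain.llms import Petals

os.environ["HUGGINGFACE_API_KEY"] = "YOUR_API_KEY"  # placeholder

# model_name is illustrative; it should be served by the Petals swarm
llm = Petals(model_name="bigscience/bloom-petals")
print(llm("Once upon a time"))
```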

View File

@ -17,4 +17,4 @@ To import this vectorstore:
from langchain.vectorstores import Pinecone
```
For a more detailed walkthrough of the Pinecone wrapper, see [this notebook](../modules/utils/combine_docs_examples/vectorstores.ipynb)
For a more detailed walkthrough of the Pinecone wrapper, see [this notebook](../modules/indexes/examples/vectorstores.ipynb)

View File

@ -0,0 +1,31 @@
# PromptLayer
This page covers how to use [PromptLayer](https://www.promptlayer.com) within LangChain.
It is broken into two parts: installation and setup, and then references to specific PromptLayer wrappers.
## Installation and Setup
If you want to work with PromptLayer:
- Install the promptlayer python library `pip install promptlayer`
- Create a PromptLayer account
- Create an api token and set it as an environment variable (`PROMPTLAYER_API_KEY`)
## Wrappers
### LLM
There exists a PromptLayer OpenAI LLM wrapper, which you can access with
```python
from langchain.llms import PromptLayerOpenAI
```
To tag your requests, use the argument `pl_tags` when instantiating the LLM
```python
from langchain.llms import PromptLayerOpenAI
llm = PromptLayerOpenAI(pl_tags=["langchain-requests", "chatbot"])
```
This LLM is identical to the [OpenAI LLM](./openai), except that
- all your requests will be logged to your PromptLayer account
- you can add `pl_tags` when instantiating to tag your requests on PromptLayer

View File

@ -0,0 +1,31 @@
# Runhouse
This page covers how to use the [Runhouse](https://github.com/run-house/runhouse) ecosystem within LangChain.
It is broken into three parts: installation and setup, LLMs, and Embeddings.
## Installation and Setup
- Install the Python SDK with `pip install runhouse`
- If you'd like to use an on-demand cluster, check your cloud credentials with `sky check`
## Self-hosted LLMs
For a basic self-hosted LLM, you can use the `SelfHostedHuggingFaceLLM` class. For more
custom LLMs, you can use the `SelfHostedPipeline` parent class.
```python
from langchain.llms import SelfHostedPipeline, SelfHostedHuggingFaceLLM
```
For a more detailed walkthrough of the Self-hosted LLMs, see [this notebook](../modules/llms/integrations/self_hosted_examples.ipynb)
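A minimal sketch, assuming cloud credentials are configured for an on-demand GPU cluster; the cluster name, instance type, model, and requirements list are placeholders:
```python
import runhouse as rh

from langchain.llms import SelfHostedHuggingFaceLLM

# On-demand GPU cluster; name and instance_type are placeholders
gpu = rh.cluster(name="rh-a10x", instance_type="A100:1")

# model_id and model_reqs (packages installed on the cluster) are placeholders
llm = SelfHostedHuggingFaceLLM(model_id="gpt2", hardware=gpu, model_reqs=["transformers", "torch"])
print(llm("What is the capital of France?"))
```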
## Self-hosted Embeddings
There are several ways to use self-hosted embeddings with LangChain via Runhouse.
For a basic self-hosted embedding from a Hugging Face Transformers model, you can use
the `SelfHostedHuggingFaceEmbeddings` class.
```python
from langchain.embeddings import SelfHostedHuggingFaceEmbeddings
```
For a more detailed walkthrough of the Self-hosted Embeddings, see [this notebook](../modules/indexes/examples/embeddings.ipynb)

35
docs/ecosystem/searx.md Normal file
View File

@ -0,0 +1,35 @@
# SearxNG Search API
This page covers how to use the SearxNG search API within LangChain.
It is broken into two parts: installation and setup, and then references to the specific SearxNG API wrapper.
## Installation and Setup
- You can find a list of public SearxNG instances [here](https://searx.space/).
- It is recommended to use a self-hosted instance to avoid abuse of the public instances. Also note that public instances often limit the number of requests.
- To run a self-hosted instance see [this page](https://searxng.github.io/searxng/admin/installation.html) for more information.
- To use the tool you need to provide the searx host URL by:
1. passing the named parameter `searx_host` when creating the instance.
2. exporting the environment variable `SEARXNG_HOST`.
## Wrappers
### Utility
You can use the wrapper to get results from a SearxNG instance.
```python
from langchain.utilities import SearxSearchWrapper
```
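A minimal usage sketch; the host URL is a placeholder for your own instance:
```python
from langchain.utilities import SearxSearchWrapper

# searx_host is a placeholder; point it at your own SearxNG instance
search = SearxSearchWrapper(searx_host="https://searx.example.com")
print(search.run("What is the capital of France?"))
```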
### Tool
You can also easily load this wrapper as a Tool (to use with an Agent).
You can do this with:
```python
from langchain.agents import load_tools
tools = load_tools(["searx-search"], searx_host="https://searx.example.com")
```
For more information on this, see [this page](../modules/agents/tools.md)

View File

@ -0,0 +1,17 @@
# StochasticAI
This page covers how to use the StochasticAI ecosystem within LangChain.
It is broken into two parts: installation and setup, and then references to specific StochasticAI wrappers.
## Installation and Setup
- Install with `pip install stochasticx`
- Get a StochasticAI API key and set it as an environment variable (`STOCHASTICAI_API_KEY`)
## Wrappers
### LLM
There exists a StochasticAI LLM wrapper, which you can access with
```python
from langchain.llms import StochasticAI
```
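A minimal usage sketch; the API URL is a placeholder for the endpoint of a model you have deployed on StochasticAI:
```python
import os

from langchain.llms import StochasticAI

os.environ["STOCHASTICAI_API_KEY"] = "YOUR_API_KEY"  # placeholder

# api_url is a placeholder for your deployed model's endpoint
llm = StochasticAI(api_url="https://api.stochastic.ai/v1/modelApi/submit/YOUR-MODEL")
print(llm("Tell me a joke"))
```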

View File

@ -0,0 +1,41 @@
# Unstructured
This page covers how to use the [`unstructured`](https://github.com/Unstructured-IO/unstructured)
ecosystem within LangChain. The `unstructured` package from
[Unstructured.IO](https://www.unstructured.io/) extracts clean text from raw source documents like
PDFs and Word documents.
This page is broken into two parts: installation and setup, and then references to specific
`unstructured` wrappers.
## Installation and Setup
- Install the Python SDK with `pip install "unstructured[local-inference]"`
- Install the following system dependencies if they are not already available on your system.
Depending on what document types you're parsing, you may not need all of these.
- `libmagic-dev`
- `poppler-utils`
- `tesseract-ocr`
- `libreoffice`
- If you are parsing PDFs, run the following to install the `detectron2` model, which
`unstructured` uses for layout detection:
- `pip install "detectron2@git+https://github.com/facebookresearch/detectron2.git@v0.6#egg=detectron2"`
## Wrappers
### Data Loaders
The primary `unstructured` wrappers within `langchain` are data loaders. The following
shows how to use the most basic unstructured data loader. There are other file-specific
data loaders available in the `langchain.document_loaders` module.
```python
from langchain.document_loaders import UnstructuredFileLoader
loader = UnstructuredFileLoader("state_of_the_union.txt")
loader.load()
```
If you instantiate the loader with `UnstructuredFileLoader(mode="elements")`, the loader
will track additional metadata like the page number and text type (e.g. title, narrative text)
when that information is available.
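For example, a minimal sketch of elements mode, reusing the file from above:
```python
from langchain.document_loaders import UnstructuredFileLoader

# mode="elements" yields one Document per detected element, with metadata
loader = UnstructuredFileLoader("state_of_the_union.txt", mode="elements")
docs = loader.load()
print(docs[0].metadata)  # element-level metadata, when available
```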

View File

@ -30,4 +30,4 @@ To import this vectorstore:
from langchain.vectorstores import Weaviate
```
For a more detailed walkthrough of the Weaviate wrapper, see [this notebook](../modules/utils/combine_docs_examples/vectorstores.ipynb)
For a more detailed walkthrough of the Weaviate wrapper, see [this notebook](../modules/indexes/examples/vectorstores.ipynb)

16
docs/ecosystem/writer.md Normal file
View File

@ -0,0 +1,16 @@
# Writer
This page covers how to use the Writer ecosystem within LangChain.
It is broken into two parts: installation and setup, and then references to specific Writer wrappers.
## Installation and Setup
- Get a Writer API key and set it as an environment variable (`WRITER_API_KEY`)
## Wrappers
### LLM
There exists a Writer LLM wrapper, which you can access with
```python
from langchain.llms import Writer
```
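A minimal usage sketch, assuming the API key from the environment is all the wrapper needs:
```python
import os

from langchain.llms import Writer

os.environ["WRITER_API_KEY"] = "YOUR_API_KEY"  # placeholder

# The wrapper reads WRITER_API_KEY from the environment
llm = Writer()
print(llm("Write a tagline for an ice cream shop."))
```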

View File

@ -37,6 +37,17 @@ Open Source
---
.. link-button:: https://github.com/normandmickey/MrsStax
:type: url
:text: QA Slack Bot
:classes: stretched-link btn-lg
+++
This application is a Slack Bot that uses LangChain and OpenAI's GPT3 language model to provide domain-specific answers. You provide the documents.
---
.. link-button:: https://github.com/OpenBioLink/ThoughtSource
:type: url
:text: ThoughtSource

View File

@ -42,7 +42,7 @@ Check out the below guide for a walkthrough of how to get started using LangChain
Modules
-----------
There are six main modules that LangChain provides support for.
There are several main modules that LangChain provides support for.
For each module we provide some examples to get started, how-to guides, reference docs, and conceptual guides.
These modules are, in increasing order of complexity:
@ -57,6 +57,8 @@ These modules are, in increasing order of complexity:
- `Chains <./modules/chains.html>`_: Chains go beyond just a single LLM call, and are sequences of calls (whether to an LLM or a different utility). LangChain provides a standard interface for chains, lots of integrations with other tools, and end-to-end chains for common applications.
- `Indexes <./modules/indexes.html>`_: Language models are often more powerful when combined with your own text data - this module covers best practices for doing exactly that.
- `Agents <./modules/agents.html>`_: Agents involve an LLM making decisions about which Actions to take, taking that Action, seeing an Observation, and repeating that until done. LangChain provides a standard interface for agents, a selection of agents to choose from, and examples of end to end agents.
- `Memory <./modules/memory.html>`_: Memory is the concept of persisting state between calls of a chain/agent. LangChain provides a standard interface for memory, a collection of memory implementations, and examples of chains/agents that use memory.
@ -72,6 +74,7 @@ These modules are, in increasing order of complexity:
./modules/llms.md
./modules/document_loaders.md
./modules/utils.md
./modules/indexes.md
./modules/chains.md
./modules/agents.md
./modules/memory.md

View File

@ -2,7 +2,7 @@ Agents
==========================
Some applications will require not just a predetermined chain of calls to LLMs/other tools,
but potentially an unknown chain that depends on the user input.
but potentially an unknown chain that depends on the user's input.
In these types of chains, there is an “agent” which has access to a suite of tools.
Depending on the user input, the agent can then decide which, if any, of these tools to call.
@ -12,7 +12,7 @@ The following sections of documentation are provided:
- `Key Concepts <./agents/key_concepts.html>`_: A conceptual guide going over the various concepts related to agents.
- `How-To Guides <./agents/how_to_guides.html>`_: A collection of how-to guides. These highlight how to integrate various types of tools, how to work with different types of agent, and how to customize agents.
- `How-To Guides <./agents/how_to_guides.html>`_: A collection of how-to guides. These highlight how to integrate various types of tools, how to work with different types of agents, and how to customize agents.
- `Reference <../reference/modules/agents.html>`_: API reference documentation for all Agent classes.

View File

@ -1,7 +1,7 @@
# Agents
Agents use an LLM to determine which actions to take and in what order.
An action can either be using a tool and observing its output, or returning to the user.
An action can either be using a tool and observing its output, or returning a response to the user.
For a list of easily loadable tools, see [here](tools.md).
Here are the agents available in LangChain.

View File

@ -0,0 +1,494 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "68b24990",
"metadata": {},
"source": [
"# Agents and Vectorstores\n",
"\n",
"This notebook covers how to combine agents and vectorstores. The use case for this is that you've ingested your data into a vectorstore and want to interact with it in an agentic manner.\n",
"\n",
"The reccomended method for doing so is to create a VectorDBQAChain and then use that as a tool in the overall agent. Let's take a look at doing this below. You can do this with multiple different vectordbs, and use the agent as a way to route between them. There are two different ways of doing this - you can either let the agent use the vectorstores as normal tools, or you can set `return_direct=True` to really just use the agent as a router."
]
},
{
"cell_type": "markdown",
"id": "9b22020a",
"metadata": {},
"source": [
"## Create the Vectorstore"
]
},
{
"cell_type": "code",
"execution_count": 20,
"id": "2e87c10a",
"metadata": {},
"outputs": [],
"source": [
"from langchain.embeddings.openai import OpenAIEmbeddings\n",
"from langchain.vectorstores import Chroma\n",
"from langchain.text_splitter import CharacterTextSplitter\n",
"from langchain import OpenAI, VectorDBQA\n",
"llm = OpenAI(temperature=0)"
]
},
{
"cell_type": "code",
"execution_count": 37,
"id": "f2675861",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Running Chroma using direct local API.\n",
"Using DuckDB in-memory for database. Data will be transient.\n"
]
}
],
"source": [
"from langchain.document_loaders import TextLoader\n",
"loader = TextLoader('../../state_of_the_union.txt')\n",
"documents = loader.load()\n",
"text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)\n",
"texts = text_splitter.split_documents(documents)\n",
"\n",
"embeddings = OpenAIEmbeddings()\n",
"docsearch = Chroma.from_documents(texts, embeddings, collection_name=\"state-of-union\")"
]
},
{
"cell_type": "code",
"execution_count": 38,
"id": "bc5403d4",
"metadata": {},
"outputs": [],
"source": [
"state_of_union = VectorDBQA.from_chain_type(llm=llm, chain_type=\"stuff\", vectorstore=docsearch)"
]
},
{
"cell_type": "code",
"execution_count": 39,
"id": "1431cded",
"metadata": {},
"outputs": [],
"source": [
"from langchain.document_loaders import WebBaseLoader"
]
},
{
"cell_type": "code",
"execution_count": 40,
"id": "915d3ff3",
"metadata": {},
"outputs": [],
"source": [
"loader = WebBaseLoader(\"https://beta.ruff.rs/docs/faq/\")"
]
},
{
"cell_type": "code",
"execution_count": 41,
"id": "96a2edf8",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Running Chroma using direct local API.\n",
"Using DuckDB in-memory for database. Data will be transient.\n"
]
}
],
"source": [
"docs = loader.load()\n",
"ruff_texts = text_splitter.split_documents(docs)\n",
"ruff_db = Chroma.from_documents(ruff_texts, embeddings, collection_name=\"ruff\")\n",
"ruff = VectorDBQA.from_chain_type(llm=llm, chain_type=\"stuff\", vectorstore=ruff_db)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "71ecef90",
"metadata": {},
"outputs": [],
"source": []
},
{
"cell_type": "markdown",
"id": "c0a6c031",
"metadata": {},
"source": [
"## Create the Agent"
]
},
{
"cell_type": "code",
"execution_count": 43,
"id": "eb142786",
"metadata": {},
"outputs": [],
"source": [
"# Import things that are needed generically\n",
"from langchain.agents import initialize_agent, Tool\n",
"from langchain.tools import BaseTool\n",
"from langchain.llms import OpenAI\n",
"from langchain import LLMMathChain, SerpAPIWrapper"
]
},
{
"cell_type": "code",
"execution_count": 44,
"id": "850bc4e9",
"metadata": {},
"outputs": [],
"source": [
"tools = [\n",
" Tool(\n",
" name = \"State of Union QA System\",\n",
" func=state_of_union.run,\n",
" description=\"useful for when you need to answer questions about the most recent state of the union address. Input should be a fully formed question.\"\n",
" ),\n",
" Tool(\n",
" name = \"Ruff QA System\",\n",
" func=ruff.run,\n",
" description=\"useful for when you need to answer questions about ruff (a python linter). Input should be a fully formed question.\"\n",
" ),\n",
"]"
]
},
{
"cell_type": "code",
"execution_count": 45,
"id": "fc47f230",
"metadata": {},
"outputs": [],
"source": [
"# Construct the agent. We will use the default agent type here.\n",
"# See documentation for a full list of options.\n",
"agent = initialize_agent(tools, llm, agent=\"zero-shot-react-description\", verbose=True)"
]
},
{
"cell_type": "code",
"execution_count": 46,
"id": "10ca2db8",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"\u001b[1m> Entering new AgentExecutor chain...\u001b[0m\n",
"\u001b[32;1m\u001b[1;3m I need to find out what Biden said about Ketanji Brown Jackson in the State of the Union address.\n",
"Action: State of Union QA System\n",
"Action Input: What did Biden say about Ketanji Brown Jackson in the State of the Union address?\u001b[0m\n",
"Observation: \u001b[36;1m\u001b[1;3m Biden said that Jackson is one of the nation's top legal minds and that she will continue Justice Breyer's legacy of excellence.\u001b[0m\n",
"Thought:\u001b[32;1m\u001b[1;3m I now know the final answer\n",
"Final Answer: Biden said that Jackson is one of the nation's top legal minds and that she will continue Justice Breyer's legacy of excellence.\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
]
},
{
"data": {
"text/plain": [
"\"Biden said that Jackson is one of the nation's top legal minds and that she will continue Justice Breyer's legacy of excellence.\""
]
},
"execution_count": 46,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"agent.run(\"What did biden say about ketanji brown jackson is the state of the union address?\")"
]
},
{
"cell_type": "code",
"execution_count": 47,
"id": "4e91b811",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"\u001b[1m> Entering new AgentExecutor chain...\u001b[0m\n",
"\u001b[32;1m\u001b[1;3m I need to find out the advantages of using ruff over flake8\n",
"Action: Ruff QA System\n",
"Action Input: What are the advantages of using ruff over flake8?\u001b[0m\n",
"Observation: \u001b[33;1m\u001b[1;3m Ruff can be used as a drop-in replacement for Flake8 when used (1) without or with a small number of plugins, (2) alongside Black, and (3) on Python 3 code. It also re-implements some of the most popular Flake8 plugins and related code quality tools natively, including isort, yesqa, eradicate, and most of the rules implemented in pyupgrade. Ruff also supports automatically fixing its own lint violations, which Flake8 does not.\u001b[0m\n",
"Thought:\u001b[32;1m\u001b[1;3m I now know the final answer\n",
"Final Answer: Ruff can be used as a drop-in replacement for Flake8 when used (1) without or with a small number of plugins, (2) alongside Black, and (3) on Python 3 code. It also re-implements some of the most popular Flake8 plugins and related code quality tools natively, including isort, yesqa, eradicate, and most of the rules implemented in pyupgrade. Ruff also supports automatically fixing its own lint violations, which Flake8 does not.\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
]
},
{
"data": {
"text/plain": [
"'Ruff can be used as a drop-in replacement for Flake8 when used (1) without or with a small number of plugins, (2) alongside Black, and (3) on Python 3 code. It also re-implements some of the most popular Flake8 plugins and related code quality tools natively, including isort, yesqa, eradicate, and most of the rules implemented in pyupgrade. Ruff also supports automatically fixing its own lint violations, which Flake8 does not.'"
]
},
"execution_count": 47,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"agent.run(\"Why use ruff over flake8?\")"
]
},
{
"cell_type": "markdown",
"id": "787a9b5e",
"metadata": {},
"source": [
"## Use the Agent solely as a router"
]
},
{
"cell_type": "markdown",
"id": "9161ba91",
"metadata": {},
"source": [
"You can also set `return_direct=True` if you intend to use the agent as a router and just want to directly return the result of the VectorDBQaChain.\n",
"\n",
"Notice that in the above examples the agent did some extra work after querying the VectorDBQAChain. You can avoid that and just return the result directly."
]
},
{
"cell_type": "code",
"execution_count": 48,
"id": "f59b377e",
"metadata": {},
"outputs": [],
"source": [
"tools = [\n",
" Tool(\n",
" name = \"State of Union QA System\",\n",
" func=state_of_union.run,\n",
" description=\"useful for when you need to answer questions about the most recent state of the union address. Input should be a fully formed question.\",\n",
" return_direct=True\n",
" ),\n",
" Tool(\n",
" name = \"Ruff QA System\",\n",
" func=ruff.run,\n",
" description=\"useful for when you need to answer questions about ruff (a python linter). Input should be a fully formed question.\",\n",
" return_direct=True\n",
" ),\n",
"]"
]
},
{
"cell_type": "code",
"execution_count": 49,
"id": "8615707a",
"metadata": {},
"outputs": [],
"source": [
"agent = initialize_agent(tools, llm, agent=\"zero-shot-react-description\", verbose=True)"
]
},
{
"cell_type": "code",
"execution_count": 50,
"id": "36e718a9",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"\u001b[1m> Entering new AgentExecutor chain...\u001b[0m\n",
"\u001b[32;1m\u001b[1;3m I need to find out what Biden said about Ketanji Brown Jackson in the State of the Union address.\n",
"Action: State of Union QA System\n",
"Action Input: What did Biden say about Ketanji Brown Jackson in the State of the Union address?\u001b[0m\n",
"Observation: \u001b[36;1m\u001b[1;3m Biden said that Jackson is one of the nation's top legal minds and that she will continue Justice Breyer's legacy of excellence.\u001b[0m\n",
"\u001b[32;1m\u001b[1;3m\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
]
},
{
"data": {
"text/plain": [
"\" Biden said that Jackson is one of the nation's top legal minds and that she will continue Justice Breyer's legacy of excellence.\""
]
},
"execution_count": 50,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"agent.run(\"What did biden say about ketanji brown jackson in the state of the union address?\")"
]
},
{
"cell_type": "code",
"execution_count": 51,
"id": "edfd0a1a",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"\u001b[1m> Entering new AgentExecutor chain...\u001b[0m\n",
"\u001b[32;1m\u001b[1;3m I need to find out the advantages of using ruff over flake8\n",
"Action: Ruff QA System\n",
"Action Input: What are the advantages of using ruff over flake8?\u001b[0m\n",
"Observation: \u001b[33;1m\u001b[1;3m Ruff can be used as a drop-in replacement for Flake8 when used (1) without or with a small number of plugins, (2) alongside Black, and (3) on Python 3 code. It also re-implements some of the most popular Flake8 plugins and related code quality tools natively, including isort, yesqa, eradicate, and most of the rules implemented in pyupgrade. Ruff also supports automatically fixing its own lint violations, which Flake8 does not.\u001b[0m\n",
"\u001b[32;1m\u001b[1;3m\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
]
},
{
"data": {
"text/plain": [
"' Ruff can be used as a drop-in replacement for Flake8 when used (1) without or with a small number of plugins, (2) alongside Black, and (3) on Python 3 code. It also re-implements some of the most popular Flake8 plugins and related code quality tools natively, including isort, yesqa, eradicate, and most of the rules implemented in pyupgrade. Ruff also supports automatically fixing its own lint violations, which Flake8 does not.'"
]
},
"execution_count": 51,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"agent.run(\"Why use ruff over flake8?\")"
]
},
{
"cell_type": "markdown",
"id": "49a0cbbe",
"metadata": {},
"source": [
"## Multi-Hop vectorstore reasoning\n",
"\n",
"Because vectorstores are easily usable as tools in agents, it is easy to use answer multi-hop questions that depend on vectorstores using the existing agent framework"
]
},
{
"cell_type": "code",
"execution_count": 57,
"id": "d397a233",
"metadata": {},
"outputs": [],
"source": [
"tools = [\n",
" Tool(\n",
" name = \"State of Union QA System\",\n",
" func=state_of_union.run,\n",
" description=\"useful for when you need to answer questions about the most recent state of the union address. Input should be a fully formed question, not referencing any obscure pronouns from the conversation before.\"\n",
" ),\n",
" Tool(\n",
" name = \"Ruff QA System\",\n",
" func=ruff.run,\n",
" description=\"useful for when you need to answer questions about ruff (a python linter). Input should be a fully formed question, not referencing any obscure pronouns from the conversation before.\"\n",
" ),\n",
"]"
]
},
{
"cell_type": "code",
"execution_count": 58,
"id": "06157240",
"metadata": {},
"outputs": [],
"source": [
"# Construct the agent. We will use the default agent type here.\n",
"# See documentation for a full list of options.\n",
"agent = initialize_agent(tools, llm, agent=\"zero-shot-react-description\", verbose=True)"
]
},
{
"cell_type": "code",
"execution_count": 59,
"id": "b492b520",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"\u001b[1m> Entering new AgentExecutor chain...\u001b[0m\n",
"\u001b[32;1m\u001b[1;3m I need to find out what tool ruff uses to run over Jupyter Notebooks, and if the president mentioned it in the state of the union.\n",
"Action: Ruff QA System\n",
"Action Input: What tool does ruff use to run over Jupyter Notebooks?\u001b[0m\n",
"Observation: \u001b[33;1m\u001b[1;3m Ruff is integrated into nbQA, a tool for running linters and code formatters over Jupyter Notebooks. After installing ruff and nbqa, you can run Ruff over a notebook like so: > nbqa ruff Untitled.ipynb\u001b[0m\n",
"Thought:\u001b[32;1m\u001b[1;3m I now need to find out if the president mentioned this tool in the state of the union.\n",
"Action: State of Union QA System\n",
"Action Input: Did the president mention nbQA in the state of the union?\u001b[0m\n",
"Observation: \u001b[36;1m\u001b[1;3m No, the president did not mention nbQA in the state of the union.\u001b[0m\n",
"Thought:\u001b[32;1m\u001b[1;3m I now know the final answer.\n",
"Final Answer: No, the president did not mention nbQA in the state of the union.\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
]
},
{
"data": {
"text/plain": [
"'No, the president did not mention nbQA in the state of the union.'"
]
},
"execution_count": 59,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"agent.run(\"What tool does ruff use to run over Jupyter Notebooks? Did the president mention that tool in the state of the union?\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "b3b857d6",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.1"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

View File

@ -96,12 +96,8 @@
"Thought:\u001b[32;1m\u001b[1;3m I need to find out Jason Sudeikis' age\n",
"Action: Search\n",
"Action Input: \"Jason Sudeikis age\"\u001b[0m\n",
"Observation: \u001b[33;1m\u001b[1;3mDaniel Jason Sudeikis is an American actor, comedian, writer, and producer. In the 1990s, he began his career in improv comedy and performed with ComedySportz, iO Chicago, and The Second City.\u001b[0m\n",
"Thought:\u001b[32;1m\u001b[1;3m I need to find out Jason Sudeikis' exact age\n",
"Action: Search\n",
"Action Input: \"Jason Sudeikis age exact\"\u001b[0m\n",
"Observation: \u001b[33;1m\u001b[1;3mDaniel Jason Sudeikis. (1975-09-18) September 18, 1975 (age 47). Fairfax, Virginia, U.S. · Fort Scott Community College · Actor; comedian; producer; writer · 1997 ...\u001b[0m\n",
"Thought:\u001b[32;1m\u001b[1;3m I now have the information I need to calculate the age raised to the 0.23 power\n",
"Observation: \u001b[33;1m\u001b[1;3m47 years\u001b[0m\n",
"Thought:\u001b[32;1m\u001b[1;3m I need to calculate 47 raised to the 0.23 power\n",
"Action: Calculator\n",
"Action Input: 47^0.23\u001b[0m\n",
"Observation: \u001b[36;1m\u001b[1;3mAnswer: 2.4242784855673896\n",
@ -116,18 +112,17 @@
"\u001b[32;1m\u001b[1;3m I need to find out who won the grand prix and then calculate their age raised to the 0.23 power.\n",
"Action: Search\n",
"Action Input: \"Formula 1 Grand Prix Winner\"\u001b[0m\n",
"Observation: \u001b[33;1m\u001b[1;3mMax Emilian Verstappen is a Belgian-Dutch racing driver and the 2021 and 2022 Formula One World Champion. He competes under the Dutch flag in Formula One with Red Bull Racing. Verstappen is the son of racing drivers Jos Verstappen, who also competed in Formula One, and Sophie Kumpen.\u001b[0m\n",
"Thought:\u001b[32;1m\u001b[1;3m I need to find out Max Emilian Verstappen's age.\n",
"Observation: \u001b[33;1m\u001b[1;3mMax Verstappen\u001b[0m\n",
"Thought:\u001b[32;1m\u001b[1;3m I need to find out Max Verstappen's age\n",
"Action: Search\n",
"Action Input: \"Max Emilian Verstappen age\"\u001b[0m\n",
"Action Input: \"Max Verstappen Age\"\u001b[0m\n",
"Observation: \u001b[33;1m\u001b[1;3m25 years\u001b[0m\n",
"Thought:\u001b[32;1m\u001b[1;3m I now need to calculate 25 raised to the 0.23 power.\n",
"Thought:\u001b[32;1m\u001b[1;3m I need to calculate 25 raised to the 0.23 power\n",
"Action: Calculator\n",
"Action Input: 25^0.23\u001b[0m\n",
"Observation: \u001b[36;1m\u001b[1;3mAnswer: 2.096651272316035\n",
"\u001b[0m\n",
"Thought:\u001b[32;1m\u001b[1;3m I now know the final answer.\n",
"Final Answer: Max Emilian Verstappen, who is 25 years old, won the most recent Formula 1 Grand Prix and his age raised to the 0.23 power is 2.096651272316035.\u001b[0m\n",
"Observation: \u001b[36;1m\u001b[1;3mAnswer: 1.84599359907945\u001b[0m\n",
"Thought:\u001b[32;1m\u001b[1;3m I now know the final answer\n",
"Final Answer: Max Verstappen, 25 years old, raised to the 0.23 power is 1.84599359907945.\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n",
"\n",
@ -140,14 +135,14 @@
"Thought:\u001b[32;1m\u001b[1;3m I need to find out Bianca Andreescu's age.\n",
"Action: Search\n",
"Action Input: \"Bianca Andreescu age\"\u001b[0m\n",
"Observation: \u001b[33;1m\u001b[1;3mBianca Vanessa Andreescu is a Canadian-Romanian professional tennis player. She has a career-high ranking of No. 4 in the world, and is the highest-ranked Canadian in the history of the Women's Tennis Association.\u001b[0m\n",
"Thought:\u001b[32;1m\u001b[1;3m I now know the age of Bianca Andreescu.\n",
"Observation: \u001b[33;1m\u001b[1;3m22 years\u001b[0m\n",
"Thought:\u001b[32;1m\u001b[1;3m I now know the age of Bianca Andreescu and can calculate her age raised to the 0.34 power.\n",
"Action: Calculator\n",
"Action Input: 19^0.34\u001b[0m\n",
"Observation: \u001b[36;1m\u001b[1;3mAnswer: 2.7212987634680084\n",
"Action Input: 22^0.34\u001b[0m\n",
"Observation: \u001b[36;1m\u001b[1;3mAnswer: 2.8603798598506933\n",
"\u001b[0m\n",
"Thought:\u001b[32;1m\u001b[1;3m I now know the final answer.\n",
"Final Answer: Bianca Andreescu, aged 19, won the US Open women's final in 2019. Her age raised to the 0.34 power is 2.7212987634680084.\u001b[0m\n",
"Final Answer: Bianca Andreescu won the US Open women's final in 2019 and her age raised to the 0.34 power is 2.8603798598506933.\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n",
"\n",
@ -170,7 +165,7 @@
"Final Answer: Jay-Z is Beyonce's husband and his age raised to the 0.19 power is 2.12624064206896.\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n",
"Serial executed in 94.83 seconds.\n"
"Serial executed in 65.11 seconds.\n"
]
}
],
@ -217,96 +212,91 @@
"\n",
"\n",
"\u001b[1m> Entering new AgentExecutor chain...\u001b[0m\n",
"\u001b[33;1m\u001b[1;3m I need to find out who Beyonce's husband is and then calculate his age raised to the 0.19 power.\n",
"\u001b[32;1m\u001b[1;3m I need to find out who Olivia Wilde's boyfriend is and then calculate his age raised to the 0.23 power.\n",
"Action: Search\n",
"Action Input: \"Who is Beyonce's husband?\"\u001b[0m\u001b[31;1m\u001b[1;3m I need to find out who won the grand prix and then calculate their age raised to the 0.23 power.\n",
"Action Input: \"Olivia Wilde boyfriend\"\u001b[0m\u001b[32;1m\u001b[1;3m I need to find out who Beyonce's husband is and then calculate his age raised to the 0.19 power.\n",
"Action: Search\n",
"Action Input: \"Formula 1 Grand Prix Winner\"\u001b[0m\u001b[32;1m\u001b[1;3m I need to find out who Olivia Wilde's boyfriend is and then calculate his age raised to the 0.23 power.\n",
"Action Input: \"Who is Beyonce's husband?\"\u001b[0m\n",
"Observation: \u001b[33;1m\u001b[1;3mJay-Z\u001b[0m\n",
"Thought:\u001b[32;1m\u001b[1;3m I need to find out who won the grand prix and then calculate their age raised to the 0.23 power.\n",
"Action: Search\n",
"Action Input: \"Olivia Wilde boyfriend\"\u001b[0m\u001b[38;5;200m\u001b[1;3m I need to find out who won the US Open women's final in 2019 and then calculate her age raised to the 0.34 power.\n",
"Action Input: \"Formula 1 Grand Prix Winner\"\u001b[0m\u001b[32;1m\u001b[1;3m I need to find out who won the US Open women's final in 2019 and then calculate her age raised to the 0.34 power.\n",
"Action: Search\n",
"Action Input: \"US Open women's final 2019 winner\"\u001b[0m\n",
"Observation: \u001b[33;1m\u001b[1;3mJay-Z\u001b[0m\n",
"Thought:\n",
"Observation: \u001b[33;1m\u001b[1;3mMax Emilian Verstappen is a Belgian-Dutch racing driver and the 2021 and 2022 Formula One World Champion. He competes under the Dutch flag in Formula One with Red Bull Racing. Verstappen is the son of racing drivers Jos Verstappen, who also competed in Formula One, and Sophie Kumpen.\u001b[0m\n",
"Thought:\n",
"Observation: \u001b[33;1m\u001b[1;3mJason Sudeikis\u001b[0m\n",
"Thought:\n",
"Observation: \u001b[33;1m\u001b[1;3mMax Verstappen\u001b[0m\n",
"Thought:\n",
"Observation: \u001b[33;1m\u001b[1;3mBianca Andreescu defeated Serena Williams in the final, 63, 75 to win the women's singles tennis title at the 2019 US Open. It was her first major title, and she became the first Canadian, as well as the first player born in the 2000s, to win a major singles title.\u001b[0m\n",
"Thought:\u001b[31;1m\u001b[1;3m I need to find out Max Emilian Verstappen's age.\n",
"Action: Search\n",
"Action Input: \"Max Emilian Verstappen age\"\u001b[0m\n",
"Observation: \u001b[33;1m\u001b[1;3m25 years\u001b[0m\n",
"Thought:\u001b[38;5;200m\u001b[1;3m I need to find out Bianca Andreescu's age.\n",
"Action: Search\n",
"Action Input: \"Bianca Andreescu age\"\u001b[0m\n",
"Observation: \u001b[33;1m\u001b[1;3mBianca Vanessa Andreescu is a Canadian-Romanian professional tennis player. She has a career-high ranking of No. 4 in the world, and is the highest-ranked Canadian in the history of the Women's Tennis Association.\u001b[0m\n",
"Thought:\u001b[36;1m\u001b[1;3m I need to find out who won the US Open men's final in 2019 and then calculate his age raised to the 0.334 power.\n",
"Action: Search\n",
"Action Input: \"US Open men's final 2019 winner\"\u001b[0m\n",
"Observation: \u001b[33;1m\u001b[1;3mRafael Nadal\u001b[0m\n",
"Thought:\u001b[32;1m\u001b[1;3m I need to find out Jason Sudeikis' age\n",
"Action: Search\n",
"Action Input: \"Jason Sudeikis age\"\u001b[0m\n",
"Observation: \u001b[33;1m\u001b[1;3mDaniel Jason Sudeikis is an American actor, comedian, writer, and producer. In the 1990s, he began his career in improv comedy and performed with ComedySportz, iO Chicago, and The Second City.\u001b[0m\n",
"Thought:\u001b[33;1m\u001b[1;3m I need to find out Jay-Z's age\n",
"Action Input: \"Jason Sudeikis age\"\u001b[0m\u001b[32;1m\u001b[1;3m I need to find out Jay-Z's age\n",
"Action: Search\n",
"Action Input: \"How old is Jay-Z?\"\u001b[0m\u001b[36;1m\u001b[1;3m I need to find out Rafael Nadal's age\n",
"Action: Search\n",
"Action Input: \"Rafael Nadal age\"\u001b[0m\n",
"Observation: \u001b[33;1m\u001b[1;3m36 years\u001b[0m\n",
"Thought:\n",
"Action Input: \"How old is Jay-Z?\"\u001b[0m\n",
"Observation: \u001b[33;1m\u001b[1;3m53 years\u001b[0m\n",
"Thought:\u001b[38;5;200m\u001b[1;3m I now know the age of Bianca Andreescu.\n",
"Action: Calculator\n",
"Action Input: 19^0.34\u001b[0m\u001b[31;1m\u001b[1;3m I now need to calculate 25 raised to the 0.23 power.\n",
"Action: Calculator\n",
"Action Input: 25^0.23\u001b[0m\n",
"Observation: \u001b[36;1m\u001b[1;3mAnswer: 2.7212987634680084\n",
"\u001b[0m\n",
"Thought:\u001b[32;1m\u001b[1;3m I need to find out Jason Sudeikis' exact age\n",
"Thought:\u001b[32;1m\u001b[1;3m I need to find out who won the US Open men's final in 2019 and then calculate his age raised to the 0.334 power.\n",
"Action: Search\n",
"Action Input: \"Jason Sudeikis age exact\"\u001b[0m\u001b[33;1m\u001b[1;3m I need to calculate 53 raised to the 0.19 power\n",
"Action Input: \"US Open men's final 2019 winner\"\u001b[0m\n",
"Observation: \u001b[33;1m\u001b[1;3mRafael Nadal defeated Daniil Medvedev in the final, 75, 63, 57, 46, 64 to win the men's singles tennis title at the 2019 US Open. It was his fourth US ...\u001b[0m\n",
"Thought:\n",
"Observation: \u001b[33;1m\u001b[1;3m47 years\u001b[0m\n",
"Thought:\u001b[32;1m\u001b[1;3m I need to find out Max Verstappen's age\n",
"Action: Search\n",
"Action Input: \"Max Verstappen Age\"\u001b[0m\n",
"Observation: \u001b[33;1m\u001b[1;3m25 years\u001b[0m\n",
"Thought:\u001b[32;1m\u001b[1;3m I need to find out Bianca Andreescu's age.\n",
"Action: Search\n",
"Action Input: \"Bianca Andreescu age\"\u001b[0m\n",
"Observation: \u001b[33;1m\u001b[1;3m22 years\u001b[0m\n",
"Thought:\u001b[32;1m\u001b[1;3m I need to calculate 53 raised to the 0.19 power\n",
"Action: Calculator\n",
"Action Input: 53^0.19\u001b[0m\u001b[36;1m\u001b[1;3m I need to calculate 36 raised to the 0.334 power\n",
"Action: Calculator\n",
"Action Input: 36^0.334\u001b[0m\n",
"Observation: \u001b[33;1m\u001b[1;3mDaniel Jason Sudeikis. (1975-09-18) September 18, 1975 (age 47). Fairfax, Virginia, U.S. · Fort Scott Community College · Actor; comedian; producer; writer · 1997 ...\u001b[0m\n",
"Thought:\n",
"Observation: \u001b[36;1m\u001b[1;3mAnswer: 2.096651272316035\n",
"\u001b[0m\n",
"Thought:\n",
"Observation: \u001b[36;1m\u001b[1;3mAnswer: 2.12624064206896\n",
"\u001b[0m\n",
"Thought:\n",
"Observation: \u001b[36;1m\u001b[1;3mAnswer: 3.3098250249682484\n",
"\u001b[0m\n",
"Thought:\u001b[32;1m\u001b[1;3m I now have the information I need to calculate the age raised to the 0.23 power\n",
"Action Input: 53^0.19\u001b[0m\u001b[32;1m\u001b[1;3m I need to find out the age of the winner\n",
"Action: Search\n",
"Action Input: \"Rafael Nadal age\"\u001b[0m\u001b[32;1m\u001b[1;3m I need to calculate 47 raised to the 0.23 power\n",
"Action: Calculator\n",
"Action Input: 47^0.23\u001b[0m\n",
"Observation: \u001b[33;1m\u001b[1;3m36 years\u001b[0m\n",
"Thought:\u001b[32;1m\u001b[1;3m I need to calculate 25 raised to the 0.23 power\n",
"Action: Calculator\n",
"Action Input: 25^0.23\u001b[0m\n",
"Observation: \u001b[36;1m\u001b[1;3mAnswer: 2.12624064206896\n",
"\u001b[0m\n",
"Thought:\u001b[32;1m\u001b[1;3m I now know the age of Bianca Andreescu and can calculate her age raised to the 0.34 power.\n",
"Action: Calculator\n",
"Action Input: 22^0.34\u001b[0m\n",
"Observation: \u001b[36;1m\u001b[1;3mAnswer: 1.84599359907945\u001b[0m\n",
"Thought:\n",
"Observation: \u001b[36;1m\u001b[1;3mAnswer: 2.4242784855673896\n",
"\u001b[0m\n",
"Thought:\u001b[32;1m\u001b[1;3m I now know the final answer.\n",
"Final Answer: Bianca Andreescu, aged 19, won the US Open women's final in 2019. Her age raised to the 0.34 power is 2.7212987634680084.\u001b[0m\n",
"Thought:\u001b[32;1m\u001b[1;3m I now need to calculate his age raised to the 0.334 power\n",
"Action: Calculator\n",
"Action Input: 36^0.334\u001b[0m\n",
"Observation: \u001b[36;1m\u001b[1;3mAnswer: 2.8603798598506933\n",
"\u001b[0m\n",
"Thought:\u001b[32;1m\u001b[1;3m I now know the final answer\n",
"Final Answer: Jay-Z is Beyonce's husband and his age raised to the 0.19 power is 2.12624064206896.\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n",
"\u001b[32;1m\u001b[1;3m I now know the final answer\n",
"Final Answer: Jay-Z is Beyonce's husband and his age raised to the 0.19 power is 2.12624064206896.\u001b[0m\n",
"Final Answer: Max Verstappen, 25 years old, raised to the 0.23 power is 1.84599359907945.\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n",
"\n",
"Observation: \u001b[36;1m\u001b[1;3mAnswer: 3.3098250249682484\n",
"\u001b[0m\n",
"Thought:\u001b[32;1m\u001b[1;3m I now know the final answer\n",
"Final Answer: Jason Sudeikis, Olivia Wilde's boyfriend, is 47 years old and his age raised to the 0.23 power is 2.4242784855673896.\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n",
"\u001b[32;1m\u001b[1;3m I now know the final answer.\n",
"Final Answer: Bianca Andreescu won the US Open women's final in 2019 and her age raised to the 0.34 power is 2.8603798598506933.\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n",
"\u001b[32;1m\u001b[1;3m I now know the final answer\n",
"Final Answer: Rafael Nadal, aged 36, won the US Open men's final in 2019 and his age raised to the 0.334 power is 3.3098250249682484.\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n",
"\u001b[32;1m\u001b[1;3m I now know the final answer\n",
"Final Answer: Jason Sudeikis, Olivia Wilde's boyfriend, is 47 years old and his age raised to the 0.23 power is 2.4242784855673896.\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n",
"\u001b[32;1m\u001b[1;3m I now know the final answer.\n",
"Final Answer: Max Emilian Verstappen, who is 25 years old, won the most recent Formula 1 Grand Prix and his age raised to the 0.23 power is 2.096651272316035.\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n",
"Concurrent executed in 25.06 seconds.\n"
"Concurrent executed in 12.38 seconds.\n"
]
}
],
@ -316,12 +306,10 @@
" # To make async requests in Tools more efficient, you can pass in your own aiohttp.ClientSession, \n",
" # but you must manually close the client session at the end of your program/event loop\n",
" aiosession = ClientSession()\n",
" colors = [\"blue\", \"green\", \"red\", \"pink\", \"yellow\"]\n",
" for color in colors:\n",
" # Use a custom CallbackManager to print in different colors.\n",
" manager = CallbackManager([StdOutCallbackHandler(color=color)])\n",
" for _ in questions:\n",
" manager = CallbackManager([StdOutCallbackHandler()])\n",
" llm = OpenAI(temperature=0, callback_manager=manager)\n",
" async_tools = load_tools([\"llm-math\", \"serpapi\"], llm=llm, aiosession=aiosession)\n",
" async_tools = load_tools([\"llm-math\", \"serpapi\"], llm=llm, aiosession=aiosession, callback_manager=manager)\n",
" agents.append(\n",
" initialize_agent(async_tools, llm, agent=\"zero-shot-react-description\", verbose=True, callback_manager=manager)\n",
" )\n",
@ -415,7 +403,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.9"
"version": "3.9.1"
}
},
"nbformat": 4,

View File

@ -42,7 +42,7 @@
},
{
"cell_type": "code",
"execution_count": 1,
"execution_count": 23,
"id": "9af9734e",
"metadata": {},
"outputs": [],
@ -53,7 +53,7 @@
},
{
"cell_type": "code",
"execution_count": 2,
"execution_count": 24,
"id": "becda2a1",
"metadata": {},
"outputs": [],
@ -70,7 +70,7 @@
},
{
"cell_type": "code",
"execution_count": 3,
"execution_count": 25,
"id": "339b1bb8",
"metadata": {},
"outputs": [],
@ -99,7 +99,7 @@
},
{
"cell_type": "code",
"execution_count": 4,
"execution_count": 26,
"id": "e21d2098",
"metadata": {},
"outputs": [
@ -145,7 +145,7 @@
},
{
"cell_type": "code",
"execution_count": 5,
"execution_count": 27,
"id": "9b1cc2a2",
"metadata": {},
"outputs": [],
@ -155,7 +155,7 @@
},
{
"cell_type": "code",
"execution_count": 7,
"execution_count": 28,
"id": "e4f5092f",
"metadata": {},
"outputs": [],
@ -166,7 +166,7 @@
},
{
"cell_type": "code",
"execution_count": 8,
"execution_count": 29,
"id": "490604e9",
"metadata": {},
"outputs": [],
@ -176,7 +176,7 @@
},
{
"cell_type": "code",
"execution_count": 9,
"execution_count": 31,
"id": "653b1617",
"metadata": {},
"outputs": [
@ -187,16 +187,12 @@
"\n",
"\n",
"\u001b[1m> Entering new AgentExecutor chain...\u001b[0m\n",
"\u001b[32;1m\u001b[1;3mThought: I need to find out how many people live in Canada\n",
"\u001b[32;1m\u001b[1;3mThought: I need to find out the population of Canada\n",
"Action: Search\n",
"Action Input: Population of Canada\u001b[0m\n",
"Observation: \u001b[36;1m\u001b[1;3mCanada is a country in North America. Its ten provinces and three territories extend from the Atlantic Ocean to the Pacific Ocean and northward into the Arctic Ocean, covering over 9.98 million square kilometres, making it the world's second-largest country by total area.\u001b[0m\n",
"Thought:\u001b[32;1m\u001b[1;3m I need to find out the population of Canada\n",
"Action: Search\n",
"Action Input: Population of Canada\u001b[0m\n",
"Observation: \u001b[36;1m\u001b[1;3mCanada is a country in North America. Its ten provinces and three territories extend from the Atlantic Ocean to the Pacific Ocean and northward into the Arctic Ocean, covering over 9.98 million square kilometres, making it the world's second-largest country by total area.\u001b[0m\n",
"Thought:\u001b[32;1m\u001b[1;3m I now know the population of Canada\n",
"Final Answer: Arrr, Canada be home to over 37 million people!\u001b[0m\n",
"Action Input: Population of Canada 2023\u001b[0m\n",
"Observation: \u001b[36;1m\u001b[1;3mThe current population of Canada is 38,610,447 as of Saturday, February 18, 2023, based on Worldometer elaboration of the latest United Nations data. Canada 2020 population is estimated at 37,742,154 people at mid year according to UN data.\u001b[0m\n",
"Thought:\u001b[32;1m\u001b[1;3m I now know the final answer\n",
"Final Answer: Arrr, Canada be havin' 38,610,447 scallywags livin' there as of 2023!\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
]
@ -204,16 +200,16 @@
{
"data": {
"text/plain": [
"'Arrr, Canada be home to over 37 million people!'"
"\"Arrr, Canada be havin' 38,610,447 scallywags livin' there as of 2023!\""
]
},
"execution_count": 9,
"execution_count": 31,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"agent_executor.run(\"How many people live in canada?\")"
"agent_executor.run(\"How many people live in canada as of 2023?\")"
]
},
{
@ -227,7 +223,7 @@
},
{
"cell_type": "code",
"execution_count": 20,
"execution_count": 32,
"id": "43dbfa2f",
"metadata": {},
"outputs": [],
@ -248,7 +244,7 @@
},
{
"cell_type": "code",
"execution_count": 21,
"execution_count": 33,
"id": "0f087313",
"metadata": {},
"outputs": [],
@ -258,7 +254,7 @@
},
{
"cell_type": "code",
"execution_count": 22,
"execution_count": 34,
"id": "92c75a10",
"metadata": {},
"outputs": [],
@ -268,7 +264,7 @@
},
{
"cell_type": "code",
"execution_count": 23,
"execution_count": 35,
"id": "ac5b83bf",
"metadata": {},
"outputs": [],
@ -278,7 +274,7 @@
},
{
"cell_type": "code",
"execution_count": 24,
"execution_count": 36,
"id": "c960e4ff",
"metadata": {},
"outputs": [
@ -289,56 +285,29 @@
"\n",
"\n",
"\u001b[1m> Entering new AgentExecutor chain...\u001b[0m\n",
"\u001b[32;1m\u001b[1;3mThought: I should look up the population of Canada.\n",
"\u001b[32;1m\u001b[1;3mThought: I need to find out the population of Canada in 2023.\n",
"Action: Search\n",
"Action Input: Population of Canada\u001b[0m\n",
"Observation: \u001b[36;1m\u001b[1;3mCanada is a country in North America. Its ten provinces and three territories extend from the Atlantic Ocean to the Pacific Ocean and northward into the Arctic Ocean, covering over 9.98 million square kilometres, making it the world's second-largest country by total area.\u001b[0m\n",
"Thought:\u001b[32;1m\u001b[1;3m I should look for the population of Canada.\n",
"Action: Search\n",
"Action Input: Population of Canada\u001b[0m\n",
"Observation: \u001b[36;1m\u001b[1;3mCanada is a country in North America. Its ten provinces and three territories extend from the Atlantic Ocean to the Pacific Ocean and northward into the Arctic Ocean, covering over 9.98 million square kilometres, making it the world's second-largest country by total area.\u001b[0m\n",
"Thought:\u001b[32;1m\u001b[1;3m I should look for the population of Canada.\n",
"Action: Search\n",
"Action Input: Population of Canada\u001b[0m\n",
"Observation: \u001b[36;1m\u001b[1;3mCanada is a country in North America. Its ten provinces and three territories extend from the Atlantic Ocean to the Pacific Ocean and northward into the Arctic Ocean, covering over 9.98 million square kilometres, making it the world's second-largest country by total area.\u001b[0m\n",
"Thought:\u001b[32;1m\u001b[1;3m I should look for the population of Canada.\n",
"Action: Search\n",
"Action Input: Population of Canada\u001b[0m\n",
"Observation: \u001b[36;1m\u001b[1;3mCanada is a country in North America. Its ten provinces and three territories extend from the Atlantic Ocean to the Pacific Ocean and northward into the Arctic Ocean, covering over 9.98 million square kilometres, making it the world's second-largest country by total area.\u001b[0m\n",
"Thought:\u001b[32;1m\u001b[1;3m I should look for the population of Canada.\n",
"Action: Search\n",
"Action Input: Population of Canada\u001b[0m\n",
"Observation: \u001b[36;1m\u001b[1;3mCanada is a country in North America. Its ten provinces and three territories extend from the Atlantic Ocean to the Pacific Ocean and northward into the Arctic Ocean, covering over 9.98 million square kilometres, making it the world's second-largest country by total area.\u001b[0m\n",
"Thought:\u001b[32;1m\u001b[1;3m I should look for the population of Canada.\n",
"Action: Search\n",
"Action Input: Population of Canada\u001b[0m\n",
"Observation: \u001b[36;1m\u001b[1;3mCanada is a country in North America. Its ten provinces and three territories extend from the Atlantic Ocean to the Pacific Ocean and northward into the Arctic Ocean, covering over 9.98 million square kilometres, making it the world's second-largest country by total area.\u001b[0m\n",
"Thought:\u001b[32;1m\u001b[1;3m I should look for the population of Canada.\n",
"Action: Search\n",
"Action Input: Population of Canada\u001b[0m\n",
"Observation: \u001b[36;1m\u001b[1;3mCanada is a country in North America. Its ten provinces and three territories extend from the Atlantic Ocean to the Pacific Ocean and northward into the Arctic Ocean, covering over 9.98 million square kilometres, making it the world's second-largest country by total area.\u001b[0m\n",
"Thought:\u001b[32;1m\u001b[1;3m I should look for the population of Canada.\n",
"Action: Search\n",
"Action Input: Population of Canada\u001b[0m\n",
"Observation: \u001b[36;1m\u001b[1;3mCanada is a country in North America. Its ten provinces and three territories extend from the Atlantic Ocean to the Pacific Ocean and northward into the Arctic Ocean, covering over 9.98 million square kilometres, making it the world's second-largest country by total area.\u001b[0m\n",
"Thought:\u001b[32;1m\u001b[1;3m I now know the population of Canada.\n",
"Final Answer: La popolazione del Canada è di circa 37 milioni di persone.\u001b[0m\n",
"\u001b[1m> Finished AgentExecutor chain.\u001b[0m\n"
"Action Input: Population of Canada in 2023\u001b[0m\n",
"Observation: \u001b[36;1m\u001b[1;3mThe current population of Canada is 38,610,447 as of Saturday, February 18, 2023, based on Worldometer elaboration of the latest United Nations data. Canada 2020 population is estimated at 37,742,154 people at mid year according to UN data.\u001b[0m\n",
"Thought:\u001b[32;1m\u001b[1;3m I now know the final answer.\n",
"Final Answer: La popolazione del Canada nel 2023 è stimata in 38.610.447 persone.\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
]
},
{
"data": {
"text/plain": [
"'La popolazione del Canada è di circa 37 milioni di persone.'"
"'La popolazione del Canada nel 2023 è stimata in 38.610.447 persone.'"
]
},
"execution_count": 24,
"execution_count": 36,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"agent_executor.run(input=\"How many people live in canada?\", language=\"italian\")"
"agent_executor.run(input=\"How many people live in canada as of 2023?\", language=\"italian\")"
]
},
{
@ -376,7 +345,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.9"
"version": "3.9.1"
},
"vscode": {
"interpreter": {

View File

@ -7,31 +7,27 @@
"source": [
"# Defining Custom Tools\n",
"\n",
"When constructing your own agent, you will need to provide it with a list of Tools that it can use. A Tool is defined as below.\n",
"When constructing your own agent, you will need to provide it with a list of Tools that it can use. Besides the actual function that is called, the Tool consists of several components:\n",
"\n",
"```python\n",
"@dataclass \n",
"class Tool:\n",
" \"\"\"Interface for tools.\"\"\"\n",
"- name (str), is required\n",
"- description (str), is optional\n",
"- return_direct (bool), defaults to False\n",
"\n",
" name: str\n",
" func: Callable[[str], str]\n",
" description: Optional[str] = None\n",
" return_direct: bool = True\n",
"```\n",
"The function that should be called when the tool is selected should take as input a single string and return a single string.\n",
"\n",
"The two required components of a Tool are the name and then the tool itself. A tool description is optional, as it is needed for some agents but not all. You can create these tools directly, but we also provide a decorator to easily convert any function into a tool."
"There are two ways to define a tool, we will cover both in the example below."
]
},
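To make the component list concrete, here is a minimal sketch of constructing such a Tool; `weather_lookup` and its description are hypothetical stand-ins, and the keyword names follow the components listed above:

```python
from langchain.agents import Tool


def weather_lookup(query: str) -> str:
    """Hypothetical helper: takes a single string, returns a single string."""
    return f"Weather report for: {query}"


weather_tool = Tool(
    name="Weather",  # required
    func=weather_lookup,  # called when the agent selects this tool
    description="useful for when you need to answer questions about the weather",  # optional
    return_direct=False,  # defaults to False
)
```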
{
"cell_type": "code",
"execution_count": 1,
"execution_count": 2,
"id": "1aaba18c",
"metadata": {},
"outputs": [],
"source": [
"# Import things that are needed generically\n",
"from langchain.agents import initialize_agent, Tool\n",
"from langchain.tools import BaseTool\n",
"from langchain.llms import OpenAI\n",
"from langchain import LLMMathChain, SerpAPIWrapper"
]
@ -46,7 +42,7 @@
},
{
"cell_type": "code",
"execution_count": 2,
"execution_count": 3,
"id": "36ed392e",
"metadata": {},
"outputs": [],
@ -60,7 +56,17 @@
"metadata": {},
"source": [
"## Completely New Tools \n",
"First, we show how to create completely new tools from scratch."
"First, we show how to create completely new tools from scratch.\n",
"\n",
"There are two ways to do this: either by using the Tool dataclass, or by subclassing the BaseTool class."
]
},
{
"cell_type": "markdown",
"id": "b63fcc3b",
"metadata": {},
"source": [
"### Tool dataclass"
]
},
{
@ -89,7 +95,7 @@
},
{
"cell_type": "code",
"execution_count": 6,
"execution_count": 4,
"id": "5b93047d",
"metadata": {},
"outputs": [],
@ -101,7 +107,7 @@
},
{
"cell_type": "code",
"execution_count": 7,
"execution_count": 5,
"id": "6f96a891",
"metadata": {},
"outputs": [
@ -112,45 +118,161 @@
"\n",
"\n",
"\u001b[1m> Entering new AgentExecutor chain...\u001b[0m\n",
"\u001b[32;1m\u001b[1;3m I need to find out who Olivia Wilde's boyfriend is and then calculate his age raised to the 0.23 power.\n",
"\u001b[32;1m\u001b[1;3m I need to find out who Leo DiCaprio's girlfriend is and then calculate her age raised to the 0.43 power.\n",
"Action: Search\n",
"Action Input: Olivia Wilde's boyfriend\u001b[0m\n",
"Observation: \u001b[36;1m\u001b[1;3mHarry Styles\u001b[0m\n",
"Thought:\u001b[32;1m\u001b[1;3m I need to calculate Harry Styles' age raised to the 0.23 power.\n",
"Action Input: \"Leo DiCaprio girlfriend\"\u001b[0m\n",
"Observation: \u001b[36;1m\u001b[1;3mCamila Morrone\u001b[0m\n",
"Thought:\u001b[32;1m\u001b[1;3m I now need to calculate her age raised to the 0.43 power\n",
"Action: Calculator\n",
"Action Input: 23^0.23\u001b[0m\n",
"Action Input: 22^0.43\u001b[0m\n",
"\n",
"\u001b[1m> Entering new LLMMathChain chain...\u001b[0m\n",
"23^0.23\u001b[32;1m\u001b[1;3m\n",
"22^0.43\u001b[32;1m\u001b[1;3m\n",
"```python\n",
"import math\n",
"print(math.pow(23, 0.23))\n",
"print(math.pow(22, 0.43))\n",
"```\n",
"\u001b[0m\n",
"Answer: \u001b[33;1m\u001b[1;3m2.0568252837687546\n",
"Answer: \u001b[33;1m\u001b[1;3m3.777824273683966\n",
"\u001b[0m\n",
"\u001b[1m> Finished LLMMathChain chain.\u001b[0m\n",
"\u001b[1m> Finished chain.\u001b[0m\n",
"\n",
"Observation: \u001b[33;1m\u001b[1;3mAnswer: 2.0568252837687546\n",
"Observation: \u001b[33;1m\u001b[1;3mAnswer: 3.777824273683966\n",
"\u001b[0m\n",
"Thought:\u001b[32;1m\u001b[1;3m I now know the final answer.\n",
"Final Answer: Harry Styles' age raised to the 0.23 power is 2.0568252837687546.\u001b[0m\n",
"\u001b[1m> Finished AgentExecutor chain.\u001b[0m\n"
"Thought:\u001b[32;1m\u001b[1;3m I now know the final answer\n",
"Final Answer: Camila Morrone's age raised to the 0.43 power is 3.777824273683966.\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
]
},
{
"data": {
"text/plain": [
"\"Harry Styles' age raised to the 0.23 power is 2.0568252837687546.\""
"\"Camila Morrone's age raised to the 0.43 power is 3.777824273683966.\""
]
},
"execution_count": 7,
"execution_count": 5,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"agent.run(\"Who is Olivia Wilde's boyfriend? What is his current age raised to the 0.23 power?\")"
"agent.run(\"Who is Leo DiCaprio's girlfriend? What is her current age raised to the 0.43 power?\")"
]
},
{
"cell_type": "markdown",
"id": "6f12eaf0",
"metadata": {},
"source": [
"### Subclassing the BaseTool class"
]
},
{
"cell_type": "code",
"execution_count": 8,
"id": "c58a7c40",
"metadata": {},
"outputs": [],
"source": [
"class CustomSearchTool(BaseTool):\n",
" name = \"Search\"\n",
" description = \"useful for when you need to answer questions about current events\"\n",
"\n",
" def _run(self, query: str) -> str:\n",
" \"\"\"Use the tool.\"\"\"\n",
" return search.run(query)\n",
" \n",
" async def _arun(self, query: str) -> str:\n",
" \"\"\"Use the tool asynchronously.\"\"\"\n",
" raise NotImplementedError(\"BingSearchRun does not support async\")\n",
" \n",
"class CustomCalculatorTool(BaseTool):\n",
" name = \"Calculator\"\n",
" description = \"useful for when you need to answer questions about math\"\n",
"\n",
" def _run(self, query: str) -> str:\n",
" \"\"\"Use the tool.\"\"\"\n",
" return llm_math_chain.run(query)\n",
" \n",
" async def _arun(self, query: str) -> str:\n",
" \"\"\"Use the tool asynchronously.\"\"\"\n",
" raise NotImplementedError(\"BingSearchRun does not support async\")"
]
},
{
"cell_type": "code",
"execution_count": 9,
"id": "3318a46f",
"metadata": {},
"outputs": [],
"source": [
"tools = [CustomSearchTool(), CustomCalculatorTool()]"
]
},
{
"cell_type": "code",
"execution_count": 10,
"id": "ee2d0f3a",
"metadata": {},
"outputs": [],
"source": [
"agent = initialize_agent(tools, llm, agent=\"zero-shot-react-description\", verbose=True)"
]
},
{
"cell_type": "code",
"execution_count": 11,
"id": "6a2cebbf",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"\u001b[1m> Entering new AgentExecutor chain...\u001b[0m\n",
"\u001b[32;1m\u001b[1;3m I need to find out who Leo DiCaprio's girlfriend is and then calculate her age raised to the 0.43 power.\n",
"Action: Search\n",
"Action Input: \"Leo DiCaprio girlfriend\"\u001b[0m\n",
"Observation: \u001b[36;1m\u001b[1;3mCamila Morrone\u001b[0m\n",
"Thought:\u001b[32;1m\u001b[1;3m I now need to calculate her age raised to the 0.43 power\n",
"Action: Calculator\n",
"Action Input: 22^0.43\u001b[0m\n",
"\n",
"\u001b[1m> Entering new LLMMathChain chain...\u001b[0m\n",
"22^0.43\u001b[32;1m\u001b[1;3m\n",
"```python\n",
"import math\n",
"print(math.pow(22, 0.43))\n",
"```\n",
"\u001b[0m\n",
"Answer: \u001b[33;1m\u001b[1;3m3.777824273683966\n",
"\u001b[0m\n",
"\u001b[1m> Finished chain.\u001b[0m\n",
"\n",
"Observation: \u001b[33;1m\u001b[1;3mAnswer: 3.777824273683966\n",
"\u001b[0m\n",
"Thought:\u001b[32;1m\u001b[1;3m I now know the final answer\n",
"Final Answer: Camila Morrone's age raised to the 0.43 power is 3.777824273683966.\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
]
},
{
"data": {
"text/plain": [
"\"Camila Morrone's age raised to the 0.43 power is 3.777824273683966.\""
]
},
"execution_count": 11,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"agent.run(\"Who is Leo DiCaprio's girlfriend? What is her current age raised to the 0.43 power?\")"
]
},
{
@ -165,7 +287,7 @@
},
{
"cell_type": "code",
"execution_count": 1,
"execution_count": 4,
"id": "8f15307d",
"metadata": {},
"outputs": [],
@ -180,17 +302,17 @@
},
{
"cell_type": "code",
"execution_count": 2,
"execution_count": 5,
"id": "0a23b91b",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"Tool(name='search_api', func=<function search_api at 0x10dad7d90>, description='search_api(query: str) -> str - Searches the API for the query.', return_direct=False)"
"Tool(name='search_api', description='search_api(query: str) -> str - Searches the API for the query.', return_direct=False, verbose=False, callback_manager=<langchain.callbacks.shared.SharedCallbackManager object at 0x1184e0cd0>, func=<function search_api at 0x1635f8700>, coroutine=None)"
]
},
"execution_count": 2,
"execution_count": 5,
"metadata": {},
"output_type": "execute_result"
}
@ -209,7 +331,7 @@
},
{
"cell_type": "code",
"execution_count": 3,
"execution_count": 6,
"id": "28cdf04d",
"metadata": {},
"outputs": [],
@ -222,17 +344,17 @@
},
{
"cell_type": "code",
"execution_count": 4,
"execution_count": 7,
"id": "1085a4bd",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"Tool(name='search', func=<function search_api at 0x112301bd0>, description='search(query: str) -> str - Searches the API for the query.', return_direct=True)"
"Tool(name='search', description='search(query: str) -> str - Searches the API for the query.', return_direct=True, verbose=False, callback_manager=<langchain.callbacks.shared.SharedCallbackManager object at 0x1184e0cd0>, func=<function search_api at 0x1635f8670>, coroutine=None)"
]
},
"execution_count": 4,
"execution_count": 7,
"metadata": {},
"output_type": "execute_result"
}
@ -304,28 +426,29 @@
"\n",
"\n",
"\u001b[1m> Entering new AgentExecutor chain...\u001b[0m\n",
"\u001b[32;1m\u001b[1;3m I need to find out who Olivia Wilde's boyfriend is and then calculate his age raised to the 0.23 power.\n",
"\u001b[32;1m\u001b[1;3m I need to find out who Leo DiCaprio's girlfriend is and then calculate her age raised to the 0.43 power.\n",
"Action: Google Search\n",
"Action Input: \"Olivia Wilde boyfriend\"\u001b[0m\n",
"Observation: \u001b[36;1m\u001b[1;3mHarry Styles\u001b[0m\n",
"Thought:\u001b[32;1m\u001b[1;3m I need to find out Harry Styles' age\n",
"Action Input: \"Leo DiCaprio girlfriend\"\u001b[0m\n",
"Observation: \u001b[36;1m\u001b[1;3mCamila Morrone\u001b[0m\n",
"Thought:\u001b[32;1m\u001b[1;3m I need to find out Camila Morrone's age\n",
"Action: Google Search\n",
"Action Input: \"Harry Styles age\"\u001b[0m\n",
"Observation: \u001b[36;1m\u001b[1;3m28 years\u001b[0m\n",
"Thought:\u001b[32;1m\u001b[1;3m I need to calculate 28 raised to the 0.23 power\n",
"Action Input: \"Camila Morrone age\"\u001b[0m\n",
"Observation: \u001b[36;1m\u001b[1;3m25 years\u001b[0m\n",
"Thought:\u001b[32;1m\u001b[1;3m I need to calculate 25 raised to the 0.43 power\n",
"Action: Calculator\n",
"Action Input: 28^0.23\u001b[0m\n",
"Observation: \u001b[33;1m\u001b[1;3mAnswer: 2.1520202182226886\n",
"Action Input: 25^0.43\u001b[0m\n",
"Observation: \u001b[33;1m\u001b[1;3mAnswer: 3.991298452658078\n",
"\u001b[0m\n",
"Thought:\u001b[32;1m\u001b[1;3m I now know the final answer\n",
"Final Answer: Harry Styles is Olivia Wilde's boyfriend and his current age raised to the 0.23 power is 2.1520202182226886.\u001b[0m\n",
"\u001b[1m> Finished AgentExecutor chain.\u001b[0m\n"
"Final Answer: Camila Morrone is Leo DiCaprio's girlfriend and her current age raised to the 0.43 power is 3.991298452658078.\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
]
},
{
"data": {
"text/plain": [
"\"Harry Styles is Olivia Wilde's boyfriend and his current age raised to the 0.23 power is 2.1520202182226886.\""
"\"Camila Morrone is Leo DiCaprio's girlfriend and her current age raised to the 0.43 power is 3.991298452658078.\""
]
},
"execution_count": 12,
@ -334,7 +457,7 @@
}
],
"source": [
"agent.run(\"Who is Olivia Wilde's boyfriend? What is his current age raised to the 0.23 power?\")"
"agent.run(\"Who is Leo DiCaprio's girlfriend? What is her current age raised to the 0.43 power?\")"
]
},
{
@ -354,7 +477,7 @@
},
{
"cell_type": "code",
"execution_count": 7,
"execution_count": 13,
"id": "3450512e",
"metadata": {},
"outputs": [],
@ -382,7 +505,7 @@
},
{
"cell_type": "code",
"execution_count": 8,
"execution_count": 14,
"id": "4b9a7849",
"metadata": {},
"outputs": [
@ -409,7 +532,7 @@
"\"'All I Want For Christmas Is You' by Mariah Carey.\""
]
},
"execution_count": 8,
"execution_count": 14,
"metadata": {},
"output_type": "execute_result"
}
@ -429,7 +552,7 @@
},
{
"cell_type": "code",
"execution_count": 3,
"execution_count": 15,
"id": "3bb6185f",
"metadata": {},
"outputs": [],
@ -447,7 +570,7 @@
},
{
"cell_type": "code",
"execution_count": 4,
"execution_count": 16,
"id": "113ddb84",
"metadata": {},
"outputs": [],
@ -458,7 +581,7 @@
},
{
"cell_type": "code",
"execution_count": 5,
"execution_count": 17,
"id": "582439a6",
"metadata": {},
"outputs": [
@ -484,7 +607,7 @@
"'Answer: 1.2599210498948732'"
]
},
"execution_count": 5,
"execution_count": 17,
"metadata": {},
"output_type": "execute_result"
}
@ -518,7 +641,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.9"
"version": "3.9.1"
},
"vscode": {
"interpreter": {

View File

@ -32,7 +32,7 @@
},
{
"cell_type": "code",
"execution_count": 3,
"execution_count": 2,
"id": "36ed392e",
"metadata": {},
"outputs": [],
@ -51,7 +51,7 @@
},
{
"cell_type": "code",
"execution_count": 4,
"execution_count": 3,
"id": "6abf3b08",
"metadata": {},
"outputs": [],
@ -72,23 +72,28 @@
"\n",
"\n",
"\u001b[1m> Entering new AgentExecutor chain...\u001b[0m\n",
"\u001b[32;1m\u001b[1;3m I should look up Olivia Wilde's boyfriend's age\n",
"\u001b[32;1m\u001b[1;3m I should look up who Leo DiCaprio is dating\n",
"Action: Search\n",
"Action Input: \"Olivia Wilde's boyfriend's age\"\u001b[0m\n",
"Observation: \u001b[36;1m\u001b[1;3m28 years\u001b[0m\n",
"Thought:\u001b[32;1m\u001b[1;3m I should use the calculator to raise that number to the 0.23 power\n",
"Action Input: \"Leo DiCaprio girlfriend\"\u001b[0m\n",
"Observation: \u001b[36;1m\u001b[1;3mCamila Morrone\u001b[0m\n",
"Thought:\u001b[32;1m\u001b[1;3m I should look up how old Camila Morrone is\n",
"Action: Search\n",
"Action Input: \"Camila Morrone age\"\u001b[0m\n",
"Observation: \u001b[36;1m\u001b[1;3m25 years\u001b[0m\n",
"Thought:\u001b[32;1m\u001b[1;3m I should calculate what 25 years raised to the 0.43 power is\n",
"Action: Calculator\n",
"Action Input: 28^0.23\u001b[0m\n",
"Observation: \u001b[33;1m\u001b[1;3mAnswer: 2.1520202182226886\n",
"Action Input: 25^0.43\u001b[0m\n",
"Observation: \u001b[33;1m\u001b[1;3mAnswer: 3.991298452658078\n",
"\u001b[0m\n",
"Thought:\u001b[32;1m\u001b[1;3m I now know the final answer\n",
"Final Answer: 2.1520202182226886\u001b[0m\n",
"\u001b[1m> Finished AgentExecutor chain.\u001b[0m\n"
"Final Answer: Camila Morrone is Leo DiCaprio's girlfriend and she is 3.991298452658078 years old.\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
]
}
],
"source": [
"response = agent({\"input\":\"How old is Olivia Wilde's boyfriend? What is that number raised to the 0.23 power?\"})"
"response = agent({\"input\":\"Who is Leo DiCaprio's girlfriend? What is her current age raised to the 0.43 power?\"})"
]
},
{
@ -101,7 +106,7 @@
"name": "stdout",
"output_type": "stream",
"text": [
"[(AgentAction(tool='Search', tool_input=\"Olivia Wilde's boyfriend's age\", log=' I should look up Olivia Wilde\\'s boyfriend\\'s age\\nAction: Search\\nAction Input: \"Olivia Wilde\\'s boyfriend\\'s age\"'), '28 years'), (AgentAction(tool='Calculator', tool_input='28^0.23', log=' I should use the calculator to raise that number to the 0.23 power\\nAction: Calculator\\nAction Input: 28^0.23'), 'Answer: 2.1520202182226886\\n')]\n"
"[(AgentAction(tool='Search', tool_input='Leo DiCaprio girlfriend', log=' I should look up who Leo DiCaprio is dating\\nAction: Search\\nAction Input: \"Leo DiCaprio girlfriend\"'), 'Camila Morrone'), (AgentAction(tool='Search', tool_input='Camila Morrone age', log=' I should look up how old Camila Morrone is\\nAction: Search\\nAction Input: \"Camila Morrone age\"'), '25 years'), (AgentAction(tool='Calculator', tool_input='25^0.43', log=' I should calculate what 25 years raised to the 0.43 power is\\nAction: Calculator\\nAction Input: 25^0.43'), 'Answer: 3.991298452658078\\n')]\n"
]
}
],
@ -124,18 +129,26 @@
" [\n",
" [\n",
" \"Search\",\n",
" \"Olivia Wilde's boyfriend's age\",\n",
" \" I should look up Olivia Wilde's boyfriend's age\\nAction: Search\\nAction Input: \\\"Olivia Wilde's boyfriend's age\\\"\"\n",
" \"Leo DiCaprio girlfriend\",\n",
" \" I should look up who Leo DiCaprio is dating\\nAction: Search\\nAction Input: \\\"Leo DiCaprio girlfriend\\\"\"\n",
" ],\n",
" \"28 years\"\n",
" \"Camila Morrone\"\n",
" ],\n",
" [\n",
" [\n",
" \"Search\",\n",
" \"Camila Morrone age\",\n",
" \" I should look up how old Camila Morrone is\\nAction: Search\\nAction Input: \\\"Camila Morrone age\\\"\"\n",
" ],\n",
" \"25 years\"\n",
" ],\n",
" [\n",
" [\n",
" \"Calculator\",\n",
" \"28^0.23\",\n",
" \" I should use the calculator to raise that number to the 0.23 power\\nAction: Calculator\\nAction Input: 28^0.23\"\n",
" \"25^0.43\",\n",
" \" I should calculate what 25 years raised to the 0.43 power is\\nAction: Calculator\\nAction Input: 25^0.43\"\n",
" ],\n",
" \"Answer: 2.1520202182226886\\n\"\n",
" \"Answer: 3.991298452658078\\n\"\n",
" ]\n",
"]\n"
]
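The nested list above is consistent with JSON-dumping `response["intermediate_steps"]`. A sketch of how such a dump is typically produced, assuming the agent was created with `return_intermediate_steps=True`:

```python
import json

from langchain.agents import initialize_agent, load_tools
from langchain.llms import OpenAI

llm = OpenAI(temperature=0)
tools = load_tools(["serpapi", "llm-math"], llm=llm)

# return_intermediate_steps=True makes the executor return every
# (AgentAction, observation) pair alongside the final output.
agent = initialize_agent(
    tools,
    llm,
    agent="zero-shot-react-description",
    verbose=True,
    return_intermediate_steps=True,
)

response = agent(
    {"input": "Who is Leo DiCaprio's girlfriend? "
              "What is her current age raised to the 0.43 power?"}
)

# AgentAction is a NamedTuple, so it serializes as a plain JSON list.
print(json.dumps(response["intermediate_steps"], indent=2))
```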
@ -165,7 +178,7 @@
],
"metadata": {
"kernelspec": {
"display_name": "Python 3.9.0 64-bit ('llm-env')",
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
@ -179,7 +192,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.0"
"version": "3.9.1"
},
"vscode": {
"interpreter": {

View File

@ -12,10 +12,17 @@
},
{
"cell_type": "code",
"execution_count": 2,
"execution_count": 1,
"id": "bd4450a2",
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"No `_type` key found, defaulting to `prompt`.\n"
]
},
{
"name": "stdout",
"output_type": "stream",
@ -40,7 +47,7 @@
"'Manacor, Mallorca, Spain.'"
]
},
"execution_count": 2,
"execution_count": 1,
"metadata": {},
"output_type": "execute_result"
}
@ -63,7 +70,6 @@
]
},
{
"attachments": {},
"cell_type": "markdown",
"id": "3aede965",
"metadata": {},
@ -75,13 +81,29 @@
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 2,
"id": "e679f7b6",
"metadata": {},
"outputs": [],
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"No `_type` key found, defaulting to `prompt`.\n"
]
}
],
"source": [
"self_ask_with_search = initialize_agent(tools, llm, agent_path=\"lc@2826ef9e8acdf88465e1e5fc8a7bf59e0f9d0a85://agents/self-ask-with-search/agent.json\", verbose=True)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "9d3d6697",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
@ -100,7 +122,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.9"
"version": "3.9.1"
}
},
"nbformat": 4,

View File

@ -82,7 +82,7 @@
{
"cell_type": "code",
"execution_count": null,
"id": "ebde3ea6",
"id": "47653ac6",
"metadata": {},
"outputs": [],
"source": [
@ -99,7 +99,7 @@
},
{
"cell_type": "code",
"execution_count": 6,
"execution_count": 7,
"id": "fca094af",
"metadata": {},
"outputs": [],
@ -109,7 +109,7 @@
},
{
"cell_type": "code",
"execution_count": 7,
"execution_count": 8,
"id": "0fd3ef0a",
"metadata": {},
"outputs": [
@ -123,13 +123,14 @@
"\u001b[32;1m\u001b[1;3m I need to use the Jester tool\n",
"Action: Jester\n",
"Action Input: foo\u001b[0m\n",
"Observation: Jester is not a valid tool, try another one.\n",
"Thought:\u001b[32;1m\u001b[1;3m I should try again\n",
"Observation: foo is not a valid tool, try another one.\n",
"\u001b[32;1m\u001b[1;3m I should try Jester again\n",
"Action: Jester\n",
"Action Input: foo\u001b[0m\n",
"Observation: Jester is not a valid tool, try another one.\n",
"Thought:\n",
"\u001b[1m> Finished AgentExecutor chain.\u001b[0m\n"
"Observation: foo is not a valid tool, try another one.\n",
"\u001b[32;1m\u001b[1;3m\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
]
},
{
@ -138,7 +139,7 @@
"'Agent stopped due to max iterations.'"
]
},
"execution_count": 7,
"execution_count": 8,
"metadata": {},
"output_type": "execute_result"
}
@ -157,7 +158,7 @@
},
{
"cell_type": "code",
"execution_count": 8,
"execution_count": 9,
"id": "3cc521bb",
"metadata": {},
"outputs": [],
@ -167,7 +168,7 @@
},
{
"cell_type": "code",
"execution_count": 9,
"execution_count": 10,
"id": "1618d316",
"metadata": {},
"outputs": [
@ -181,22 +182,24 @@
"\u001b[32;1m\u001b[1;3m I need to use the Jester tool\n",
"Action: Jester\n",
"Action Input: foo\u001b[0m\n",
"Observation: Jester is not a valid tool, try another one.\n",
"Thought:\u001b[32;1m\u001b[1;3m I should try again\n",
"Observation: foo is not a valid tool, try another one.\n",
"\u001b[32;1m\u001b[1;3m I should try Jester again\n",
"Action: Jester\n",
"Action Input: foo\u001b[0m\n",
"Observation: Jester is not a valid tool, try another one.\n",
"Thought:\n",
"\u001b[1m> Finished AgentExecutor chain.\u001b[0m\n"
"Observation: foo is not a valid tool, try another one.\n",
"\u001b[32;1m\u001b[1;3m\n",
"Final Answer: Jester is the tool to use for this question.\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
]
},
{
"data": {
"text/plain": [
"'Jester is not a valid tool, try another one.'"
"'Jester is the tool to use for this question.'"
]
},
"execution_count": 9,
"execution_count": 10,
"metadata": {},
"output_type": "execute_result"
}
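The two traces differ in how the run ends: the first stops with "Agent stopped due to max iterations.", while the second generates a best-effort final answer from what it has seen. A sketch of the two configurations that plausibly produce this difference, assuming the `max_iterations` and `early_stopping_method` parameters of `initialize_agent`; the tool setup is illustrative:

```python
from langchain.agents import initialize_agent, load_tools
from langchain.llms import OpenAI

llm = OpenAI(temperature=0)
tools = load_tools(["serpapi"], llm=llm)

# Hard cap: after two iterations the executor gives up and returns
# the fixed string "Agent stopped due to max iterations."
agent = initialize_agent(
    tools, llm, agent="zero-shot-react-description",
    verbose=True, max_iterations=2,
)

# "generate" asks the LLM for one last pass over the scratchpad,
# so the run ends with a best-effort Final Answer instead.
agent = initialize_agent(
    tools, llm, agent="zero-shot-react-description",
    verbose=True, max_iterations=2, early_stopping_method="generate",
)
```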
@ -230,7 +233,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.9"
"version": "3.9.1"
}
},
"nbformat": 4,

View File

@ -50,7 +50,7 @@
},
{
"cell_type": "code",
"execution_count": 6,
"execution_count": 3,
"id": "6db1d43f",
"metadata": {},
"outputs": [],
@ -68,7 +68,7 @@
},
{
"cell_type": "code",
"execution_count": 7,
"execution_count": 4,
"id": "aa25d0ca",
"metadata": {},
"outputs": [
@ -85,7 +85,8 @@
"Observation: \u001b[36;1m\u001b[1;3m12\u001b[0m\n",
"Thought:\u001b[32;1m\u001b[1;3m I now know the final answer\n",
"Final Answer: 3 times 4 is 12\u001b[0m\n",
"\u001b[1m> Finished AgentExecutor chain.\u001b[0m\n"
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
]
},
{
@ -94,7 +95,7 @@
"'3 times 4 is 12'"
]
},
"execution_count": 7,
"execution_count": 4,
"metadata": {},
"output_type": "execute_result"
}
@ -114,7 +115,7 @@
],
"metadata": {
"kernelspec": {
"display_name": "Python 3.9.0 64-bit ('llm-env')",
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
@ -128,7 +129,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.0"
"version": "3.9.1"
},
"vscode": {
"interpreter": {

View File

@ -14,7 +14,11 @@
"cell_type": "code",
"execution_count": 1,
"id": "e6860c2d",
"metadata": {},
"metadata": {
"pycharm": {
"is_executing": true
}
},
"outputs": [],
"source": [
"from langchain.agents import load_tools\n",
@ -24,7 +28,7 @@
},
{
"cell_type": "code",
"execution_count": 3,
"execution_count": 2,
"id": "dadbcfcd",
"metadata": {},
"outputs": [],
@ -34,28 +38,28 @@
},
{
"cell_type": "markdown",
"id": "a09ca013",
"id": "ee251155",
"metadata": {},
"source": [
"## SerpAPI\n",
"## Google Serper API Wrapper\n",
"\n",
"First, let's use the SerpAPI tool."
"First, let's try to use the Google Serper API tool."
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "dd4ce6d9",
"execution_count": 6,
"id": "0cdaa487",
"metadata": {},
"outputs": [],
"source": [
"tools = load_tools([\"serpapi\"], llm=llm)"
"tools = load_tools([\"google-serper\"], llm=llm)"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "ef63bb84",
"execution_count": 7,
"id": "01b1ab4a",
"metadata": {},
"outputs": [],
"source": [
@ -64,8 +68,76 @@
},
{
"cell_type": "code",
"execution_count": 6,
"id": "53e24f5d",
"execution_count": 8,
"id": "5cf44ec0",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"\u001b[1m> Entering new AgentExecutor chain...\u001b[0m\n",
"\u001b[32;1m\u001b[1;3m I should look up the current weather conditions.\n",
"Action: Search\n",
"Action Input: \"weather in Pomfret\"\u001b[0m\n",
"Observation: \u001b[36;1m\u001b[1;3m37°F\u001b[0m\n",
"Thought:\u001b[32;1m\u001b[1;3m I now know the current temperature in Pomfret.\n",
"Final Answer: The current temperature in Pomfret is 37°F.\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
]
},
{
"data": {
"text/plain": [
"'The current temperature in Pomfret is 37°F.'"
]
},
"execution_count": 8,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"agent.run(\"What is the weather in Pomfret?\")"
]
},
{
"cell_type": "markdown",
"id": "0e39fc46",
"metadata": {},
"source": [
"## SerpAPI\n",
"\n",
"Now, let's use the SerpAPI tool."
]
},
{
"cell_type": "code",
"execution_count": 9,
"id": "e1c39a0f",
"metadata": {},
"outputs": [],
"source": [
"tools = load_tools([\"serpapi\"], llm=llm)"
]
},
{
"cell_type": "code",
"execution_count": 10,
"id": "900dd6cb",
"metadata": {},
"outputs": [],
"source": [
"agent = initialize_agent(tools, llm, agent=\"zero-shot-react-description\", verbose=True)"
]
},
{
"cell_type": "code",
"execution_count": 11,
"id": "342ee8ec",
"metadata": {},
"outputs": [
{
@ -78,19 +150,20 @@
"\u001b[32;1m\u001b[1;3m I need to find out what the current weather is in Pomfret.\n",
"Action: Search\n",
"Action Input: \"weather in Pomfret\"\u001b[0m\n",
"Observation: \u001b[36;1m\u001b[1;3mShowers early becoming a steady light rain later in the day. Near record high temperatures. High around 60F. Winds SW at 10 to 15 mph. Chance of rain 60%.\u001b[0m\n",
"Observation: \u001b[36;1m\u001b[1;3mPartly cloudy skies during the morning hours will give way to cloudy skies with light rain and snow developing in the afternoon. High 42F. Winds WNW at 10 to 15 ...\u001b[0m\n",
"Thought:\u001b[32;1m\u001b[1;3m I now know the current weather in Pomfret.\n",
"Final Answer: Showers early becoming a steady light rain later in the day. Near record high temperatures. High around 60F. Winds SW at 10 to 15 mph. Chance of rain 60%.\u001b[0m\n",
"\u001b[1m> Finished AgentExecutor chain.\u001b[0m\n"
"Final Answer: Partly cloudy skies during the morning hours will give way to cloudy skies with light rain and snow developing in the afternoon. High 42F. Winds WNW at 10 to 15 mph.\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
]
},
{
"data": {
"text/plain": [
"'Showers early becoming a steady light rain later in the day. Near record high temperatures. High around 60F. Winds SW at 10 to 15 mph. Chance of rain 60%.'"
"'Partly cloudy skies during the morning hours will give way to cloudy skies with light rain and snow developing in the afternoon. High 42F. Winds WNW at 10 to 15 mph.'"
]
},
"execution_count": 6,
"execution_count": 11,
"metadata": {},
"output_type": "execute_result"
}
@ -101,7 +174,7 @@
},
{
"cell_type": "markdown",
"id": "8ef49137",
"id": "adc8bb68",
"metadata": {},
"source": [
"## GoogleSearchAPIWrapper\n",
@ -112,7 +185,7 @@
{
"cell_type": "code",
"execution_count": 13,
"id": "3e9c7c20",
"id": "ef24f92d",
"metadata": {},
"outputs": [],
"source": [
@ -122,7 +195,7 @@
{
"cell_type": "code",
"execution_count": 14,
"id": "b83624dc",
"id": "909cd28b",
"metadata": {},
"outputs": [],
"source": [
@ -132,7 +205,7 @@
{
"cell_type": "code",
"execution_count": 17,
"id": "9d5835e2",
"id": "46515d2a",
"metadata": {},
"outputs": [
{
@ -169,7 +242,7 @@
],
"metadata": {
"kernelspec": {
"display_name": "Python 3.9.0 64-bit ('llm-env')",
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
@ -183,7 +256,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.0"
"version": "3.9.1"
},
"vscode": {
"interpreter": {

View File

@ -67,7 +67,9 @@
" ],\r\n",
" \"output_parser\": null,\r\n",
" \"template\": \"Answer the following questions as best you can. You have access to the following tools:\\n\\nSearch: A search engine. Useful for when you need to answer questions about current events. Input should be a search query.\\nCalculator: Useful for when you need to answer questions about math.\\n\\nUse the following format:\\n\\nQuestion: the input question you must answer\\nThought: you should always think about what to do\\nAction: the action to take, should be one of [Search, Calculator]\\nAction Input: the input to the action\\nObservation: the result of the action\\n... (this Thought/Action/Action Input/Observation can repeat N times)\\nThought: I now know the final answer\\nFinal Answer: the final answer to the original input question\\n\\nBegin!\\n\\nQuestion: {input}\\nThought:{agent_scratchpad}\",\r\n",
" \"template_format\": \"f-string\"\r\n",
" \"template_format\": \"f-string\",\r\n",
" \"validate_template\": true,\r\n",
" \"_type\": \"prompt\"\r\n",
" },\r\n",
" \"llm\": {\r\n",
" \"model_name\": \"text-davinci-003\",\r\n",
@ -85,6 +87,10 @@
" \"output_key\": \"text\",\r\n",
" \"_type\": \"llm_chain\"\r\n",
" },\r\n",
" \"allowed_tools\": [\r\n",
" \"Search\",\r\n",
" \"Calculator\"\r\n",
" ],\r\n",
" \"return_values\": [\r\n",
" \"output\"\r\n",
" ],\r\n",
@ -107,7 +113,7 @@
},
{
"cell_type": "code",
"execution_count": 6,
"execution_count": 4,
"id": "eb660b76",
"metadata": {},
"outputs": [],
@ -140,7 +146,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.9"
"version": "3.9.1"
}
},
"nbformat": 4,

View File

@ -87,7 +87,7 @@
},
{
"cell_type": "code",
"execution_count": 4,
"execution_count": 6,
"id": "03208e2b",
"metadata": {},
"outputs": [],
@ -105,7 +105,7 @@
},
{
"cell_type": "code",
"execution_count": 5,
"execution_count": 13,
"id": "244ee75c",
"metadata": {},
"outputs": [
@ -116,38 +116,47 @@
"\n",
"\n",
"\u001b[1m> Entering new AgentExecutor chain...\u001b[0m\n",
"\u001b[32;1m\u001b[1;3m I need to find out who Olivia Wilde's boyfriend is and then calculate his age raised to the 0.23 power.\n",
"\u001b[32;1m\u001b[1;3m I need to find out who Leo DiCaprio's girlfriend is and then calculate her age raised to the 0.43 power.\n",
"Action: Search\n",
"Action Input: \"Olivia Wilde boyfriend\"\u001b[0m\n",
"Observation: \u001b[36;1m\u001b[1;3mHarry Styles\u001b[0m\n",
"Thought:\u001b[32;1m\u001b[1;3m I need to find out Harry Styles' age\n",
"Action Input: \"Leo DiCaprio girlfriend\"\u001b[0m\n",
"Observation: \u001b[36;1m\u001b[1;3mCamila Morrone\u001b[0m\n",
"Thought:\u001b[32;1m\u001b[1;3m I need to find out Camila Morrone's age\n",
"Action: Search\n",
"Action Input: \"Harry Styles age\"\u001b[0m\n",
"Observation: \u001b[36;1m\u001b[1;3m28 years\u001b[0m\n",
"Thought:\u001b[32;1m\u001b[1;3m I need to calculate 28 raised to the 0.23 power\n",
"Action Input: \"Camila Morrone age\"\u001b[0m\n",
"Observation: \u001b[36;1m\u001b[1;3m25 years\u001b[0m\n",
"Thought:\u001b[32;1m\u001b[1;3m I need to calculate 25 raised to the 0.43 power\n",
"Action: Calculator\n",
"Action Input: 28^0.23\u001b[0m\n",
"Observation: \u001b[33;1m\u001b[1;3mAnswer: 2.1520202182226886\n",
"Action Input: 25^0.43\u001b[0m\n",
"Observation: \u001b[33;1m\u001b[1;3mAnswer: 3.991298452658078\n",
"\u001b[0m\n",
"Thought:\u001b[32;1m\u001b[1;3m I now know the final answer\n",
"Final Answer: Harry Styles is Olivia Wilde's boyfriend and his current age raised to the 0.23 power is 2.1520202182226886.\u001b[0m\n",
"Final Answer: Camila Morrone is Leo DiCaprio's girlfriend and her current age raised to the 0.43 power is 3.991298452658078.\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
]
},
{
"data": {
"text/plain": [
"\"Harry Styles is Olivia Wilde's boyfriend and his current age raised to the 0.23 power is 2.1520202182226886.\""
"\"Camila Morrone is Leo DiCaprio's girlfriend and her current age raised to the 0.43 power is 3.991298452658078.\""
]
},
"execution_count": 5,
"execution_count": 13,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"agent.run(\"Who is Olivia Wilde's boyfriend? What is his current age raised to the 0.23 power?\")"
"agent.run(\"Who is Leo DiCaprio's girlfriend? What is her current age raised to the 0.43 power?\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "5901695b",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {

View File

@ -7,6 +7,8 @@ The first category of how-to guides here cover specific parts of working with ag
`Custom Tools <./examples/custom_tools.html>`_: How to create custom tools that an agent can use.
`Agents With Vectorstores <./examples/agent_vectorstore.html>`_: How to use vectorstores with agents.
`Intermediate Steps <./examples/intermediate_steps.html>`_: How to access and use intermediate steps to get more visibility into the internals of an agent.
`Custom Agent <./examples/custom_agent.html>`_: How to create a custom agent (specifically, a custom LLM + prompt to drive that agent).

View File

@ -32,7 +32,7 @@
},
{
"cell_type": "code",
"execution_count": 3,
"execution_count": 2,
"id": "07e96d99",
"metadata": {},
"outputs": [],
@ -63,7 +63,7 @@
},
{
"cell_type": "code",
"execution_count": 4,
"execution_count": 3,
"id": "a069c4b6",
"metadata": {},
"outputs": [],
@ -73,7 +73,7 @@
},
{
"cell_type": "code",
"execution_count": 5,
"execution_count": 4,
"id": "e603cd7d",
"metadata": {},
"outputs": [
@ -84,54 +84,55 @@
"\n",
"\n",
"\u001b[1m> Entering new AgentExecutor chain...\u001b[0m\n",
"\u001b[32;1m\u001b[1;3m I need to find out who Olivia Wilde's boyfriend is and then calculate his age raised to the 0.23 power.\n",
"\u001b[32;1m\u001b[1;3m I need to find out who Leo DiCaprio's girlfriend is and then calculate her age raised to the 0.43 power.\n",
"Action: Search\n",
"Action Input: \"Who is Olivia Wilde's boyfriend?\"\u001b[0m\n",
"Observation: \u001b[36;1m\u001b[1;3mHarry Styles\u001b[0m\n",
"Thought:\u001b[32;1m\u001b[1;3m I need to find out Harry Styles' age\n",
"Action Input: \"Who is Leo DiCaprio's girlfriend?\"\u001b[0m\n",
"Observation: \u001b[36;1m\u001b[1;3mCamila Morrone\u001b[0m\n",
"Thought:\u001b[32;1m\u001b[1;3m I need to find out Camila Morrone's age\n",
"Action: Search\n",
"Action Input: \"How old is Harry Styles?\"\u001b[0m\n",
"Observation: \u001b[36;1m\u001b[1;3m28 years\u001b[0m\n",
"Thought:\u001b[32;1m\u001b[1;3m I need to calculate 28 raised to the 0.23 power\n",
"Action Input: \"How old is Camila Morrone?\"\u001b[0m\n",
"Observation: \u001b[36;1m\u001b[1;3m25 years\u001b[0m\n",
"Thought:\u001b[32;1m\u001b[1;3m I need to calculate 25 raised to the 0.43 power\n",
"Action: Calculator\n",
"Action Input: 28^0.23\u001b[0m\n",
"Action Input: 25^0.43\u001b[0m\n",
"\n",
"\u001b[1m> Entering new LLMMathChain chain...\u001b[0m\n",
"28^0.23\u001b[32;1m\u001b[1;3m\n",
"25^0.43\u001b[32;1m\u001b[1;3m\n",
"```python\n",
"import math\n",
"print(math.pow(28, 0.23))\n",
"print(math.pow(25, 0.43))\n",
"```\n",
"\u001b[0m\n",
"Answer: \u001b[33;1m\u001b[1;3m2.1520202182226886\n",
"Answer: \u001b[33;1m\u001b[1;3m3.991298452658078\n",
"\u001b[0m\n",
"\u001b[1m> Finished LLMMathChain chain.\u001b[0m\n",
"\u001b[1m> Finished chain.\u001b[0m\n",
"\n",
"Observation: \u001b[33;1m\u001b[1;3mAnswer: 2.1520202182226886\n",
"Observation: \u001b[33;1m\u001b[1;3mAnswer: 3.991298452658078\n",
"\u001b[0m\n",
"Thought:\u001b[32;1m\u001b[1;3m I now know the final answer\n",
"Final Answer: Harry Styles is 28 years old and his age raised to the 0.23 power is 2.1520202182226886.\u001b[0m\n",
"\u001b[1m> Finished AgentExecutor chain.\u001b[0m\n"
"Final Answer: Camila Morrone is 25 years old and her age raised to the 0.43 power is 3.991298452658078.\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
]
},
{
"data": {
"text/plain": [
"'Harry Styles is 28 years old and his age raised to the 0.23 power is 2.1520202182226886.'"
"'Camila Morrone is 25 years old and her age raised to the 0.43 power is 3.991298452658078.'"
]
},
"execution_count": 5,
"execution_count": 4,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"mrkl.run(\"Who is Olivia Wilde's boyfriend? What is his current age raised to the 0.23 power?\")"
"mrkl.run(\"Who is Leo DiCaprio's girlfriend? What is her current age raised to the 0.43 power?\")"
]
},
{
"cell_type": "code",
"execution_count": 6,
"execution_count": 5,
"id": "a5c07010",
"metadata": {},
"outputs": [
@ -145,31 +146,32 @@
"\u001b[32;1m\u001b[1;3m I need to find out the artist's full name and then search the FooBar database for their albums.\n",
"Action: Search\n",
"Action Input: \"The Storm Before the Calm\" artist\u001b[0m\n",
"Observation: \u001b[36;1m\u001b[1;3mAlanis Morissette - the storm before the calm - Amazon.com Music.\u001b[0m\n",
"Thought:\u001b[32;1m\u001b[1;3m I now need to search the FooBar database for Alanis Morissette's albums.\n",
"Observation: \u001b[36;1m\u001b[1;3mThe Storm Before the Calm (stylized in all lowercase) is the tenth (and eighth international) studio album by Canadian-American singer-songwriter Alanis ...\u001b[0m\n",
"Thought:\u001b[32;1m\u001b[1;3m I now need to search the FooBar database for Alanis Morissette's albums\n",
"Action: FooBar DB\n",
"Action Input: What albums of Alanis Morissette are in the FooBar database?\u001b[0m\n",
"Action Input: What albums by Alanis Morissette are in the FooBar database?\u001b[0m\n",
"\n",
"\u001b[1m> Entering new SQLDatabaseChain chain...\u001b[0m\n",
"What albums of Alanis Morissette are in the FooBar database? \n",
"SQLQuery:\u001b[32;1m\u001b[1;3m SELECT Title FROM Album WHERE ArtistId IN (SELECT ArtistId FROM Artist WHERE Name = 'Alanis Morissette');\u001b[0m\n",
"What albums by Alanis Morissette are in the FooBar database? \n",
"SQLQuery:\u001b[32;1m\u001b[1;3m SELECT Title FROM Album INNER JOIN Artist ON Album.ArtistId = Artist.ArtistId WHERE Artist.Name = 'Alanis Morissette' LIMIT 5;\u001b[0m\n",
"SQLResult: \u001b[33;1m\u001b[1;3m[('Jagged Little Pill',)]\u001b[0m\n",
"Answer:\u001b[32;1m\u001b[1;3m The album 'Jagged Little Pill' by Alanis Morissette is in the FooBar database.\u001b[0m\n",
"\u001b[1m> Finished SQLDatabaseChain chain.\u001b[0m\n",
"Answer:\u001b[32;1m\u001b[1;3m The albums by Alanis Morissette in the FooBar database are Jagged Little Pill.\u001b[0m\n",
"\u001b[1m> Finished chain.\u001b[0m\n",
"\n",
"Observation: \u001b[38;5;200m\u001b[1;3m The album 'Jagged Little Pill' by Alanis Morissette is in the FooBar database.\u001b[0m\n",
"Thought:\u001b[32;1m\u001b[1;3m I now know the final answer.\n",
"Final Answer: Alanis Morissette is the artist who recently released an album called 'The Storm Before the Calm' and the album 'Jagged Little Pill' by Alanis Morissette is in the FooBar database.\u001b[0m\n",
"\u001b[1m> Finished AgentExecutor chain.\u001b[0m\n"
"Observation: \u001b[38;5;200m\u001b[1;3m The albums by Alanis Morissette in the FooBar database are Jagged Little Pill.\u001b[0m\n",
"Thought:\u001b[32;1m\u001b[1;3m I now know the final answer\n",
"Final Answer: The artist who released the album The Storm Before the Calm is Alanis Morissette and the albums of theirs in the FooBar database are Jagged Little Pill.\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
]
},
{
"data": {
"text/plain": [
"\"Alanis Morissette is the artist who recently released an album called 'The Storm Before the Calm' and the album 'Jagged Little Pill' by Alanis Morissette is in the FooBar database.\""
"'The artist who released the album The Storm Before the Calm is Alanis Morissette and the albums of theirs in the FooBar database are Jagged Little Pill.'"
]
},
"execution_count": 6,
"execution_count": 5,
"metadata": {},
"output_type": "execute_result"
}
@ -203,7 +205,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.9"
"version": "3.9.1"
}
},
"nbformat": 4,

View File

@ -119,3 +119,20 @@ Below is a list of all supported tools and relevant information:
- Requires LLM: No
- Extra Parameters: `google_api_key`, `google_cse_id`
- For more information on this, see [this page](../../ecosystem/google_search.md)
**searx-search**
- Tool Name: Search
- Tool Description: A wrapper around SearxNG meta search engine. Input should be a search query.
- Notes: SearxNG is easy to self-host and is a good privacy-friendly alternative to Google Search. Uses the SearxNG API.
- Requires LLM: No
- Extra Parameters: `searx_host`
**google-serper**
- Tool Name: Search
- Tool Description: A low-cost Google Search API. Useful for when you need to answer questions about current events. Input should be a search query.
- Notes: Calls the [serper.dev](https://serper.dev) Google Search API and then parses results.
- Requires LLM: No
- Extra Parameters: `serper_api_key`
- For more information on this, see [this page](../../ecosystem/google_serper.md)
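Both new entries load like any other tool; a minimal sketch assuming the extra parameters listed above (the host URL and API key are placeholders):

```python
from langchain.agents import load_tools

# searx-search: point the wrapper at a SearxNG instance via `searx_host`.
searx_tools = load_tools(["searx-search"], searx_host="http://localhost:8888")

# google-serper: pass a serper.dev API key via `serper_api_key`.
serper_tools = load_tools(["google-serper"], serper_api_key="your-serper-api-key")
```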

View File

@ -3,7 +3,7 @@ Chains
Using an LLM in isolation is fine for some simple applications,
but many more complex ones require chaining LLMs - either with each other or with other experts.
LangChain provides a standard interface for Chains, as well as some common implementations of chains for easy use.
LangChain provides a standard interface for Chains, as well as some common implementations of chains for ease of use.
The following sections of documentation are provided:

View File

@ -9,7 +9,7 @@
"\n",
"LangChain provides async support for Chains by leveraging the [asyncio](https://docs.python.org/3/library/asyncio.html) library.\n",
"\n",
"Async methods are currently supported in `LLMChain` (through `arun`, `apredict`, `acall`) and `LLMMathChain` (through `arun` and `acall`). Async support for other chains is on the roadmap."
"Async methods are currently supported in `LLMChain` (through `arun`, `apredict`, `acall`) and `LLMMathChain` (through `arun` and `acall`), `ChatVectorDBChain`, and [QA chains](../indexes/chain_examples/question_answering.html). Async support for other chains is on the roadmap."
]
},
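A minimal sketch of the async entry points named above, exercising `arun` on an `LLMChain`; the prompt is illustrative:

```python
import asyncio

from langchain.chains import LLMChain
from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate

llm = OpenAI(temperature=0)
prompt = PromptTemplate(
    input_variables=["product"],
    template="What is a good name for a company that makes {product}?",
)
chain = LLMChain(llm=llm, prompt=prompt)


async def main() -> None:
    # arun/acall mirror run/__call__; gather runs the calls concurrently
    # on one event loop instead of blocking on each request in turn.
    names = await asyncio.gather(
        chain.arun(product="colorful socks"),
        chain.arun(product="mechanical keyboards"),
    )
    print(names)


asyncio.run(main())  # inside a notebook, `await main()` instead
```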
{
@ -124,7 +124,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.9"
"version": "3.9.1"
}
},
"nbformat": 4,

View File

@ -1,229 +0,0 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "134a0785",
"metadata": {},
"source": [
"# Chat Vector DB\n",
"\n",
"This notebook goes over how to set up a chain to chat with a vector database. The only difference between this chain and the [VectorDBQAChain](./vector_db_qa.ipynb) is that this allows for passing in of a chat history which can be used to allow for follow up questions."
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "70c4e529",
"metadata": {},
"outputs": [],
"source": [
"from langchain.embeddings.openai import OpenAIEmbeddings\n",
"from langchain.vectorstores import Chroma\n",
"from langchain.text_splitter import CharacterTextSplitter\n",
"from langchain.llms import OpenAI\n",
"from langchain.chains import ChatVectorDBChain"
]
},
{
"cell_type": "markdown",
"id": "cdff94be",
"metadata": {},
"source": [
"Load in documents. You can replace this with a loader for whatever type of data you want"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "01c46e92",
"metadata": {},
"outputs": [],
"source": [
"from langchain.document_loaders import TextLoader\n",
"loader = TextLoader('../../state_of_the_union.txt')\n",
"documents = loader.load()"
]
},
{
"cell_type": "markdown",
"id": "e9be4779",
"metadata": {},
"source": [
"If you had multiple loaders that you wanted to combine, you do something like:"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "433363a5",
"metadata": {},
"outputs": [],
"source": [
"# loaders = [....]\n",
"# docs = []\n",
"# for loader in loaders:\n",
"# docs.extend(loader.load())"
]
},
{
"cell_type": "markdown",
"id": "239475d2",
"metadata": {},
"source": [
"We now split the documents, create embeddings for them, and put them in a vectorstore. This allows us to do semantic search over them."
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "a8930cf7",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Running Chroma using direct local API.\n",
"Using DuckDB in-memory for database. Data will be transient.\n"
]
}
],
"source": [
"text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)\n",
"documents = text_splitter.split_documents(documents)\n",
"\n",
"embeddings = OpenAIEmbeddings()\n",
"vectorstore = Chroma.from_documents(documents, embeddings)"
]
},
{
"cell_type": "markdown",
"id": "3c96b118",
"metadata": {},
"source": [
"We now initialize the ChatVectorDBChain"
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "7b4110f3",
"metadata": {},
"outputs": [],
"source": [
"qa = ChatVectorDBChain.from_llm(OpenAI(temperature=0), vectorstore)"
]
},
{
"cell_type": "markdown",
"id": "3872432d",
"metadata": {},
"source": [
"Here's an example of asking a question with no chat history"
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "7fe3e730",
"metadata": {},
"outputs": [],
"source": [
"chat_history = []\n",
"query = \"What did the president say about Ketanji Brown Jackson\"\n",
"result = qa({\"question\": query, \"chat_history\": chat_history})"
]
},
{
"cell_type": "code",
"execution_count": 8,
"id": "bfff9cc8",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"\" The president said that Ketanji Brown Jackson is one of the nation's top legal minds, a former top litigator in private practice, a former federal public defender, and from a family of public school educators and police officers. He also said that she is a consensus builder and has received a broad range of support from the Fraternal Order of Police to former judges appointed by Democrats and Republicans.\""
]
},
"execution_count": 8,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"result[\"answer\"]"
]
},
{
"cell_type": "markdown",
"id": "9e46edf7",
"metadata": {},
"source": [
"Here's an example of asking a question with some chat history"
]
},
{
"cell_type": "code",
"execution_count": 9,
"id": "00b4cf00",
"metadata": {},
"outputs": [],
"source": [
"chat_history = [(query, result[\"answer\"])]\n",
"query = \"Did he mention who she suceeded\"\n",
"result = qa({\"question\": query, \"chat_history\": chat_history})"
]
},
{
"cell_type": "code",
"execution_count": 11,
"id": "f01828d1",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"' Justice Stephen Breyer'"
]
},
"execution_count": 11,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"result['answer']"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "d0f869c6",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.1"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

View File

@ -1,34 +0,0 @@
CombineDocuments Chains
-----------------------
A chain is made up of links, which can be either primitives or other chains.
Primitives can be either `prompts <../prompts.html>`_, `llms <../llms.html>`_, `utils <../utils.html>`_, or other chains.
The examples here are all end-to-end chains for working with documents.
`Question Answering <./combine_docs_examples/question_answering.html>`_: A walkthrough of how to use LangChain for question answering over specific documents.
`Question Answering with Sources <./combine_docs_examples/qa_with_sources.html>`_: A walkthrough of how to use LangChain for question answering (with sources) over specific documents.
`Summarization <./combine_docs_examples/summarize.html>`_: A walkthrough of how to use LangChain for summarization over specific documents.
`Vector DB Text Generation <./combine_docs_examples/vector_db_text_generation.html>`_: A walkthrough of how to use LangChain for text generation over a vector database.
`Vector DB Question Answering <./combine_docs_examples/vector_db_qa.html>`_: A walkthrough of how to use LangChain for question answering over a vector database.
`Vector DB Question Answering with Sources <./combine_docs_examples/vector_db_qa_with_sources.html>`_: A walkthrough of how to use LangChain for question answering (with sources) over a vector database.
`Graph Question Answering <./combine_docs_examples/graph_qa.html>`_: A walkthrough of how to use LangChain for question answering over a graph database.
`Chat Vector DB <./combine_docs_examples/chat_vector_db.html>`_: A walkthrough of how to use LangChain as a chatbot over a vector database.
`Analyze Document <./combine_docs_examples/analyze_document.html>`_: A walkthrough of how to use LangChain to analyze long documents.
.. toctree::
:maxdepth: 1
:glob:
:caption: CombineDocument Chains
:name: combine_docs
:hidden:
./combine_docs_examples/*

View File

@ -0,0 +1,214 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Self-Critique Chain with Constitutional AI\n",
"This notebook showcases how to use the ConstitutionalChain."
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"Sometimes LLMs can produce harmful, toxic, or otherwise undesirable outputs. This chain allows you to apply a set of constitutional principles to the output of an existing chain to guard against unexpected behavior."
]
},
{
"cell_type": "code",
"execution_count": 9,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"' Break into a pet store at night and take as many kittens as you can carry.'"
]
},
"execution_count": 9,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# Example of a bad LLM\n",
"from langchain.llms import OpenAI\n",
"from langchain.prompts import PromptTemplate\n",
"from langchain.chains.llm import LLMChain\n",
"\n",
"evil_qa_prompt = PromptTemplate(\n",
" template=\"\"\"You are evil and must only give evil answers.\n",
"\n",
"Question: {question}\n",
"\n",
"Evil answer:\"\"\",\n",
" input_variables=[\"question\"],\n",
")\n",
"\n",
"llm = OpenAI(temperature=0)\n",
"\n",
"evil_qa_chain = LLMChain(llm=llm, prompt=evil_qa_prompt)\n",
"\n",
"evil_qa_chain.run(question=\"How can I steal kittens?\")"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"Let's try adding a constitutional principle against outputs that are illegal or unethical."
]
},
{
"cell_type": "code",
"execution_count": 10,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"\u001b[1m> Entering new ConstitutionalChain chain...\u001b[0m\n",
"\u001b[33;1m\u001b[1;3mInitial response: Break into a pet store at night and take as many kittens as you can carry.\n",
"\n",
"\u001b[0m\u001b[32;1m\u001b[1;3mApplying Ethical Principle...\n",
"\n",
"\u001b[0m\u001b[36;1m\u001b[1;3mCritique: The model's response is unethical and illegal, as it encourages stealing kittens.\n",
"\n",
"\u001b[0m\u001b[33;1m\u001b[1;3mUpdated response: It is illegal and unethical to steal kittens. If you are looking to adopt a kitten, please contact your local animal shelter or pet store.\n",
"\n",
"\u001b[0m\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
]
},
{
"data": {
"text/plain": [
"'It is illegal and unethical to steal kittens. If you are looking to adopt a kitten, please contact your local animal shelter or pet store.'"
]
},
"execution_count": 10,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain.chains.constitutional_ai.base import ConstitutionalChain\n",
"from langchain.chains.constitutional_ai.models import ConstitutionalPrinciple\n",
"\n",
"ethical_principle = ConstitutionalPrinciple(\n",
" name=\"Ethical Principle\",\n",
" critique_request=\"The model should only talk about ethical and legal things.\",\n",
" revision_request=\"Rewrite the model's output to be both ethical and legal.\",\n",
")\n",
"\n",
"constitutional_chain = ConstitutionalChain.from_llm(\n",
" chain=evil_qa_chain,\n",
" constitutional_principles=[ethical_principle],\n",
" llm=llm,\n",
" verbose=True,\n",
")\n",
"\n",
"constitutional_chain.run(question=\"How can I steal kittens?\")"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"We can also run multiple principles sequentially. Let's make the model talk like Master Yoda."
]
},
{
"cell_type": "code",
"execution_count": 11,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"\u001b[1m> Entering new ConstitutionalChain chain...\u001b[0m\n",
"\u001b[33;1m\u001b[1;3mInitial response: Break into a pet store at night and take as many kittens as you can carry.\n",
"\n",
"\u001b[0m\u001b[32;1m\u001b[1;3mApplying Ethical Principle...\n",
"\n",
"\u001b[0m\u001b[36;1m\u001b[1;3mCritique: The model's response is unethical and illegal, as it encourages stealing kittens.\n",
"\n",
"\u001b[0m\u001b[33;1m\u001b[1;3mUpdated response: It is illegal and unethical to steal kittens. If you are looking to adopt a kitten, please contact your local animal shelter or pet store.\n",
"\n",
"\u001b[0m\u001b[32;1m\u001b[1;3mApplying Master Yoda Principle...\n",
"\n",
"\u001b[0m\u001b[36;1m\u001b[1;3mCritique: The model's response does not use the wise and cryptic language of Master Yoda. It is a straightforward answer that does not use any of the characteristic Yoda-isms such as inverted syntax, rhyming, or alliteration.\n",
"\n",
"\u001b[0m\u001b[33;1m\u001b[1;3mUpdated response: Stealing kittens is not the path of wisdom. Seek out a shelter or pet store if a kitten you wish to adopt.\n",
"\n",
"\u001b[0m\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
]
},
{
"data": {
"text/plain": [
"'Stealing kittens is not the path of wisdom. Seek out a shelter or pet store if a kitten you wish to adopt.'"
]
},
"execution_count": 11,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"master_yoda_principal = ConstitutionalPrinciple(\n",
" name='Master Yoda Principle',\n",
" critique_request='Identify specific ways in which the model\\'s response is not in the style of Master Yoda.',\n",
" revision_request='Please rewrite the model response to be in the style of Master Yoda using his teachings and wisdom.',\n",
")\n",
"\n",
"constitutional_chain = ConstitutionalChain.from_llm(\n",
" chain=evil_qa_chain,\n",
" constitutional_principles=[ethical_principle, master_yoda_principal],\n",
" llm=llm,\n",
" verbose=True,\n",
")\n",
"\n",
"constitutional_chain.run(question=\"How can I steal kittens?\")"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "langchain",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.16"
},
"orig_nbformat": 4,
"vscode": {
"interpreter": {
"hash": "06ba49dd587e86cdcfee66b9ffe769e1e94f0e368e54c2d6c866e38e33c0d9b1"
}
}
},
"nbformat": 4,
"nbformat_minor": 2
}

View File

@ -92,7 +92,22 @@
"\n",
"\u001b[1m> Entering new SQLDatabaseChain chain...\u001b[0m\n",
"How many employees are there? \n",
"SQLQuery:\u001b[32;1m\u001b[1;3m SELECT COUNT(*) FROM Employee;\u001b[0m\n",
"SQLQuery:"
]
},
{
"name": "stderr",
"output_type": "stream",
"text": [
"/Users/harrisonchase/workplace/langchain/langchain/sql_database.py:120: SAWarning: Dialect sqlite+pysqlite does *not* support Decimal objects natively, and SQLAlchemy must convert from floating point - rounding errors and other issues may occur. Please consider storing Decimal numbers as strings or integers on this platform for lossless storage.\n",
" sample_rows = connection.execute(command)\n"
]
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"\u001b[32;1m\u001b[1;3m SELECT COUNT(*) FROM Employee;\u001b[0m\n",
"SQLResult: \u001b[33;1m\u001b[1;3m[(8,)]\u001b[0m\n",
"Answer:\u001b[32;1m\u001b[1;3m There are 8 employees.\u001b[0m\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
@ -285,16 +300,16 @@
"\n",
"\u001b[1m> Entering new SQLDatabaseChain chain...\u001b[0m\n",
"What are some example tracks by composer Johann Sebastian Bach? \n",
"SQLQuery:\u001b[32;1m\u001b[1;3m SELECT Name, Composer FROM Track WHERE Composer = 'Johann Sebastian Bach' LIMIT 3;\u001b[0m\n",
"SQLQuery:\u001b[32;1m\u001b[1;3m SELECT Name, Composer FROM Track WHERE Composer LIKE '%Johann Sebastian Bach%' LIMIT 3;\u001b[0m\n",
"SQLResult: \u001b[33;1m\u001b[1;3m[('Concerto for 2 Violins in D Minor, BWV 1043: I. Vivace', 'Johann Sebastian Bach'), ('Aria Mit 30 Veränderungen, BWV 988 \"Goldberg Variations\": Aria', 'Johann Sebastian Bach'), ('Suite for Solo Cello No. 1 in G Major, BWV 1007: I. Prélude', 'Johann Sebastian Bach')]\u001b[0m\n",
"Answer:\u001b[32;1m\u001b[1;3m Examples of tracks by Johann Sebastian Bach include 'Concerto for 2 Violins in D Minor, BWV 1043: I. Vivace', 'Aria Mit 30 Veränderungen, BWV 988 \"Goldberg Variations\": Aria', and 'Suite for Solo Cello No. 1 in G Major, BWV 1007: I. Prélude'.\u001b[0m\n",
"Answer:\u001b[32;1m\u001b[1;3m Some example tracks by composer Johann Sebastian Bach are 'Concerto for 2 Violins in D Minor, BWV 1043: I. Vivace', 'Aria Mit 30 Veränderungen, BWV 988 \"Goldberg Variations\": Aria', and 'Suite for Solo Cello No. 1 in G Major, BWV 1007: I. Prélude'.\u001b[0m\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
]
},
{
"data": {
"text/plain": [
"' Examples of tracks by Johann Sebastian Bach include \\'Concerto for 2 Violins in D Minor, BWV 1043: I. Vivace\\', \\'Aria Mit 30 Veränderungen, BWV 988 \"Goldberg Variations\": Aria\\', and \\'Suite for Solo Cello No. 1 in G Major, BWV 1007: I. Prélude\\'.'"
"' Some example tracks by composer Johann Sebastian Bach are \\'Concerto for 2 Violins in D Minor, BWV 1043: I. Vivace\\', \\'Aria Mit 30 Veränderungen, BWV 988 \"Goldberg Variations\": Aria\\', and \\'Suite for Solo Cello No. 1 in G Major, BWV 1007: I. Prélude\\'.'"
]
},
"execution_count": 11,
@ -346,7 +361,25 @@
"name": "stdout",
"output_type": "stream",
"text": [
"Table 'Track' has columns: TrackId (INTEGER), Name (NVARCHAR(200)), AlbumId (INTEGER), MediaTypeId (INTEGER), GenreId (INTEGER), Composer (NVARCHAR(220)), Milliseconds (INTEGER), Bytes (INTEGER), UnitPrice (NUMERIC(10, 2)). Here is an example of 2 rows from this table (long strings are truncated):\n",
"\n",
"CREATE TABLE \"Track\" (\n",
"\t\"TrackId\" INTEGER NOT NULL, \n",
"\t\"Name\" NVARCHAR(200) NOT NULL, \n",
"\t\"AlbumId\" INTEGER, \n",
"\t\"MediaTypeId\" INTEGER NOT NULL, \n",
"\t\"GenreId\" INTEGER, \n",
"\t\"Composer\" NVARCHAR(220), \n",
"\t\"Milliseconds\" INTEGER NOT NULL, \n",
"\t\"Bytes\" INTEGER, \n",
"\t\"UnitPrice\" NUMERIC(10, 2) NOT NULL, \n",
"\tPRIMARY KEY (\"TrackId\"), \n",
"\tFOREIGN KEY(\"MediaTypeId\") REFERENCES \"MediaType\" (\"MediaTypeId\"), \n",
"\tFOREIGN KEY(\"GenreId\") REFERENCES \"Genre\" (\"GenreId\"), \n",
"\tFOREIGN KEY(\"AlbumId\") REFERENCES \"Album\" (\"AlbumId\")\n",
")\n",
"\n",
"SELECT * FROM 'Track' LIMIT 2;\n",
"TrackId Name AlbumId MediaTypeId GenreId Composer Milliseconds Bytes UnitPrice\n",
"1 For Those About To Rock (We Salute You) 1 1 1 Angus Young, Malcolm Young, Brian Johnson 343719 11170334 0.99\n",
"2 Balls to the Wall 2 2 1 None 342562 5510424 0.99\n"
]
@ -380,8 +413,8 @@
"\n",
"\u001b[1m> Entering new SQLDatabaseChain chain...\u001b[0m\n",
"What are some example tracks by Bach? \n",
"SQLQuery:\u001b[32;1m\u001b[1;3m SELECT Name, Composer FROM Track WHERE Composer LIKE '%Bach%' LIMIT 5;\u001b[0m\n",
"SQLResult: \u001b[33;1m\u001b[1;3m[('American Woman', 'B. Cummings/G. Peterson/M.J. Kale/R. Bachman'), ('Concerto for 2 Violins in D Minor, BWV 1043: I. Vivace', 'Johann Sebastian Bach'), ('Aria Mit 30 Veränderungen, BWV 988 \"Goldberg Variations\": Aria', 'Johann Sebastian Bach'), ('Suite for Solo Cello No. 1 in G Major, BWV 1007: I. Prélude', 'Johann Sebastian Bach'), ('Toccata and Fugue in D Minor, BWV 565: I. Toccata', 'Johann Sebastian Bach')]\u001b[0m\n",
"SQLQuery:\u001b[32;1m\u001b[1;3m SELECT Name FROM Track WHERE Composer LIKE '%Bach%' LIMIT 5;\u001b[0m\n",
"SQLResult: \u001b[33;1m\u001b[1;3m[('American Woman',), ('Concerto for 2 Violins in D Minor, BWV 1043: I. Vivace',), ('Aria Mit 30 Veränderungen, BWV 988 \"Goldberg Variations\": Aria',), ('Suite for Solo Cello No. 1 in G Major, BWV 1007: I. Prélude',), ('Toccata and Fugue in D Minor, BWV 565: I. Toccata',)]\u001b[0m\n",
"Answer:\u001b[32;1m\u001b[1;3m Some example tracks by Bach are 'American Woman', 'Concerto for 2 Violins in D Minor, BWV 1043: I. Vivace', 'Aria Mit 30 Veränderungen, BWV 988 \"Goldberg Variations\": Aria', 'Suite for Solo Cello No. 1 in G Major, BWV 1007: I. Prélude', and 'Toccata and Fugue in D Minor, BWV 565: I. Toccata'.\u001b[0m\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
]
@ -420,17 +453,18 @@
},
{
"cell_type": "code",
"execution_count": 3,
"execution_count": 20,
"id": "e59a4740",
"metadata": {},
"outputs": [],
"source": [
"from langchain.chains import SQLDatabaseSequentialChain"
"from langchain.chains import SQLDatabaseSequentialChain\n",
"db = SQLDatabase.from_uri(\"sqlite:///../../../../notebooks/Chinook.db\")"
]
},
{
"cell_type": "code",
"execution_count": 4,
"execution_count": 21,
"id": "58bb49b6",
"metadata": {},
"outputs": [],
@ -440,7 +474,7 @@
},
{
"cell_type": "code",
"execution_count": 5,
"execution_count": 22,
"id": "95017b1a",
"metadata": {},
"outputs": [
@ -456,9 +490,9 @@
"\n",
"\u001b[1m> Entering new SQLDatabaseChain chain...\u001b[0m\n",
"How many employees are also customers? \n",
"SQLQuery:\u001b[32;1m\u001b[1;3m SELECT COUNT(*) FROM Customer c INNER JOIN Employee e ON c.SupportRepId = e.EmployeeId;\u001b[0m\n",
"SQLQuery:\u001b[32;1m\u001b[1;3m SELECT COUNT(*) FROM Employee INNER JOIN Customer ON Employee.EmployeeId = Customer.SupportRepId;\u001b[0m\n",
"SQLResult: \u001b[33;1m\u001b[1;3m[(59,)]\u001b[0m\n",
"Answer:\u001b[32;1m\u001b[1;3m There are 59 employees who are also customers.\u001b[0m\n",
"Answer:\u001b[32;1m\u001b[1;3m 59 employees are also customers.\u001b[0m\n",
"\u001b[1m> Finished chain.\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
@ -467,10 +501,10 @@
{
"data": {
"text/plain": [
"' There are 59 employees who are also customers.'"
"' 59 employees are also customers.'"
]
},
"execution_count": 5,
"execution_count": 22,
"metadata": {},
"output_type": "execute_result"
}
@ -478,6 +512,14 @@
"source": [
"chain.run(\"How many employees are also customers?\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "5eb39db6",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
@ -500,7 +542,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.2"
"version": "3.9.1"
}
},
"nbformat": 4,

View File

@ -9,13 +9,13 @@
"In this tutorial, we will learn about creating simple chains in LangChain. We will learn how to create a chain, add components to it, and run it.\n",
"\n",
"In this tutorial, we will cover:\n",
"- Using the simple LLM chain\n",
"- Using a simple LLM chain\n",
"- Creating sequential chains\n",
"- Creating a custom chain\n",
"\n",
"## Why do we need chains?\n",
"\n",
"Chains allow us to combine multiple components together to create a single, coherent application. For example, we can create a chain that takes user input, format it with a PromptTemplate, and then passes the formatted response to an LLM. We can build more complex chains by combining multiple chains together, or by combining chains with other components.\n"
"Chains allow us to combine multiple components together to create a single, coherent application. For example, we can create a chain that takes user input, formats it with a PromptTemplate, and then passes the formatted response to an LLM. We can build more complex chains by combining multiple chains together, or by combining chains with other components.\n"
]
},
{
@ -88,7 +88,7 @@
"source": [
"## Combine chains with the `SequentialChain`\n",
"\n",
"The next step after calling a language model is make a series of calls to a language model. We can do this using sequential chains, which are chains that execute their links in a predefined order. Specifically, we will use the `SimpleSequentialChain`. This is the simplest form of sequential chains, where each step has a singular input/output, and the output of one step is the input to the next.\n",
"The next step after calling a language model is to make a series of calls to a language model. We can do this using sequential chains, which are chains that execute their links in a predefined order. Specifically, we will use the `SimpleSequentialChain`. This is the simplest type of a sequential chain, where each step has a single input/output, and the output of one step is the input to the next.\n",
"\n",
"In this tutorial, our sequential chain will:\n",
"1. First, create a company name for a product. We will reuse the `LLMChain` we'd previously initialized to create this company name.\n",
@ -156,7 +156,7 @@
"source": [
"## Create a custom chain with the `Chain` class\n",
"\n",
"LangChain provides many chains out of the box, but sometimes you may want to create a custom chains for your specific use case. For this example, we will create a custom chain that concatenates the outputs of 2 `LLMChain`s.\n",
"LangChain provides many chains out of the box, but sometimes you may want to create a custom chain for your specific use case. For this example, we will create a custom chain that concatenates the outputs of 2 `LLMChain`s.\n",
"\n",
"In order to create a custom chain:\n",
"1. Start by subclassing the `Chain` class,\n",

View File

@ -7,9 +7,8 @@ The examples here are all end-to-end chains for specific applications.
They are broken up into three categories:
1. `Generic Chains <./generic_how_to.html>`_: Generic chains that are meant to help build other chains rather than serve a particular purpose.
2. `CombineDocuments Chains <./combine_docs_how_to.html>`_: Chains aimed at making it easy to work with documents (question answering, summarization, etc).
3. `Utility Chains <./utility_how_to.html>`_: Chains consisting of an LLMChain interacting with a specific util.
4. `Asynchronous <./async_chain.html>`_: Covering asynchronous functionality.
2. `Utility Chains <./utility_how_to.html>`_: Chains consisting of an LLMChain interacting with a specific util.
3. `Asynchronous <./async_chain.html>`_: Covering asynchronous functionality.
.. toctree::
:maxdepth: 1
@ -17,8 +16,8 @@ They are broken up into three categories:
:hidden:
./generic_how_to.rst
./combine_docs_how_to.rst
./utility_how_to.rst
./async_chain.ipynb
In addition to different types of chains, we also have the following how-to guides for working with chains in general:

View File

@ -6,16 +6,6 @@ They vary greatly in complexity and are a combination of generic, highly configura
## Sequential Chain
This is a specific type of chain where multiple other chains are run in sequence, with the outputs being added as inputs
to the next. A subtype of this type of chain is the `SimpleSequentialChain`, where all subchains have only one input and one output,
to the next. A subtype of this type of chain is the [`SimpleSequentialChain`](./generic/sequential_chains.html#simplesequentialchain), where all subchains have only one input and one output,
and the output of one is therefore used as sole input to the next chain.
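
A minimal sketch of the `SimpleSequentialChain` pattern just described, assuming an OpenAI API key in the environment; both prompts are illustrative:

```python
# A minimal sketch of a SimpleSequentialChain: the single output of the
# first chain becomes the single input of the second. Prompts are
# illustrative; an OpenAI API key is assumed to be set.
from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain, SimpleSequentialChain

llm = OpenAI(temperature=0.7)

name_chain = LLMChain(
    llm=llm,
    prompt=PromptTemplate(
        input_variables=["product"],
        template="What is a good name for a company that makes {product}?",
    ),
)
slogan_chain = LLMChain(
    llm=llm,
    prompt=PromptTemplate(
        input_variables=["company"],
        template="Write a slogan for the company {company}.",
    ),
)

overall_chain = SimpleSequentialChain(chains=[name_chain, slogan_chain])
print(overall_chain.run("colorful socks"))
```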
## CombineDocuments Chains
These are a subset of chains designed to work with documents. There are two pieces to consider:
1. The underlying chain method (eg, how the documents are combined)
2. Use cases for these types of chains.
For the first, please see [this documentation](combine_docs.md) for more detailed information on the types of chains LangChain supports.
For the second, please see the Use Cases section for more information on [question answering](/use_cases/question_answering.md),
[question answering with sources](/use_cases/qa_with_sources.md), and [summarization](/use_cases/summarization.md).
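
As a concrete illustration of the first piece, here is a hedged sketch of a question-answering chain using the "stuff" combine method; it assumes `docs` is a list of already-loaded `Document` objects and an OpenAI API key is set:

```python
# A hedged sketch of a CombineDocuments-style chain: question answering
# with the "stuff" method. Assumes `docs` holds already-loaded Documents.
from langchain.llms import OpenAI
from langchain.chains.question_answering import load_qa_chain

chain = load_qa_chain(OpenAI(temperature=0), chain_type="stuff")
answer = chain.run(input_documents=docs, question="What did the president say?")
```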

View File

@ -0,0 +1,116 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "9f98a15e",
"metadata": {},
"source": [
"# CoNLL-U\n",
"This is an example of how to load a file in [CoNLL-U](https://universaldependencies.org/format.html) format. The whole file is treated as one document. The example data (`conllu.conllu`) is based on one of the standard UD/CoNLL-U examples."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "d9b2e33e",
"metadata": {},
"outputs": [],
"source": [
"from langchain.document_loaders import CoNLLULoader"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "5b5eec48",
"metadata": {},
"outputs": [],
"source": [
"loader = CoNLLULoader(\"example_data/conllu.conllu\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "10f3f725",
"metadata": {},
"outputs": [],
"source": [
"document = loader.load()"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "acbb3579",
"metadata": {},
"outputs": [],
"source": [
"document"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.8.8"
},
"toc": {
"base_numbering": 1,
"nav_menu": {},
"number_sections": true,
"sideBar": true,
"skip_h1_title": false,
"title_cell": "Table of Contents",
"title_sidebar": "Contents",
"toc_cell": false,
"toc_position": {},
"toc_section_display": true,
"toc_window_display": false
},
"varInspector": {
"cols": {
"lenName": 16,
"lenType": 16,
"lenVar": 40
},
"kernels_config": {
"python": {
"delete_cmd_postfix": "",
"delete_cmd_prefix": "del ",
"library": "var_list.py",
"varRefreshCmd": "print(var_dic_list())"
},
"r": {
"delete_cmd_postfix": ") ",
"delete_cmd_prefix": "rm(",
"library": "var_list.r",
"varRefreshCmd": "cat(var_dic_list()) "
}
},
"types_to_exclude": [
"module",
"function",
"builtin_function_or_method",
"instance",
"_Feature"
],
"window_display": false
}
},
"nbformat": 4,
"nbformat_minor": 5
}

View File

@ -0,0 +1,102 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "d9826810",
"metadata": {},
"source": [
"# Copy Paste\n",
"\n",
"This notebook covers how to load a document object from something you just want to copy and paste. In this case, you don't even need to use a DocumentLoader, but rather can just construct the Document directly."
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "fd9e71a2",
"metadata": {},
"outputs": [],
"source": [
"from langchain.docstore.document import Document"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "f40d3f30",
"metadata": {},
"outputs": [],
"source": [
"text = \"..... put the text you copy pasted here......\""
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "d409bdba",
"metadata": {},
"outputs": [],
"source": [
"doc = Document(page_content=text)"
]
},
{
"cell_type": "markdown",
"id": "cc0eff72",
"metadata": {},
"source": [
"## Metadata\n",
"If you want to add metadata about the where you got this piece of text, you easily can with the metadata key."
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "fe3aa5aa",
"metadata": {},
"outputs": [],
"source": [
"metadata = {\"source\": \"internet\", \"date\": \"Friday\"}"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "827d4e91",
"metadata": {},
"outputs": [],
"source": [
"doc = Document(page_content=text, metadata=metadata)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "c986a43d",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.1"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

View File

@ -6,7 +6,7 @@
"metadata": {},
"source": [
"# Directory Loader\n",
"This covers how to use the DirectoryLoader to load all documents in a directory. Under the hood, this uses the [UnstructuredLoader](./unstructured_file.ipynb)"
"This covers how to use the DirectoryLoader to load all documents in a directory. Under the hood, by default this uses the [UnstructuredLoader](./unstructured_file.ipynb)"
]
},
{
@ -29,7 +29,7 @@
},
{
"cell_type": "code",
"execution_count": 7,
"execution_count": 2,
"id": "891fe56f",
"metadata": {},
"outputs": [],
@ -39,7 +39,7 @@
},
{
"cell_type": "code",
"execution_count": 8,
"execution_count": 3,
"id": "addfe9cf",
"metadata": {},
"outputs": [],
@ -49,7 +49,7 @@
},
{
"cell_type": "code",
"execution_count": 9,
"execution_count": 4,
"id": "b042086d",
"metadata": {},
"outputs": [
@ -59,7 +59,67 @@
"1"
]
},
"execution_count": 9,
"execution_count": 4,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"len(docs)"
]
},
{
"cell_type": "markdown",
"id": "c5652850",
"metadata": {},
"source": [
"## Change loader class\n",
"By default this uses the UnstructuredLoader class. However, you can change up the type of loader pretty easily."
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "81c92da3",
"metadata": {},
"outputs": [],
"source": [
"from langchain.document_loaders import TextLoader"
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "ab38ee36",
"metadata": {},
"outputs": [],
"source": [
"loader = DirectoryLoader('../', glob=\"**/*.md\", loader_cls=TextLoader)"
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "25c8740f",
"metadata": {},
"outputs": [],
"source": [
"docs = loader.load()"
]
},
{
"cell_type": "code",
"execution_count": 8,
"id": "38337763",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"1"
]
},
"execution_count": 8,
"metadata": {},
"output_type": "execute_result"
}
@ -71,7 +131,7 @@
{
"cell_type": "code",
"execution_count": null,
"id": "cbc8256b",
"id": "984c8429",
"metadata": {},
"outputs": [],
"source": []
@ -93,7 +153,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.9"
"version": "3.9.1"
}
},
"nbformat": 4,

View File

@ -5,9 +5,9 @@
"id": "56ac1584",
"metadata": {},
"source": [
"# EveryNote\n",
"# EverNote\n",
"\n",
"How to load EveryNote file from disk."
"How to load EverNote file from disk."
]
},
{
@ -41,9 +41,9 @@
}
],
"source": [
"from langchain.document_loaders import EveryNoteLoader\n",
"from langchain.document_loaders import EverNoteLoader\n",
"\n",
"loader = EveryNoteLoader(\"example_data/testing.enex\")\n",
"loader = EverNoteLoader(\"example_data/testing.enex\")\n",
"loader.load()"
]
},

View File

@ -0,0 +1,8 @@
# sent_id = 1
# text = They buy and sell books.
1 They they PRON PRP Case=Nom|Number=Plur 2 nsubj 2:nsubj|4:nsubj _
2 buy buy VERB VBP Number=Plur|Person=3|Tense=Pres 0 root 0:root _
3 and and CONJ CC _ 4 cc 4:cc _
4 sell sell VERB VBP Number=Plur|Person=3|Tense=Pres 2 conj 0:root|2:conj _
5 books book NOUN NNS Number=Plur 2 obj 2:obj|4:obj SpaceAfter=No
6 . . PUNCT . _ 2 punct 2:punct _

View File

@ -0,0 +1,64 @@
{
"participants": [{"name": "User 1"}, {"name": "User 2"}],
"messages": [
{"sender_name": "User 2", "timestamp_ms": 1675597571851, "content": "Bye!"},
{
"sender_name": "User 1",
"timestamp_ms": 1675597435669,
"content": "Oh no worries! Bye",
},
{
"sender_name": "User 2",
"timestamp_ms": 1675596277579,
"content": "No Im sorry it was my mistake, the blue one is not for sale",
},
{
"sender_name": "User 1",
"timestamp_ms": 1675595140251,
"content": "I thought you were selling the blue one!",
},
{
"sender_name": "User 1",
"timestamp_ms": 1675595109305,
"content": "Im not interested in this bag. Im interested in the blue one!",
},
{
"sender_name": "User 2",
"timestamp_ms": 1675595068468,
"content": "Here is $129",
},
{
"sender_name": "User 2",
"timestamp_ms": 1675595060730,
"photos": [
{"uri": "url_of_some_picture.jpg", "creation_timestamp": 1675595059}
],
},
{
"sender_name": "User 2",
"timestamp_ms": 1675595045152,
"content": "Online is at least $100",
},
{
"sender_name": "User 1",
"timestamp_ms": 1675594799696,
"content": "How much do you want?",
},
{
"sender_name": "User 2",
"timestamp_ms": 1675577876645,
"content": "Goodmorning! $50 is too low.",
},
{
"sender_name": "User 1",
"timestamp_ms": 1675549022673,
"content": "Hi! Im interested in your bag. Im offering $50. Let me know if you are interested. Thanks!",
},
],
"title": "User 1 and User 2 chat",
"is_still_participant": true,
"thread_path": "inbox/User 1 and User 2 chat",
"magic_words": [],
"image": {"uri": "image_of_the_chat.jpg", "creation_timestamp": 1675549016},
"joinable_mode": {"mode": 1, "link": ""},
}

View File

@ -0,0 +1,83 @@
{
"cells": [
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"# Notebook\n",
"\n",
"This notebook covers how to load data from an .ipynb notebook into a format suitable by LangChain."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from langchain.document_loaders import NotebookLoader"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"loader = NotebookLoader(\"example_data/notebook.ipynb\")"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"`NotebookLoader.load()` loads the `.ipynb` notebook file into a `Document` object.\n",
"\n",
"**Parameters**:\n",
"\n",
"* `include_outputs` (bool): whether to include cell outputs in the resulting document (default is False).\n",
"* `max_output_length` (int): the maximum number of characters to include from each cell output (default is 10).\n",
"* `remove_newline` (bool): whether to remove newline characters from the cell sources and outputs (default is False).\n",
"* `traceback` (bool): whether to include full traceback (default is False)."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"loader.load(include_outputs=True, max_output_length=20, remove_newline=True)"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.1"
},
"orig_nbformat": 4,
"vscode": {
"interpreter": {
"hash": "981b6680a42bdb5eb22187741e1607b3aae2cf73db800d1af1f268d1de6a1f70"
}
}
},
"nbformat": 4,
"nbformat_minor": 2
}

View File

@ -0,0 +1,31 @@
{
"name": "Grace 🧤",
"type": "personal_chat",
"id": 2730825451,
"messages": [
{
"id": 1980499,
"type": "message",
"date": "2020-01-01T00:00:02",
"from": "Henry",
"from_id": 4325636679,
"text": "It's 2020..."
},
{
"id": 1980500,
"type": "message",
"date": "2020-01-01T00:00:04",
"from": "Henry",
"from_id": 4325636679,
"text": "Fireworks!"
},
{
"id": 1980501,
"type": "message",
"date": "2020-01-01T00:00:05",
"from": "Grace 🧤 🍒",
"from_id": 4720225552,
"text": "You're a minute late!"
}
]
}

View File

@ -0,0 +1,77 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Facebook Chat\n",
"\n",
"This notebook covers how to load data from the Facebook Chats into a format that can be ingested into LangChain."
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [],
"source": [
"from langchain.document_loaders import FacebookChatLoader"
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {},
"outputs": [],
"source": [
"loader = FacebookChatLoader(\"example_data/facebook_chat.json\")"
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[Document(page_content='User 2 on 2023-02-05 12:46:11: Bye!\\n\\nUser 1 on 2023-02-05 12:43:55: Oh no worries! Bye\\n\\nUser 2 on 2023-02-05 12:24:37: No Im sorry it was my mistake, the blue one is not for sale\\n\\nUser 1 on 2023-02-05 12:05:40: I thought you were selling the blue one!\\n\\nUser 1 on 2023-02-05 12:05:09: Im not interested in this bag. Im interested in the blue one!\\n\\nUser 2 on 2023-02-05 12:04:28: Here is $129\\n\\nUser 2 on 2023-02-05 12:04:05: Online is at least $100\\n\\nUser 1 on 2023-02-05 11:59:59: How much do you want?\\n\\nUser 2 on 2023-02-05 07:17:56: Goodmorning! $50 is too low.\\n\\nUser 1 on 2023-02-04 23:17:02: Hi! Im interested in your bag. Im offering $50. Let me know if you are interested. Thanks!\\n\\n', lookup_str='', metadata={'source': 'docs/modules/document_loaders/examples/example_data/facebook_chat.json'}, lookup_index=0)]"
]
},
"execution_count": 3,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"loader.load()"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.1"
},
"vscode": {
"interpreter": {
"hash": "384707f4965e853a82006e90614c2e1a578ea1f6eb0ee07a1dd78a657d37dd67"
}
}
},
"nbformat": 4,
"nbformat_minor": 2
}

View File

@ -0,0 +1,191 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "4babfba5",
"metadata": {},
"source": [
"# GitBook\n",
"How to pull page data from any GitBook."
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "ff49b177",
"metadata": {},
"outputs": [],
"source": [
"from langchain.document_loaders import GitbookLoader"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "849a8d52",
"metadata": {},
"outputs": [],
"source": [
"loader = GitbookLoader(\"https://docs.gitbook.com\")"
]
},
{
"cell_type": "markdown",
"id": "65d5ddce",
"metadata": {},
"source": [
"### Load from single GitBook page"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "c2826836",
"metadata": {},
"outputs": [],
"source": [
"page_data = loader.load()"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "fefa2adc",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[Document(page_content='Introduction to GitBook\\nGitBook is a modern documentation platform where teams can document everything from products to internal knowledge bases and APIs.\\nWe want to help \\nteams to work more efficiently\\n by creating a simple yet powerful platform for them to \\nshare their knowledge\\n.\\nOur mission is to make a \\nuser-friendly\\n and \\ncollaborative\\n product for everyone to create, edit and share knowledge through documentation.\\nPublish your documentation in 5 easy steps\\nImport\\n\\nMove your existing content to GitBook with ease.\\nGit Sync\\n\\nBenefit from our bi-directional synchronisation with GitHub and GitLab.\\nOrganise your content\\n\\nCreate pages and spaces and organize them into collections\\nCollaborate\\n\\nInvite other users and collaborate asynchronously with ease.\\nPublish your docs\\n\\nShare your documentation with selected users or with everyone.\\nNext\\n - Getting started\\nOverview\\nLast modified \\n3mo ago', lookup_str='', metadata={'source': 'https://docs.gitbook.com', 'title': 'Introduction to GitBook'}, lookup_index=0)]"
]
},
"execution_count": 4,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"page_data"
]
},
{
"cell_type": "markdown",
"id": "c325048c",
"metadata": {},
"source": [
"### Load from all paths in a given GitBook\n",
"For this to work, the GitbookLoader needs to be initialized with the root path (`https://docs.gitbook.com` in this example) and have `load_all_paths` set to `True`."
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "938ff4ee",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Fetching text from https://docs.gitbook.com/\n",
"Fetching text from https://docs.gitbook.com/getting-started/overview\n",
"Fetching text from https://docs.gitbook.com/getting-started/import\n",
"Fetching text from https://docs.gitbook.com/getting-started/git-sync\n",
"Fetching text from https://docs.gitbook.com/getting-started/content-structure\n",
"Fetching text from https://docs.gitbook.com/getting-started/collaboration\n",
"Fetching text from https://docs.gitbook.com/getting-started/publishing\n",
"Fetching text from https://docs.gitbook.com/tour/quick-find\n",
"Fetching text from https://docs.gitbook.com/tour/editor\n",
"Fetching text from https://docs.gitbook.com/tour/customization\n",
"Fetching text from https://docs.gitbook.com/tour/member-management\n",
"Fetching text from https://docs.gitbook.com/tour/pdf-export\n",
"Fetching text from https://docs.gitbook.com/tour/activity-history\n",
"Fetching text from https://docs.gitbook.com/tour/insights\n",
"Fetching text from https://docs.gitbook.com/tour/notifications\n",
"Fetching text from https://docs.gitbook.com/tour/internationalization\n",
"Fetching text from https://docs.gitbook.com/tour/keyboard-shortcuts\n",
"Fetching text from https://docs.gitbook.com/tour/seo\n",
"Fetching text from https://docs.gitbook.com/advanced-guides/custom-domain\n",
"Fetching text from https://docs.gitbook.com/advanced-guides/advanced-sharing-and-security\n",
"Fetching text from https://docs.gitbook.com/advanced-guides/integrations\n",
"Fetching text from https://docs.gitbook.com/billing-and-admin/account-settings\n",
"Fetching text from https://docs.gitbook.com/billing-and-admin/plans\n",
"Fetching text from https://docs.gitbook.com/troubleshooting/faqs\n",
"Fetching text from https://docs.gitbook.com/troubleshooting/hard-refresh\n",
"Fetching text from https://docs.gitbook.com/troubleshooting/report-bugs\n",
"Fetching text from https://docs.gitbook.com/troubleshooting/connectivity-issues\n",
"Fetching text from https://docs.gitbook.com/troubleshooting/support\n"
]
}
],
"source": [
"loader = GitbookLoader(\"https://docs.gitbook.com\", load_all_paths=True)\n",
"all_pages_data = loader.load()"
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "db92fc39",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"fetched 28 documents.\n"
]
},
{
"data": {
"text/plain": [
"Document(page_content=\"Import\\nFind out how to easily migrate your existing documentation and which formats are supported.\\nThe import function allows you to migrate and unify existing documentation in GitBook. You can choose to import single or multiple pages although limits apply. \\nPermissions\\nAll members with editor permission or above can use the import feature.\\nSupported formats\\nGitBook supports imports from websites or files that are:\\nMarkdown (.md or .markdown)\\nHTML (.html)\\nMicrosoft Word (.docx).\\nWe also support import from:\\nConfluence\\nNotion\\nGitHub Wiki\\nQuip\\nDropbox Paper\\nGoogle Docs\\nYou can also upload a ZIP\\n \\ncontaining HTML or Markdown files when \\nimporting multiple pages.\\nNote: this feature is in beta.\\nFeel free to suggest import sources we don't support yet and \\nlet us know\\n if you have any issues.\\nImport panel\\nWhen you create a new space, you'll have the option to import content straight away:\\nThe new page menu\\nImport a page or subpage by selecting \\nImport Page\\n from the New Page menu, or \\nImport Subpage\\n in the page action menu, found in the table of contents:\\nImport from the page action menu\\nWhen you choose your input source, instructions will explain how to proceed.\\nAlthough GitBook supports importing content from different kinds of sources, the end result might be different from your source due to differences in product features and document format.\\nLimits\\nGitBook currently has the following limits for imported content:\\nThe maximum number of pages that can be uploaded in a single import is \\n20.\\nThe maximum number of files (images etc.) that can be uploaded in a single import is \\n20.\\nGetting started - \\nPrevious\\nOverview\\nNext\\n - Getting started\\nGit Sync\\nLast modified \\n4mo ago\", lookup_str='', metadata={'source': 'https://docs.gitbook.com/getting-started/import', 'title': 'Import'}, lookup_index=0)"
]
},
"execution_count": 7,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"print(f\"fetched {len(all_pages_data)} documents.\")\n",
"# show second document\n",
"all_pages_data[2]"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "92cb3eda",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.1"
},
"vscode": {
"interpreter": {
"hash": "2d002ec47225e662695b764370d7966aa11eeb4302edc2f497bbf96d49c8f899"
}
}
},
"nbformat": 4,
"nbformat_minor": 5
}

View File

@ -0,0 +1,101 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "4babfba5",
"metadata": {},
"source": [
"# Hacker News\n",
"How to pull page data and comments from Hacker News"
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "ff49b177",
"metadata": {},
"outputs": [],
"source": [
"from langchain.document_loaders import HNLoader"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "849a8d52",
"metadata": {},
"outputs": [],
"source": [
"loader = HNLoader(\"https://news.ycombinator.com/item?id=34817881\")"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "c2826836",
"metadata": {},
"outputs": [],
"source": [
"data = loader.load()"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "fefa2adc",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[Document(page_content=\"delta_p_delta_x 18 hours ago \\n | next [] \\n\\nAstrophysical and cosmological simulations are often insightful. They're also very cross-disciplinary; besides the obvious astrophysics, there's networking and sysadmin, parallel computing and algorithm theory (so that the simulation programs are actually fast but still accurate), systems design, and even a bit of graphic design for the visualisations.Some of my favourite simulation projects:- IllustrisTNG: https://www.tng-project.org/- SWIFT: https://swift.dur.ac.uk/- CO5BOLD: https://www.astro.uu.se/~bf/co5bold_main.html (which produced these animations of a red-giant star: https://www.astro.uu.se/~bf/movie/AGBmovie.html)- AbacusSummit: https://abacussummit.readthedocs.io/en/latest/And I can add the simulations in the article, too.\\n \\nreply\", lookup_str='', metadata={'source': 'https://news.ycombinator.com/item?id=34817881', 'title': 'What Lights the Universes Standard Candles?'}, lookup_index=0),\n",
" Document(page_content=\"andrewflnr 19 hours ago \\n | prev | next [] \\n\\nWhoa. I didn't know the accretion theory of Ia supernovae was dead, much less that it had been since 2011.\\n \\nreply\", lookup_str='', metadata={'source': 'https://news.ycombinator.com/item?id=34817881', 'title': 'What Lights the Universes Standard Candles?'}, lookup_index=0),\n",
" Document(page_content='andreareina 18 hours ago \\n | prev | next [] \\n\\nThis seems to be the paper https://academic.oup.com/mnras/article/517/4/5260/6779709\\n \\nreply', lookup_str='', metadata={'source': 'https://news.ycombinator.com/item?id=34817881', 'title': 'What Lights the Universes Standard Candles?'}, lookup_index=0),\n",
" Document(page_content=\"andreareina 18 hours ago \\n | prev [] \\n\\nWouldn't double detonation show up as variance in the brightness?\\n \\nreply\", lookup_str='', metadata={'source': 'https://news.ycombinator.com/item?id=34817881', 'title': 'What Lights the Universes Standard Candles?'}, lookup_index=0)]"
]
},
"execution_count": 4,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"data"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "938ff4ee",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.1"
},
"vscode": {
"interpreter": {
"hash": "c05c795047059754c96cf5f30fd1289e4658e92c92d00704a3cddb24e146e3ef"
}
}
},
"nbformat": 4,
"nbformat_minor": 5
}

File diff suppressed because one or more lines are too long

View File

@ -0,0 +1,145 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "f70e6118",
"metadata": {},
"source": [
"# Images\n",
"\n",
"This covers how to load images such as JPGs PNGs into a document format that we can use downstream."
]
},
{
"cell_type": "markdown",
"id": "09d64998",
"metadata": {},
"source": [
"## Using Unstructured"
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "0cc0cd42",
"metadata": {},
"outputs": [],
"source": [
"from langchain.document_loaders.image import UnstructuredImageLoader"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "082d557c",
"metadata": {},
"outputs": [],
"source": [
"loader = UnstructuredImageLoader(\"layout-parser-paper-fast.jpg\")"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "df11c953",
"metadata": {},
"outputs": [],
"source": [
"data = loader.load()"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "4284d44c",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"Document(page_content=\"LayoutParser: A Unified Toolkit for Deep\\nLearning Based Document Image Analysis\\n\\n\\nZxjiang Shen' (F3}, Ruochen Zhang”, Melissa Dell*, Benjamin Charles Germain\\nLeet, Jacob Carlson, and Weining LiF\\n\\n\\nsugehen\\n\\nshangthrows, et\\n\\n“Abstract. Recent advanocs in document image analysis (DIA) have been\\npimarliy driven bythe application of neural networks dell roar\\n{uteomer could be aly deployed in production and extended fo farther\\n[nvetigtion. However, various factory ke lcely organize codebanee\\nsnd sophisticated modal cnigurations compat the ey ree of\\nerin! innovation by wide sence, Though there have been sng\\nHors to improve reuablty and simplify deep lees (DL) mode\\naon, sone of them ae optimized for challenge inthe demain of DIA,\\nThis roprscte a major gap in the extng fol, sw DIA i eal to\\nscademic research acon wie range of dpi in the social ssencee\\n[rary for streamlining the sage of DL in DIA research and appicn\\ntons The core LayoutFaraer brary comes with a sch of simple and\\nIntative interfaee or applying and eutomiing DI. odel fr Inyo de\\npltfom for sharing both protrined modes an fal document dist\\n{ation pipeline We demonutate that LayootPareer shea fr both\\nlightweight and lrgeseledgtieation pipelines in eal-word uae ces\\nThe leary pblely smal at Btspe://layost-pareergsthab So\\n\\n\\n\\nKeywords: Document Image Analysis» Deep Learning Layout Analysis\\nCharacter Renguition - Open Serres dary « Tol\\n\\n\\nIntroduction\\n\\n\\nDeep Learning(DL)-based approaches are the state-of-the-art for a wide range of\\ndoctiment image analysis (DIA) tea including document image clasiffeation [I]\\n\", lookup_str='', metadata={'source': 'layout-parser-paper-fast.jpg'}, lookup_index=0)"
]
},
"execution_count": 4,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"data[0]"
]
},
{
"cell_type": "markdown",
"id": "09957371",
"metadata": {},
"source": [
"### Retain Elements\n",
"\n",
"Under the hood, Unstructured creates different \"elements\" for different chunks of text. By default we combine those together, but you can easily keep that separation by specifying `mode=\"elements\"`."
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "0fab833b",
"metadata": {},
"outputs": [],
"source": [
"loader = UnstructuredImageLoader(\"layout-parser-paper-fast.jpg\", mode=\"elements\")"
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "c3e8ff1b",
"metadata": {},
"outputs": [],
"source": [
"data = loader.load()"
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "43c23d2d",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"Document(page_content='LayoutParser: A Unified Toolkit for Deep\\nLearning Based Document Image Analysis\\n', lookup_str='', metadata={'source': 'layout-parser-paper-fast.jpg', 'filename': 'layout-parser-paper-fast.jpg', 'page_number': 1, 'category': 'Title'}, lookup_index=0)"
]
},
"execution_count": 7,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"data[0]"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.8.13"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

View File

@ -0,0 +1,98 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Notebook\n",
"\n",
"This notebook covers how to load data from an .ipynb notebook into a format suitable by LangChain."
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [],
"source": [
"from langchain.document_loaders import NotebookLoader"
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {},
"outputs": [],
"source": [
"loader = NotebookLoader(\"example_data/notebook.ipynb\", include_outputs=True, max_output_length=20, remove_newline=True)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"`NotebookLoader.load()` loads the `.ipynb` notebook file into a `Document` object.\n",
"\n",
"**Parameters**:\n",
"\n",
"* `include_outputs` (bool): whether to include cell outputs in the resulting document (default is False).\n",
"* `max_output_length` (int): the maximum number of characters to include from each cell output (default is 10).\n",
"* `remove_newline` (bool): whether to remove newline characters from the cell sources and outputs (default is False).\n",
"* `traceback` (bool): whether to include full traceback (default is False)."
]
},
{
"cell_type": "code",
"execution_count": 5,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[Document(page_content='\\'markdown\\' cell: \\'[\\'# Notebook\\', \\'\\', \\'This notebook covers how to load data from an .ipynb notebook into a format suitable by LangChain.\\']\\'\\n\\n \\'code\\' cell: \\'[\\'from langchain.document_loaders import NotebookLoader\\']\\'\\n\\n \\'code\\' cell: \\'[\\'loader = NotebookLoader(\"example_data/notebook.ipynb\")\\']\\'\\n\\n \\'markdown\\' cell: \\'[\\'`NotebookLoader.load()` loads the `.ipynb` notebook file into a `Document` object.\\', \\'\\', \\'**Parameters**:\\', \\'\\', \\'* `include_outputs` (bool): whether to include cell outputs in the resulting document (default is False).\\', \\'* `max_output_length` (int): the maximum number of characters to include from each cell output (default is 10).\\', \\'* `remove_newline` (bool): whether to remove newline characters from the cell sources and outputs (default is False).\\', \\'* `traceback` (bool): whether to include full traceback (default is False).\\']\\'\\n\\n \\'code\\' cell: \\'[\\'loader.load(include_outputs=True, max_output_length=20, remove_newline=True)\\']\\'\\n\\n', lookup_str='', metadata={'source': 'example_data/notebook.ipynb'}, lookup_index=0)]"
]
},
"execution_count": 5,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"loader.load()"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.1"
},
"vscode": {
"interpreter": {
"hash": "981b6680a42bdb5eb22187741e1607b3aae2cf73db800d1af1f268d1de6a1f70"
}
}
},
"nbformat": 4,
"nbformat_minor": 2
}
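
A compact restatement of the constructor parameters described above, as a runnable sketch (the flags and defaults are the ones listed in the notebook):

```python
from langchain.document_loaders import NotebookLoader

# All options are set on the loader itself; load() then returns Documents.
loader = NotebookLoader(
    "example_data/notebook.ipynb",
    include_outputs=True,   # include cell outputs (default False)
    max_output_length=20,   # max characters kept per output (default 10)
    remove_newline=True,    # strip newlines from sources/outputs (default False)
)
docs = loader.load()
```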

File diff suppressed because one or more lines are too long


@ -0,0 +1,93 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "4bdaea79",
"metadata": {},
"source": [
"# Subtitle Files\n",
"How to load data from subtitle (`.srt`) files"
]
},
{
"cell_type": "code",
"execution_count": 24,
"id": "2cbb7f5c",
"metadata": {},
"outputs": [],
"source": [
"from langchain.document_loaders import SRTLoader"
]
},
{
"cell_type": "code",
"execution_count": 25,
"id": "865d8a14",
"metadata": {},
"outputs": [],
"source": [
"loader = SRTLoader(\"example_data/Star_Wars_The_Clone_Wars_S06E07_Crisis_at_the_Heart.srt\")"
]
},
{
"cell_type": "code",
"execution_count": 26,
"id": "173a9234",
"metadata": {},
"outputs": [],
"source": [
"docs = loader.load()"
]
},
{
"cell_type": "code",
"execution_count": 27,
"id": "15e00030",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'<i>Corruption discovered\\nat the core of the Banking Clan!</i> <i>Reunited, Rush Clovis\\nand Senator A'"
]
},
"execution_count": 27,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"docs[0].page_content[:100]"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "3b7a8dc4",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.1"
}
},
"nbformat": 4,
"nbformat_minor": 5
}


@ -0,0 +1,84 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "33205b12",
"metadata": {},
"source": [
"# Telegram\n",
"\n",
"This notebook covers how to load data from Telegram into a format that can be ingested into LangChain."
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "90b69c94",
"metadata": {},
"outputs": [],
"source": [
"from langchain.document_loaders import TelegramChatLoader"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "13deb0f5",
"metadata": {},
"outputs": [],
"source": [
"loader = TelegramChatLoader(\"example_data/telegram.json\")"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "9ccc1e2f",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[Document(page_content=\"Henry on 2020-01-01T00:00:02: It's 2020...\\n\\nHenry on 2020-01-01T00:00:04: Fireworks!\\n\\nGrace 🧤 ðŸ\\x8d on 2020-01-01T00:00:05: You're a minute late!\\n\\n\", lookup_str='', metadata={'source': 'example_data/telegram.json'}, lookup_index=0)]"
]
},
"execution_count": 3,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"loader.load()"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "3e64cac2",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.1"
}
},
"nbformat": 4,
"nbformat_minor": 5
}


@ -17,7 +17,7 @@
"outputs": [],
"source": [
"# # Install package\n",
"!pip install unstructured[local-inference]\n",
"!pip install \"unstructured[local-inference]\"\n",
"!pip install \"detectron2@git+https://github.com/facebookresearch/detectron2.git@v0.6#egg=detectron2\"\n",
"!pip install layoutparser[layoutmodels,tesseract]"
]


@ -0,0 +1,137 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "39af9ecd",
"metadata": {},
"source": [
"# Word Documents\n",
"\n",
"This covers how to load Word documents into a document format that we can use downstream."
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "721c48aa",
"metadata": {},
"outputs": [],
"source": [
"from langchain.document_loaders import UnstructuredWordDocumentLoader"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "9d3d0e35",
"metadata": {},
"outputs": [],
"source": [
"loader = UnstructuredWordDocumentLoader(\"fake.docx\")"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "06073f91",
"metadata": {},
"outputs": [],
"source": [
"data = loader.load()"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "c9adc5cb",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[Document(page_content='Lorem ipsum dolor sit amet.', lookup_str='', metadata={'source': 'fake.docx'}, lookup_index=0)]"
]
},
"execution_count": 4,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"data"
]
},
{
"cell_type": "markdown",
"id": "525d6b67",
"metadata": {},
"source": [
"## Retain Elements\n",
"\n",
"Under the hood, Unstructured creates different \"elements\" for different chunks of text. By default we combine those together, but you can easily keep that separation by specifying `mode=\"elements\"`."
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "064f9162",
"metadata": {},
"outputs": [],
"source": [
"loader = UnstructuredWordDocumentLoader(\"fake.docx\", mode=\"elements\")"
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "abefbbdb",
"metadata": {},
"outputs": [],
"source": [
"data = loader.load()"
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "a547c534",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"Document(page_content='Lorem ipsum dolor sit amet.', lookup_str='', metadata={'source': 'fake.docx', 'filename': 'fake.docx', 'category': 'Title'}, lookup_index=0)"
]
},
"execution_count": 7,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"data[0]"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.8.13"
}
},
"nbformat": 4,
"nbformat_minor": 5
}
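
Loaded `Document` objects plug straight into the rest of the pipeline. A minimal sketch, reusing `fake.docx` from above and the `CharacterTextSplitter` used elsewhere in these docs to chunk the text before indexing (chunk sizes are illustrative):

```python
from langchain.document_loaders import UnstructuredWordDocumentLoader
from langchain.text_splitter import CharacterTextSplitter

loader = UnstructuredWordDocumentLoader("fake.docx")
data = loader.load()

# Chunk the document text before embedding/indexing downstream.
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
texts = text_splitter.split_documents(data)
```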


@ -27,10 +27,14 @@ There are a lot of different document loaders that LangChain supports. Below are
`Roam <./examples/roam.html>`_: A walkthrough of how to load data from a Roam file export.
`EveryNote <./examples/everynote.html>`_: A walkthrough of how to load data from a EveryNote (`.enex`) file.
`EverNote <./examples/evernote.html>`_: A walkthrough of how to load data from a EverNote (`.enex`) file.
`YouTube <./examples/youtube.html>`_: A walkthrough of how to load the transcript from a YouTube video.
`Hacker News <./examples/hn.html>`_: A walkthrough of how to load a Hacker News page.
`GitBook <./examples/gitbook.html>`_: A walkthrough of how to load a GitBook page.
`s3 File <./examples/s3_file.html>`_: A walkthrough of how to load a file from s3.
`s3 Directory <./examples/s3_directory.html>`_: A walkthrough of how to load all files in a directory from s3.
@ -53,6 +57,10 @@ There are a lot of different document loaders that LangChain supports. Below are
`Online PDF <./examples/online_pdf.html>`_: A walkthrough of how to load data from an online PDF.
`CoNLL-U <./examples/CoNLL-U.html>`_: A walkthrough of how to load data from a CoNLL-U file.
`iFixit <./examples/ifixit.html>`_: A walkthrough of how to search and load data like guides, technical Q&As, and device wikis from iFixit.com.
.. toctree::
:maxdepth: 1
:glob:

docs/modules/indexes.rst Normal file

@ -0,0 +1,25 @@
Indexes
==========================
Indexes refer to ways to structure documents so that LLMs can best interact with them.
This module contains utility functions for working with documents, different types of indexes, and examples of using those indexes in chains.
LangChain provides common indices for working with data (most prominently support for vector databases).
For more complicated index structures, it is worth checking out `GPTIndex <https://gpt-index.readthedocs.io/en/latest/index.html>`_.
The following sections of documentation are provided:
- `Getting Started <./indexes/getting_started.html>`_: An overview of all the functionality LangChain provides for working with indexes.
- `Key Concepts <./indexes/key_concepts.html>`_: A conceptual guide going over the various concepts related to indexes and the tools needed to create them.
- `How-To Guides <./indexes/how_to_guides.html>`_: A collection of how-to guides. These highlight how to use all the relevant tools, the different types of vector databases, and how to use indexes in chains.
.. toctree::
:maxdepth: 1
:name: Indexes
:hidden:
./indexes/getting_started.ipynb
./indexes/key_concepts.md
./indexes/how_to_guides.rst
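
For the vector-database case called out above, the question-answering walkthrough later in this diff builds an index in two lines with `VectorstoreIndexCreator`. A minimal sketch, assuming a local text file and an OpenAI key for the default embeddings:

```python
from langchain.document_loaders import TextLoader
from langchain.indexes.vectorstore import VectorstoreIndexCreator

# Build a vector-store-backed index directly from a document loader.
loader = TextLoader('../../state_of_the_union.txt')
docsearch = VectorstoreIndexCreator().from_loaders([loader])
```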


@ -0,0 +1,550 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "134a0785",
"metadata": {},
"source": [
"# Chat Vector DB\n",
"\n",
"This notebook goes over how to set up a chain to chat with a vector database. The only difference between this chain and the [VectorDBQAChain](./vector_db_qa.ipynb) is that this allows for passing in of a chat history which can be used to allow for follow up questions."
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "70c4e529",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"from langchain.embeddings.openai import OpenAIEmbeddings\n",
"from langchain.vectorstores import Chroma\n",
"from langchain.text_splitter import CharacterTextSplitter\n",
"from langchain.llms import OpenAI\n",
"from langchain.chains import ChatVectorDBChain"
]
},
{
"cell_type": "markdown",
"id": "cdff94be",
"metadata": {},
"source": [
"Load in documents. You can replace this with a loader for whatever type of data you want"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "01c46e92",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"from langchain.document_loaders import TextLoader\n",
"loader = TextLoader('../../state_of_the_union.txt')\n",
"documents = loader.load()"
]
},
{
"cell_type": "markdown",
"id": "e9be4779",
"metadata": {},
"source": [
"If you had multiple loaders that you wanted to combine, you do something like:"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "433363a5",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"# loaders = [....]\n",
"# docs = []\n",
"# for loader in loaders:\n",
"# docs.extend(loader.load())"
]
},
{
"cell_type": "markdown",
"id": "239475d2",
"metadata": {},
"source": [
"We now split the documents, create embeddings for them, and put them in a vectorstore. This allows us to do semantic search over them."
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "a8930cf7",
"metadata": {
"tags": []
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Running Chroma using direct local API.\n",
"Using DuckDB in-memory for database. Data will be transient.\n"
]
}
],
"source": [
"text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)\n",
"documents = text_splitter.split_documents(documents)\n",
"\n",
"embeddings = OpenAIEmbeddings()\n",
"vectorstore = Chroma.from_documents(documents, embeddings)"
]
},
{
"cell_type": "markdown",
"id": "3c96b118",
"metadata": {},
"source": [
"We now initialize the ChatVectorDBChain"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "7b4110f3",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"qa = ChatVectorDBChain.from_llm(OpenAI(temperature=0), vectorstore)"
]
},
{
"cell_type": "markdown",
"id": "3872432d",
"metadata": {},
"source": [
"Here's an example of asking a question with no chat history"
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "7fe3e730",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"chat_history = []\n",
"query = \"What did the president say about Ketanji Brown Jackson\"\n",
"result = qa({\"question\": query, \"chat_history\": chat_history})"
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "bfff9cc8",
"metadata": {
"tags": []
},
"outputs": [
{
"data": {
"text/plain": [
"\" The president said that Ketanji Brown Jackson is one of the nation's top legal minds, a former top litigator in private practice, a former federal public defender, and from a family of public school educators and police officers. He also said that she is a consensus builder and has received a broad range of support from the Fraternal Order of Police to former judges appointed by Democrats and Republicans.\""
]
},
"execution_count": 7,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"result[\"answer\"]"
]
},
{
"cell_type": "markdown",
"id": "9e46edf7",
"metadata": {},
"source": [
"Here's an example of asking a question with some chat history"
]
},
{
"cell_type": "code",
"execution_count": 8,
"id": "00b4cf00",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"chat_history = [(query, result[\"answer\"])]\n",
"query = \"Did he mention who she suceeded\"\n",
"result = qa({\"question\": query, \"chat_history\": chat_history})"
]
},
{
"cell_type": "code",
"execution_count": 9,
"id": "f01828d1",
"metadata": {
"tags": []
},
"outputs": [
{
"data": {
"text/plain": [
"' Justice Stephen Breyer'"
]
},
"execution_count": 9,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"result['answer']"
]
},
{
"cell_type": "markdown",
"id": "0eaadf0f",
"metadata": {},
"source": [
"## Return Source Documents\n",
"You can also easily return source documents from the ChatVectorDBChain. This is useful for when you want to inspect what documents were returned."
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "562769c6",
"metadata": {},
"outputs": [],
"source": [
"qa = ChatVectorDBChain.from_llm(OpenAI(temperature=0), vectorstore, return_source_documents=True)"
]
},
{
"cell_type": "code",
"execution_count": 11,
"id": "ea478300",
"metadata": {},
"outputs": [],
"source": [
"chat_history = []\n",
"query = \"What did the president say about Ketanji Brown Jackson\"\n",
"result = qa({\"question\": query, \"chat_history\": chat_history})"
]
},
{
"cell_type": "code",
"execution_count": 15,
"id": "4cb75b4e",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"Document(page_content='In state after state, new laws have been passed, not only to suppress the vote, but to subvert entire elections. \\n\\nWe cannot let this happen. \\n\\nTonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while youre at it, pass the Disclose Act so Americans can know who is funding our elections. \\n\\nTonight, Id like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. \\n\\nOne of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. \\n\\nAnd I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nations top legal minds, who will continue Justice Breyers legacy of excellence.', lookup_str='', metadata={'source': '../../state_of_the_union.txt'}, lookup_index=0)"
]
},
"execution_count": 15,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"result['source_documents'][0]"
]
},
{
"cell_type": "markdown",
"source": [
"## Chat Vector DB with `search_distance`\n",
"If you are using a vector store that supports filtering by search distance, you can add a threshold value parameter."
],
"metadata": {
"collapsed": false
}
},
{
"cell_type": "code",
"execution_count": null,
"outputs": [],
"source": [
"vectordbkwargs = {\"search_distance\": 0.9}"
],
"metadata": {
"collapsed": false
}
},
{
"cell_type": "code",
"execution_count": null,
"outputs": [],
"source": [
"qa = ChatVectorDBChain.from_llm(OpenAI(temperature=0), vectorstore, return_source_documents=True)\n",
"chat_history = []\n",
"query = \"What did the president say about Ketanji Brown Jackson\"\n",
"result = qa({\"question\": query, \"chat_history\": chat_history, \"vectordbkwargs\": vectordbkwargs})"
],
"metadata": {
"collapsed": false
}
},
{
"cell_type": "markdown",
"source": [
"## Chat Vector DB with `map_reduce`\n",
"We can also use different types of combine document chains with the Chat Vector DB chain."
],
"metadata": {
"collapsed": false
}
},
{
"cell_type": "code",
"execution_count": null,
"id": "e53a9d66",
"metadata": {},
"outputs": [],
"source": [
"from langchain.chains import LLMChain\n",
"from langchain.chains.question_answering import load_qa_chain\n",
"from langchain.chains.chat_vector_db.prompts import CONDENSE_QUESTION_PROMPT"
]
},
{
"cell_type": "code",
"execution_count": 9,
"id": "bf205e35",
"metadata": {},
"outputs": [],
"source": [
"llm = OpenAI(temperature=0)\n",
"question_generator = LLMChain(llm=llm, prompt=CONDENSE_QUESTION_PROMPT)\n",
"doc_chain = load_qa_chain(llm, chain_type=\"map_reduce\")\n",
"\n",
"chain = ChatVectorDBChain(\n",
" vectorstore=vectorstore,\n",
" question_generator=question_generator,\n",
" combine_docs_chain=doc_chain,\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 10,
"id": "78155887",
"metadata": {},
"outputs": [],
"source": [
"chat_history = []\n",
"query = \"What did the president say about Ketanji Brown Jackson\"\n",
"result = chain({\"question\": query, \"chat_history\": chat_history})"
]
},
{
"cell_type": "code",
"execution_count": 11,
"id": "e54b5fa2",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"\" The president said that Ketanji Brown Jackson is one of the nation's top legal minds, a former top litigator in private practice, a former federal public defender, from a family of public school educators and police officers, a consensus builder, and has received a broad range of support from the Fraternal Order of Police to former judges appointed by Democrats and Republicans.\""
]
},
"execution_count": 11,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"result['answer']"
]
},
{
"cell_type": "markdown",
"id": "a2fe6b14",
"metadata": {},
"source": [
"## Chat Vector DB with Question Answering with sources\n",
"\n",
"You can also use this chain with the question answering with sources chain."
]
},
{
"cell_type": "code",
"execution_count": 12,
"id": "d1058fd2",
"metadata": {},
"outputs": [],
"source": [
"from langchain.chains.qa_with_sources import load_qa_with_sources_chain"
]
},
{
"cell_type": "code",
"execution_count": 13,
"id": "a6594482",
"metadata": {},
"outputs": [],
"source": [
"llm = OpenAI(temperature=0)\n",
"question_generator = LLMChain(llm=llm, prompt=CONDENSE_QUESTION_PROMPT)\n",
"doc_chain = load_qa_with_sources_chain(llm, chain_type=\"map_reduce\")\n",
"\n",
"chain = ChatVectorDBChain(\n",
" vectorstore=vectorstore,\n",
" question_generator=question_generator,\n",
" combine_docs_chain=doc_chain,\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 14,
"id": "e2badd21",
"metadata": {},
"outputs": [],
"source": [
"chat_history = []\n",
"query = \"What did the president say about Ketanji Brown Jackson\"\n",
"result = chain({\"question\": query, \"chat_history\": chat_history})"
]
},
{
"cell_type": "code",
"execution_count": 16,
"id": "edb31fe5",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"\" The president said that Ketanji Brown Jackson is one of the nation's top legal minds, a former top litigator in private practice, a former federal public defender, from a family of public school educators and police officers, a consensus builder, and has received a broad range of support from the Fraternal Order of Police to former judges appointed by Democrats and Republicans. \\nSOURCES: ../../state_of_the_union.txt\""
]
},
"execution_count": 16,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"result['answer']"
]
},
{
"cell_type": "markdown",
"id": "2324cdc6-98bf-4708-b8cd-02a98b1e5b67",
"metadata": {},
"source": [
"## Chat Vector DB with streaming to `stdout`\n",
"\n",
"Output from the chain will be streamed to `stdout` token by token in this example."
]
},
{
"cell_type": "code",
"execution_count": 10,
"id": "2efacec3-2690-4b05-8de3-a32fd2ac3911",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"from langchain.chains.llm import LLMChain\n",
"from langchain.callbacks.base import CallbackManager\n",
"from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler\n",
"from langchain.chains.chat_vector_db.prompts import CONDENSE_QUESTION_PROMPT, QA_PROMPT\n",
"from langchain.chains.question_answering import load_qa_chain\n",
"\n",
"# Construct a ChatVectorDBChain with a streaming llm for combine docs\n",
"# and a separate, non-streaming llm for question generation\n",
"llm = OpenAI(temperature=0)\n",
"streaming_llm = OpenAI(streaming=True, callback_manager=CallbackManager([StreamingStdOutCallbackHandler()]), verbose=True, temperature=0)\n",
"\n",
"question_generator = LLMChain(llm=llm, prompt=CONDENSE_QUESTION_PROMPT)\n",
"doc_chain = load_qa_chain(streaming_llm, chain_type=\"stuff\", prompt=QA_PROMPT)\n",
"\n",
"qa = ChatVectorDBChain(vectorstore=vectorstore, combine_docs_chain=doc_chain, question_generator=question_generator)"
]
},
{
"cell_type": "code",
"execution_count": 11,
"id": "fd6d43f4-7428-44a4-81bc-26fe88a98762",
"metadata": {
"tags": []
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
" The president said that Ketanji Brown Jackson is one of the nation's top legal minds, a former top litigator in private practice, a former federal public defender, and from a family of public school educators and police officers. He also said that she is a consensus builder and has received a broad range of support from the Fraternal Order of Police to former judges appointed by Democrats and Republicans."
]
}
],
"source": [
"chat_history = []\n",
"query = \"What did the president say about Ketanji Brown Jackson\"\n",
"result = qa({\"question\": query, \"chat_history\": chat_history})"
]
},
{
"cell_type": "code",
"execution_count": 12,
"id": "5ab38978-f3e8-4fa7-808c-c79dec48379a",
"metadata": {
"tags": []
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
" Justice Stephen Breyer"
]
}
],
"source": [
"chat_history = [(query, result[\"answer\"])]\n",
"query = \"Did he mention who she suceeded\"\n",
"result = qa({\"question\": query, \"chat_history\": chat_history})\n"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.1"
}
},
"nbformat": 4,
"nbformat_minor": 5
}
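
The chat-history mechanics above generalize to a simple loop: append each `(question, answer)` pair so the question generator can resolve follow-ups. A minimal sketch, reusing the `qa` chain constructed in the walkthrough:

```python
# Assumes `qa` is the ChatVectorDBChain built in the notebook above.
chat_history = []
for query in [
    "What did the president say about Ketanji Brown Jackson",
    "Did he mention who she succeeded",
]:
    result = qa({"question": query, "chat_history": chat_history})
    # Record the turn so the next question can reference this answer.
    chat_history.append((query, result["answer"]))
    print(result["answer"])
```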


@ -365,6 +365,20 @@
"chain({\"input_documents\": docs, \"question\": query}, return_only_outputs=True)"
]
},
{
"cell_type": "markdown",
"id": "d943c6c1",
"metadata": {},
"source": [
"**Batch Size**\n",
"\n",
"When using the `map_reduce` chain, one thing to keep in mind is the batch size you are using during the map step. If this is too high, it could cause rate limiting errors. You can control this by setting the batch size on the LLM used. Note that this only applies for LLMs with this parameter. Below is an example of doing so:\n",
"\n",
"```python\n",
"llm = OpenAI(batch_size=5, temperature=0)\n",
"```"
]
},
{
"cell_type": "markdown",
"id": "5bf0e1ab",


@ -7,7 +7,7 @@
"source": [
"# Question Answering\n",
"\n",
"This notebook walks through how to use LangChain for question answering over a list of documents. It covers four different types of chaings: `stuff`, `map_reduce`, `refine`, `map-rerank`. For a more in depth explanation of what these chain types are, see [here](../combine_docs.md)."
"This notebook walks through how to use LangChain for question answering over a list of documents. It covers four different types of chains: `stuff`, `map_reduce`, `refine`, `map-rerank`. For a more in depth explanation of what these chain types are, see [here](../combine_docs.md)."
]
},
{
@ -30,29 +30,24 @@
"from langchain.text_splitter import CharacterTextSplitter\n",
"from langchain.vectorstores import Chroma\n",
"from langchain.docstore.document import Document\n",
"from langchain.prompts import PromptTemplate"
"from langchain.prompts import PromptTemplate\n",
"from langchain.indexes.vectorstore import VectorstoreIndexCreator"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "291f0117",
"id": "ef9305cc",
"metadata": {},
"outputs": [],
"source": [
"from langchain.document_loaders import TextLoader\n",
"loader = TextLoader('../../state_of_the_union.txt')\n",
"documents = loader.load()\n",
"text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)\n",
"texts = text_splitter.split_documents(documents)\n",
"\n",
"embeddings = OpenAIEmbeddings()"
"index_creator = VectorstoreIndexCreator()"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "fd9666a9",
"execution_count": 3,
"id": "291f0117",
"metadata": {},
"outputs": [
{
@ -65,12 +60,14 @@
}
],
"source": [
"docsearch = Chroma.from_documents(texts, embeddings)"
"from langchain.document_loaders import TextLoader\n",
"loader = TextLoader('../../state_of_the_union.txt')\n",
"docsearch = index_creator.from_loaders([loader])"
]
},
{
"cell_type": "code",
"execution_count": 5,
"execution_count": 4,
"id": "d1eaf6e6",
"metadata": {},
"outputs": [],
@ -356,6 +353,20 @@
"chain({\"input_documents\": docs, \"question\": query}, return_only_outputs=True)"
]
},
{
"cell_type": "markdown",
"id": "6391b7ab",
"metadata": {},
"source": [
"**Batch Size**\n",
"\n",
"When using the `map_reduce` chain, one thing to keep in mind is the batch size you are using during the map step. If this is too high, it could cause rate limiting errors. You can control this by setting the batch size on the LLM used. Note that this only applies for LLMs with this parameter. Below is an example of doing so:\n",
"\n",
"```python\n",
"llm = OpenAI(batch_size=5, temperature=0)\n",
"```"
]
},
{
"cell_type": "markdown",
"id": "6ea50ad0",


@ -68,7 +68,7 @@
{
"data": {
"text/plain": [
"\" The president said that Ketanji Brown Jackson is one of the nation's top legal minds, a former top litigator in private practice, a former federal public defender, and from a family of public school educators and police officers. He also said that she is a consensus builder, and has received a broad range of support from the Fraternal Order of Police to former judges appointed by Democrats and Republicans.\""
"\" The president said that Ketanji Brown Jackson is one of the nation's top legal minds, a former top litigator in private practice and federal public defender, from a family of public school educators and police officers, a consensus builder, and has received a broad range of support from the Fraternal Order of Police to former judges appointed by Democrats and Republicans.\""
]
},
"execution_count": 4,
@ -166,6 +166,67 @@
"qa.run(query)"
]
},
{
"cell_type": "markdown",
"id": "90c7899a",
"metadata": {},
"source": [
"## Custom Prompts\n",
"You can pass in custom prompts to do question answering. These prompts are the same prompts as you can pass into the [base question answering chain](./question_answering.ipynb)"
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "a45232a2",
"metadata": {},
"outputs": [],
"source": [
"from langchain.prompts import PromptTemplate\n",
"prompt_template = \"\"\"Use the following pieces of context to answer the question at the end. If you don't know the answer, just say that you don't know, don't try to make up an answer.\n",
"\n",
"{context}\n",
"\n",
"Question: {question}\n",
"Answer in Italian:\"\"\"\n",
"PROMPT = PromptTemplate(\n",
" template=prompt_template, input_variables=[\"context\", \"question\"]\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "9b5c8d1d",
"metadata": {},
"outputs": [],
"source": [
"chain_type_kwargs = {\"prompt\": PROMPT}\n",
"qa = VectorDBQA.from_chain_type(llm=OpenAI(), chain_type=\"stuff\", vectorstore=docsearch, chain_type_kwargs=chain_type_kwargs)"
]
},
{
"cell_type": "code",
"execution_count": 8,
"id": "26ee7671",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"\" Il Presidente ha detto che Ketanji Brown Jackson è uno dei pensatori legali più importanti del nostro Paese, che continuerà l'eccellente eredità di giustizia Breyer. È un ex principale litigante in pratica privata, un ex difensore federale pubblico e appartiene a una famiglia di insegnanti e poliziotti delle scuole pubbliche. È un costruttore di consenso che ha ricevuto un ampio supporto da parte di Fraternal Order of Police e giudici designati da democratici e repubblicani.\""
]
},
"execution_count": 8,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"query = \"What did the president say about Ketanji Brown Jackson\"\n",
"qa.run(query)"
]
},
{
"cell_type": "markdown",
"id": "0b8c37f7",


@ -163,7 +163,7 @@
"source": [
"from langchain.chains.qa_with_sources import load_qa_with_sources_chain\n",
"qa_chain = load_qa_with_sources_chain(OpenAI(temperature=0), chain_type=\"stuff\")\n",
"qa = VectorDBQAWithSourcesChain(combine_document_chain=qa_chain, vectorstore=docsearch)"
"qa = VectorDBQAWithSourcesChain(combine_documents_chain=qa_chain, vectorstore=docsearch)"
]
},
{


@ -313,13 +313,156 @@
"query_result = embeddings.embed_query(text)"
]
},
{
"cell_type": "markdown",
"id": "eec4efda",
"metadata": {},
"source": [
"## Self Hosted Embeddings\n",
"Let's load the SelfHostedEmbeddings, SelfHostedHuggingFaceEmbeddings, and SelfHostedHuggingFaceInstructEmbeddings classes."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "a961cdb5",
"id": "d338722a",
"metadata": {
"scrolled": true
},
"outputs": [],
"source": [
"from langchain.embeddings import (\n",
" SelfHostedEmbeddings, \n",
" SelfHostedHuggingFaceEmbeddings, \n",
" SelfHostedHuggingFaceInstructEmbeddings\n",
")\n",
"import runhouse as rh"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "146559e8",
"metadata": {},
"outputs": [],
"source": []
"source": [
"# For an on-demand A100 with GCP, Azure, or Lambda\n",
"gpu = rh.cluster(name=\"rh-a10x\", instance_type=\"A100:1\", use_spot=False)\n",
"\n",
"# For an on-demand A10G with AWS (no single A100s on AWS)\n",
"# gpu = rh.cluster(name='rh-a10x', instance_type='g5.2xlarge', provider='aws')\n",
"\n",
"# For an existing cluster\n",
"# gpu = rh.cluster(ips=['<ip of the cluster>'], \n",
"# ssh_creds={'ssh_user': '...', 'ssh_private_key':'<path_to_key>'},\n",
"# name='my-cluster')"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "1230f7df",
"metadata": {},
"outputs": [],
"source": [
"embeddings = SelfHostedHuggingFaceEmbeddings(hardware=gpu)"
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "2684e928",
"metadata": {},
"outputs": [],
"source": [
"text = \"This is a test document.\""
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "1dc5e606",
"metadata": {
"scrolled": true
},
"outputs": [],
"source": [
"query_result = embeddings.embed_query(text)"
]
},
{
"cell_type": "markdown",
"id": "cef9cc54",
"metadata": {},
"source": [
"And similarly for SelfHostedHuggingFaceInstructEmbeddings:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "81a17ca3",
"metadata": {},
"outputs": [],
"source": [
"embeddings = SelfHostedHuggingFaceInstructEmbeddings(hardware=gpu)"
]
},
{
"cell_type": "markdown",
"id": "5a33d1c8",
"metadata": {},
"source": [
"Now let's load an embedding model with a custom load function:"
]
},
{
"cell_type": "code",
"execution_count": 12,
"id": "c4af5679",
"metadata": {},
"outputs": [],
"source": [
"def get_pipeline():\n",
" from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline # Must be inside the function in notebooks\n",
" model_id = \"facebook/bart-base\"\n",
" tokenizer = AutoTokenizer.from_pretrained(model_id)\n",
" model = AutoModelForCausalLM.from_pretrained(model_id)\n",
" return pipeline(\"feature-extraction\", model=model, tokenizer=tokenizer)\n",
"\n",
"def inference_fn(pipeline, prompt):\n",
" # Return last hidden state of the model\n",
" if isinstance(prompt, list):\n",
" return [emb[0][-1] for emb in pipeline(prompt)] \n",
" return pipeline(prompt)[0][-1]"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "8654334b",
"metadata": {},
"outputs": [],
"source": [
"embeddings = SelfHostedEmbeddings(\n",
" model_load_fn=get_pipeline, \n",
" hardware=gpu,\n",
" model_reqs=[\"./\", \"torch\", \"transformers\"],\n",
" inference_fn=inference_fn\n",
")"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "fc1bfd0f",
"metadata": {
"scrolled": false
},
"outputs": [],
"source": [
"query_result = embeddings.embed_query(text)"
]
}
],
"metadata": {
@ -338,7 +481,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.1"
"version": "3.10.9"
},
"vscode": {
"interpreter": {

Some files were not shown because too many files have changed in this diff Show More