Commit Graph

491 Commits (6c13003dd3af0aa7f01a68d34a1dc2ab8adc3283)

Author SHA1 Message Date
Harrison Chase f74a1bebf5
Harrison/duckdb (#2064)
Co-authored-by: Trent Hauck <trent@trenthauck.com>
2 years ago
Harrison Chase 76ecca4d53
redis retriever (#2060) 2 years ago
Ankush Gola b7ebb8fe30
enable streaming in anthropic llm wrapper (#2065) 2 years ago
Harrison Chase 30e3b31b04
Harrison/document cleanup (#2062)
Co-authored-by: Delip Rao <delip@users.noreply.github.com>
2 years ago
Harrison Chase a0cd6672aa
Harrison/site map (#2061)
Co-authored-by: Tim Asp <707699+timothyasp@users.noreply.github.com>
2 years ago
Krulknul 5e91928607
Added `.as_retriever()` to `from_llm()` calls (#2051) 2 years ago
Jason Holtkamp 3d3e523520
Update getting_started with better example (#1910)
I noticed that the "getting started" guide section on agents included an
example test where the agent was getting the question wrong 😅

I guess Olivia Wilde's dating life is too tough to keep track of for
this simple agent example. Let's change it to something a little easier,
so users who are running their agent for the first time are less likely
to be confused by a result that doesn't match the one in the docs.
2 years ago
Eduard van Valkenburg c1a9d83b34
Added Azure Blob Storage File and Container Loader (#1890)
Added support for document loaders for Azure Blob Storage using a
connection string. Fixes #1805
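
For reference, a minimal usage sketch, assuming the new loaders are exposed as `AzureBlobStorageContainerLoader` and `AzureBlobStorageFileLoader` with `conn_str`/`container`/`blob_name` parameters (the connection string and names below are placeholders):

```python
from langchain.document_loaders import (
    AzureBlobStorageContainerLoader,
    AzureBlobStorageFileLoader,
)

# Load every blob in a container (placeholder connection string and names).
container_loader = AzureBlobStorageContainerLoader(
    conn_str="<azure storage connection string>",
    container="<container name>",
)
container_docs = container_loader.load()

# Or load a single blob from the container.
file_loader = AzureBlobStorageFileLoader(
    conn_str="<azure storage connection string>",
    container="<container name>",
    blob_name="<blob name>",
)
file_docs = file_loader.load()
```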

---------

Co-authored-by: Mick Vleeshouwer <mick@imick.nl>
2 years ago
Harrison Chase b26fa1935d
fix headers (#2039) 2 years ago
Harrison Chase bc2ed93b77
fix doc tags (#2019) 2 years ago
Ankush Gola c71f2a7b26
small nit on index page (#2018) 2 years ago
Harrison Chase 51681f653f
fix docs (#2017) 2 years ago
Harrison Chase 705431aecc
big docs refactor (#1978)
Co-authored-by: Ankush Gola <ankush.gola@gmail.com>
2 years ago
Harrison Chase b83e826510
plugin tool (#1974) 2 years ago
Harrison Chase 6ec5780547
add docs for openai retriever ingest (#1969) 2 years ago
Harrison Chase 47d37db2d2
WIP: Harrison/base retriever (#1765) 2 years ago
Enwei Jiao 4f364db9a9
Add milvus for ecosystem (#1951) 2 years ago
Tim Asp 030ce9f506
fix import error of bs4 (#1952)
Ran into a broken build if bs4 wasn't installed in the project.

Minor tweak to follow the other doc loaders optional package-loading
conventions.

Also updated html docs to include reference to this new html loader.

side note: Should there be 2 different html-to-text document loaders?
This new one only handles local files, while the existing unstructured
html loader handles HTML from local and remote. So it seems like the
improvement was adding the title to the metadata, which is useful but
could also be added to `html.py`
2 years ago
Harrison Chase 8990122d5d
retrievers interface (#1948) 2 years ago
Harrison Chase 52d6bf04d0
tracing improvements to docs (#1947) 2 years ago
Harrison Chase b5667bed9e
human input default (#1911) 2 years ago
Eric Zhu b3be83c750
Add human as a tool (#1879)
Human can help AI.  #1871
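
A rough usage sketch, assuming the new tool is registered with `load_tools` under the name `"human"`:

```python
from langchain.agents import initialize_agent, load_tools
from langchain.llms import OpenAI

llm = OpenAI(temperature=0)
# "human" is assumed to be the registered name of the new human-input tool.
tools = load_tools(["human"])

agent = initialize_agent(
    tools, llm, agent="zero-shot-react-description", verbose=True
)
# The agent can now pause and ask the human for help when it is unsure.
agent.run("What is my favorite color? Ask me if you are not sure.")
```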
2 years ago
Harrison Chase 50626a10ee
Hx23840 feat/add redisearch vectorstore (#1909)
Co-authored-by: Peter <peter.shi@alephf.com>
Co-authored-by: Peter Shi <42536066+hx23840@users.noreply.github.com>
2 years ago
Harrison Chase 6e1b5b8f7e
Harrison/figma doc loader (#1908)
Co-authored-by: Ismail Pelaseyed <homanp@gmail.com>
2 years ago
Klein Tahiraj d3d4503ce2
Remove redundant .docx loader (closes #1716) + update how_to_guides.rst (#1891)
In https://github.com/hwchase17/langchain/issues/1716 , it was
identified that there were two .py files performing similar tasks. As a
resolution, one of the files has been removed, as its purpose had
already been fulfilled by the other file. Additionally, the init has
been updated accordingly.

Furthermore, the how_to_guides.rst file has been updated to include
links to documentation that was previously missing. This was deemed
necessary as the existing list on
https://langchain.readthedocs.io/en/latest/modules/document_loaders/how_to_guides.html
was incomplete, causing confusion for users who rely on the full list of
documentation on the left sidebar of the website.
2 years ago
Harrison Chase 1f93c5cf69
extraction docs (#1898) 2 years ago
Sean Zheng 15b5a08f4b
Update how_to_guides.rst (#1893)
Adding OpenSearch examples
2 years ago
Harrison Chase ce5d97bcb3
Harrison/guarded output parser (#1804)
Co-authored-by: jerwelborn <jeremy.welborn@gmail.com>
2 years ago
DeadBranch 8fa1764c60
docs: update gpt index references to LlamaIndex (#1856)
The GPT Index project is transitioning to the new project name,
LlamaIndex.

I've updated a few files referencing the old project name and repository
URL to the current ones.

From the [LlamaIndex repo](https://github.com/jerryjliu/llama_index):
> NOTE: We are rebranding GPT Index as LlamaIndex! We will carry out
this transition gradually.
>
> 2/25/2023: By default, our docs/notebooks/instructions now reference
"LlamaIndex" instead of "GPT Index".
>
> 2/19/2023: By default, our docs/notebooks/instructions now use the
llama-index package. However the gpt-index package still exists as a
duplicate!
>
> 2/16/2023: We have a duplicate llama-index pip package. Simply replace
all imports of gpt_index with llama_index if you choose to pip install
llama-index.

I'm not associated with LlamaIndex in any way. I just noticed the
discrepancy when studying the langchain documentation.
2 years ago
Harrison Chase f299bd1416
clean up sagemaker nb (#1875) 2 years ago
Philipp Schmid 064be93edf
[Embeddings] Add SageMaker Endpoint Embedding class (#1859)
# What does this PR do? 

This PR adds a SageMaker-powered `embeddings` class, similar to the existing
`llms` one. This is helpful if you want to leverage Hugging Face models on
SageMaker for creating your indexes.

I added an example to the
[docs/modules/indexes/examples/embeddings.ipynb](https://github.com/hwchase17/langchain/compare/master...philschmid:add-sm-embeddings?expand=1#diff-e82629e2894974ec87856aedd769d4bdfe400314b03734f32bee5990bc7e8062)
document. The example currently includes some `_### TEMPORARY: Showing
how to deploy a SageMaker Endpoint from a Hugging Face model ###_` code
showing how you can deploy a sentence-transformers model to SageMaker and then
run the methods of the embeddings class.

@hwchase17 please let me know if/when I should remove the `_###
TEMPORARY: Showing how to deploy a SageMaker Endpoint from a Hugging
Face model ###_` code. In the description I linked to a detailed blog post on
how to deploy a Sentence Transformers model, so I think we don't need to
include those steps here.

I also reused the `ContentHandlerBase` from
`langchain.llms.sagemaker_endpoint` and changed the output type to `any`,
since it depends on the implementation.
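
A rough sketch of the intended usage, assuming the class is exposed as `SagemakerEndpointEmbeddings` and reuses `ContentHandlerBase` as described; the payload format in the handler and the endpoint name are placeholders that depend on the deployed model:

```python
import json
from typing import List

from langchain.embeddings import SagemakerEndpointEmbeddings
from langchain.llms.sagemaker_endpoint import ContentHandlerBase


class ContentHandler(ContentHandlerBase):
    content_type = "application/json"
    accepts = "application/json"

    def transform_input(self, prompt, model_kwargs: dict) -> bytes:
        # Payload format is an assumption; adjust to whatever the deployed
        # sentence-transformers endpoint expects.
        return json.dumps({"inputs": prompt, **model_kwargs}).encode("utf-8")

    def transform_output(self, output) -> List[List[float]]:
        # Output type is `any` by design, since it depends on the model.
        return json.loads(output.read().decode("utf-8"))["vectors"]


embeddings = SagemakerEndpointEmbeddings(
    endpoint_name="<sagemaker endpoint name>",  # placeholder
    region_name="us-east-1",
    content_handler=ContentHandler(),
)
query_result = embeddings.embed_query("foo")
```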
2 years ago
anupam-tiwari 86822d1cc2
Fixes the import typo in the vector db text generator notebook (#1874)
Fixes the import typo in the vector db text generator notebook for the
chroma library

Co-authored-by: Anupam <anupam@10-16-252-145.dynapool.wireless.nyu.edu>
2 years ago
Harrison Chase a581bce379
remove key (#1863) 2 years ago
Harrison Chase 2ffc643086
add listen api docs (#1855) 2 years ago
Tomoko Uchida b706966ebc
Add setup instruction in Getting Started for Indexing (#1847)
`VectorstoreIndexCreator` [uses Chroma as the vectorstore by
default](1c22657256/langchain/indexes/vectorstore.py (L49)).
It may be helpful to add a short note for the setup.

You can see how the notebook looks here.

https://github.com/mocobeta/langchain/blob/feat/add-setup-instruction-to-index-getting-started/docs/modules/indexes/getting_started.ipynb
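
For context, a minimal sketch of the setup being described (Chroma is the default vectorstore, so `chromadb` must be installed, and an OpenAI key is assumed to be set):

```python
# Setup: pip install chromadb  (Chroma is the default vectorstore)
from langchain.document_loaders import TextLoader
from langchain.indexes import VectorstoreIndexCreator

loader = TextLoader("state_of_the_union.txt")
index = VectorstoreIndexCreator().from_loaders([loader])
index.query("What did the president say about Ketanji Brown Jackson?")
```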
2 years ago
Harrison Chase 1c22657256
Harrison/faiss merge (#1843)
Co-authored-by: Ting Su <ting.su.1995@outlook.com>
2 years ago
Simon Zhou 3674074eb0
Add Qdrant to ecosystem page (#1830)
Add [Qdrant](https://qdrant.tech/) to [LangChain
ecosystem](https://langchain.readthedocs.io/en/latest/ecosystem.html)
page.
2 years ago
Wenbin Fang a7e09d46c5
Add podcast api tool to use NLP to search all podcasts or episodes. (#1833)
Use the following code to test:

```python
import os
from langchain.llms import OpenAI
from langchain.chains.api import podcast_docs
from langchain.chains import APIChain

# Get api key here: https://openai.com/pricing
os.environ["OPENAI_API_KEY"] = "sk-xxxxx"

# Get api key here: https://www.listennotes.com/api/pricing/
listen_api_key = 'xxx'

llm = OpenAI(temperature=0)
headers = {"X-ListenAPI-Key": listen_api_key}
chain = APIChain.from_llm_and_api_docs(llm, podcast_docs.PODCAST_DOCS, headers=headers, verbose=True)
chain.run("Search for 'silicon valley bank' podcast episodes, audio length is more than 30 minutes, return only 1 results")
```

Known issues: the API response data might be too big, and we'll get an error
such as:
`openai.error.InvalidRequestError: This model's maximum context length
is 4097 tokens, however you requested 6733 tokens (6477 in your prompt;
256 for the completion). Please reduce your prompt; or completion
length.`
2 years ago
Ikko Eltociear Ashimine 9555bbd5bb
Fix typo in sqlite.ipynb (#1828)
overriden -> overridden
2 years ago
Harrison Chase d5b4393bb2
Harrison/llm math (#1808)
Co-authored-by: Vadym Barda <vadim.barda@gmail.com>
2 years ago
Bryan Helmig 7b6ff7fe00
Follow up to #1803 to remove dynamic docs route. (#1818)
The base docs are going to be more stable and familiar for folks.
Dynamic route is currently in flux.
2 years ago
Harrison Chase 76c7b1f677
Harrison/wandb (#1764)
Co-authored-by: Anish Shah <93145909+ash0ts@users.noreply.github.com>
2 years ago
Harrison Chase d5d50c39e6
Harrison/azure embeddings (#1787)
Co-authored-by: Hemant <4627288+ghaccount@users.noreply.github.com>
2 years ago
Harrison Chase 1f18698b2a
Harrison/token buffer memory (#1786)
Co-authored-by: Aratako <127325395+Aratako@users.noreply.github.com>
2 years ago
Harrison Chase ef4945af6b
Harrison/chat token usage (#1785) 2 years ago
Harrison Chase 7de2ada3ea
Harrison/add source column (#1784)
Co-authored-by: Brian Graham <46691715+briangrahamww@users.noreply.github.com>
Co-authored-by: briangrahamww <brian.graham@ww.com>
2 years ago
hitoshi44 3cf493b089
Fix Document & Expose StringPromptTemplate as a custom-prompt-template. (#1753)
Regarding [this
issue](https://github.com/hwchase17/langchain/issues/1754), the code in
the document [Creating a custom prompt
template](https://langchain.readthedocs.io/en/latest/modules/prompts/examples/custom_prompt_template.html)
is outdated and no longer functional.

To address this, I have made the following changes:

1. Updated the guide in the document to use `StringPromptTemplate`
instead of `BasePromptTemplate`.
2. Exposed `StringPromptTemplate` in `prompts/__init__.py` for easier
importing.
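
A minimal sketch of the updated pattern, assuming a custom template subclasses `StringPromptTemplate` and implements `format` (the class and variable names below are illustrative):

```python
from langchain.prompts import StringPromptTemplate


class FunctionExplainerPromptTemplate(StringPromptTemplate):
    """Illustrative custom prompt template built on StringPromptTemplate."""

    def format(self, **kwargs) -> str:
        source_code = kwargs["source_code"]
        return (
            "Explain what the following function does:\n\n"
            f"{source_code}\n\nExplanation:"
        )


prompt = FunctionExplainerPromptTemplate(input_variables=["source_code"])
print(prompt.format(source_code="def add(a, b):\n    return a + b"))
```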
2 years ago
hung_ng__ 3d6fcb85dc
Add load json prompt example (#1776)
Hi, I just want to add a PR to the prompt serialization examples so that
loading from JSON covers the same content as loading from YAML.
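
A small sketch of what the added example covers, assuming the standard `load_prompt` helper and a hypothetical `simple_prompt.json` file:

```python
from langchain.prompts import load_prompt

# simple_prompt.json (hypothetical file) along the lines of:
# {
#     "_type": "prompt",
#     "input_variables": ["adjective", "content"],
#     "template": "Tell me a {adjective} joke about {content}."
# }
prompt = load_prompt("simple_prompt.json")
print(prompt.format(adjective="funny", content="chickens"))
```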
2 years ago
Piyush Jain 1a8790d808
Corrects copyright year (#1762)
Corrected copyright year.
2 years ago
Harrison Chase 8685d53adc
querying tabular data (#1758) 2 years ago
Harrison Chase dd90fd02d5
Harrison/move docs (#1741) 2 years ago
Harrison Chase 07766a69f3
move docs (#1740) 2 years ago
Harrison Chase 96ebe98dc2
Harrison/latex splitter (#1738)
Co-authored-by: Aidan Holland <thehappydinoa@gmail.com>
Co-authored-by: Jan de Boer <44832123+Janldeboer@users.noreply.github.com>
2 years ago
Harrison Chase 45f05fc939
Harrison/blackboard loader (#1737)
Co-authored-by: Aidan Holland <thehappydinoa@gmail.com>
2 years ago
Vincent Liao cf9c3f54f7
docs: add docs link to agent toolkits (#1735)
New to LangChain, I was a bit confused about where to find the toolkits
section when reading the `agent/key_concepts` docs. I added a short link that
points to the how-to section.
2 years ago
Piyush Jain cdff6c8181
Sagemaker Endpoint LLM (#1686)
Updates #965

---------

Co-authored-by: Nimisha Mehta <116048415+nimimeht@users.noreply.github.com>
Co-authored-by: Harrison Chase <harrisonchase@Harrisons-MBP.attlocal.net>
2 years ago
libra 8a95fdaee1
Fix all the bug in init Tool in docs (#1725)
Fixes all of the examples in the docs that initialize `Tool`.

Tested by rendering with Jupyter.
2 years ago
jerwelborn 55efbb8a7e
pydantic/json parsing (#1722)
```
# Imports added for completeness; at this version the parser should be
# importable from langchain.output_parsers.
from typing import List

from pydantic import BaseModel, Field

from langchain.llms import OpenAI
from langchain.output_parsers import PydanticOutputParser
from langchain.prompts import PromptTemplate


class Joke(BaseModel):
    setup: str = Field(description="question to set up a joke")
    punchline: str = Field(description="answer to resolve the joke")

joke_query = "Tell me a joke."

# Or, an example with compound type fields.
#class FloatArray(BaseModel):
#    values: List[float] = Field(description="list of floats")
#
#float_array_query = "Write out a few terms of the Fibonacci sequence."

model = OpenAI(model_name='text-davinci-003', temperature=0.0)
parser = PydanticOutputParser(pydantic_object=Joke)
prompt = PromptTemplate(
    template="Answer the user query.\n{format_instructions}\n{query}\n",
    input_variables=["query"],
    partial_variables={"format_instructions": parser.get_format_instructions()}
)

_input = prompt.format_prompt(query=joke_query)
print("Prompt:\n", _input.to_string())
output = model(_input.to_string())
print("Completion:\n", output)
parsed_output = parser.parse(output)
print("Parsed completion:\n", parsed_output)
```

```
Prompt:
 Answer the user query.
The output should be formatted as a JSON instance that conforms to the JSON schema below.  For example, the object {"foo":  ["bar", "baz"]} conforms to the schema {"foo": {"description": "a list of strings field", "type": "string"}}.

Here is the output schema:
---
{"setup": {"description": "question to set up a joke", "type": "string"}, "punchline": {"description": "answer to resolve the joke", "type": "string"}}
---

Tell me a joke.

Completion:
 {"setup": "Why don't scientists trust atoms?", "punchline": "Because they make up everything!"}

Parsed completion:
 setup="Why don't scientists trust atoms?" punchline='Because they make up everything!'
```

Of course, this works only with LMs of sufficient capacity. DaVinci is
reliable, but not always.

---------

Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
2 years ago
Jonathan Pedoeem 606605925d
Adding ability to `return_pl_id` to all PromptLayer Models in LangChain (#1699)
PromptLayer now has support for [several different tracking
features.](https://magniv.notion.site/Track-4deee1b1f7a34c1680d085f82567dab9)
In order to use any of these features you need to have a request id
associated with the request.

In this PR we add a boolean argument called `return_pl_id` which will
add `pl_request_id` to the `generation_info` dictionary associated with
a generation.

We also updated the relevant documentation.
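
A short sketch of the new flag, assuming the PromptLayer-wrapped OpenAI LLM; the `pl_request_id` key comes from the description above:

```python
from langchain.llms import PromptLayerOpenAI

llm = PromptLayerOpenAI(return_pl_id=True)
result = llm.generate(["Tell me a joke."])

for generations in result.generations:
    for generation in generations:
        # With return_pl_id=True, each generation carries the PromptLayer request id.
        print(generation.generation_info["pl_request_id"])
```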
2 years ago
Harrison Chase 3ea6d9c4d2
add docs for save/load messages (#1697) 2 years ago
Piyush Jain 1279c8de39
Fixed typo, clarified language (#1682) 2 years ago
at-b612 c7779c800a
Added Mynd URL to gallery (#1684) 2 years ago
Jithin James 6f4f771897
docs: add path to state_of_the_union.txt in indexes/getting_started page (#1691)
Add the state_of_the_union.txt file so that it's easier to follow along
with the example.

---------

Co-authored-by: Jithin James <jjmachan@pop-os.localdomain>
2 years ago
Ankush Gola d4edd3c312
Zapier Integration (#1654)
* Zapier Wrapper and Tools (implemented by Zapier Team)
* Zapier Toolkit, examples with mrkl agent
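
A rough sketch of the toolkit with an MRKL-style agent, assuming a `ZAPIER_NLA_API_KEY` is set in the environment:

```python
from langchain.agents import initialize_agent
from langchain.agents.agent_toolkits import ZapierToolkit
from langchain.llms import OpenAI
from langchain.utilities.zapier import ZapierNLAWrapper

llm = OpenAI(temperature=0)
zapier = ZapierNLAWrapper()  # reads ZAPIER_NLA_API_KEY from the environment
toolkit = ZapierToolkit.from_zapier_nla_wrapper(zapier)

agent = initialize_agent(
    toolkit.get_tools(), llm, agent="zero-shot-react-description", verbose=True
)
agent.run("Summarize the last email I received and send the summary to Slack.")
```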

---------

Co-authored-by: Mike Knoop <mikeknoop@gmail.com>
Co-authored-by: Robert Lewis <robert.lewis@zapier.com>
2 years ago
Harrison Chase 0b29e68c17
Harrison/pgvector (#1679)
Co-authored-by: Aman Kumar <krsingh.aman@gmail.com>
2 years ago
Harrison Chase 4d7fdb8957
Harrison/gml save (#1676)
Co-authored-by: Satoru Sakamoto <51464932+satoru814@users.noreply.github.com>
2 years ago
Harrison Chase 656efe6ef3
Harrison/fix nb (#1678) 2 years ago
Matt Robinson 63aa28e2a6
feat: allow the unstructured kwargs to be passed in to Unstructured document loaders (#1667)
### Summary

Allows users to pass in `**unstructured_kwargs` to Unstructured document
loaders. Implemented with the `strategy` kwarg in mind, but will pass
in other kwargs like `include_page_breaks` as well. The two currently
supported strategies are `"hi_res"`, which is more accurate but takes
longer, and `"fast"`, which processes faster but with lower accuracy.
The `"hi_res"` strategy is the default. For PDFs, if `detectron2` is not
available and the user selects `"hi_res"`, the loader will fall back to
using the `"fast"` strategy.


### Testing

#### Make sure the `strategy` kwarg works

Run the following in iPython to verify that the `"fast"` strategy is
indeed faster.

```python
from langchain.document_loaders import UnstructuredFileLoader

loader = UnstructuredFileLoader("layout-parser-paper-fast.pdf", strategy="fast", mode="elements")
%timeit loader.load()

loader = UnstructuredFileLoader("layout-parser-paper-fast.pdf", mode="elements")
%timeit loader.load()
```

On my system I get:

```python
In [3]: from langchain.document_loaders import UnstructuredFileLoader

In [4]: loader = UnstructuredFileLoader("layout-parser-paper-fast.pdf", strategy="fast", mode="elements")

In [5]: %timeit loader.load()
247 ms ± 369 µs per loop (mean ± std. dev. of 7 runs, 1 loop each)

In [6]: loader = UnstructuredFileLoader("layout-parser-paper-fast.pdf", mode="elements")

In [7]: %timeit loader.load()
2.45 s ± 31 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
```

#### Make sure older versions of `unstructured` still work

Run `pip install unstructured==0.5.3` and then verify the following runs
without error:

```python
from langchain.document_loaders import UnstructuredFileLoader

loader = UnstructuredFileLoader("layout-parser-paper-fast.pdf",  mode="elements")
loader.load()
```
2 years ago
Matthias Kern c3dfbdf0da
Remove outdated code from Chat VectorDB QA example (#1670) 2 years ago
Bilel MEDIMEGH a2280f321f
Docs: Fix typo in memory/key_concepts.md (#1671)
dialouge -> dialogue
2 years ago
Xin Qiu 4e13cef05a
feat: add redisearch vectorstore (#1307)
# Description

Add `RediSearch` vectorstore for LangChain

RediSearch: [RediSearch quick
start](https://redis.io/docs/stack/search/quick_start/)

# How to use

```
from langchain.vectorstores.redisearch import RediSearch

rds = RediSearch.from_documents(docs, embeddings, redisearch_url="redis://localhost:6379")
```
2 years ago
Harrison Chase 2d098e8869
Harrison/agent eval (#1620)
Co-authored-by: jerwelborn <jeremy.welborn@gmail.com>
2 years ago
Harrison Chase 7cf46b3fee
Harrison/convo agent (#1642) 2 years ago
Jon Luo 0a1b1806e9
sql: do not hard code the LIMIT clause in the table_info section (#1563)
We're seeing a lot of issues in Discord in which the LLM is not using the
correct LIMIT clause for different SQL dialects, i.e., it's using `LIMIT`
for mssql instead of `TOP`, or instead of `ROWNUM` for Oracle, etc.
I think this could be due to us specifying the LIMIT statement in the
example rows portion of `table_info`. So the LLM is seeing the `LIMIT`
statement used in the prompt.
Since we can't specify each dialect's method here, I think it's fine to
just replace the `SELECT... LIMIT 3;` statement with `3 rows from
table_name table:`, and wrap everything in a block comment directly
following the `CREATE` statement. The Rajkumar et al. paper wrapped the
example rows and `SELECT` statement in a block comment as well anyway.
Thoughts @fpingham?
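
A small sketch of the effect, assuming the usual `SQLDatabase` wrapper and the sample Chinook database; the rendered text is approximate:

```python
from langchain.sql_database import SQLDatabase

db = SQLDatabase.from_uri("sqlite:///Chinook.db", include_tables=["Artist"])

# table_info now renders the CREATE statement followed by a block comment of
# example rows ("/* 3 rows from Artist table: ... */") instead of embedding a
# dialect-specific "SELECT * FROM Artist LIMIT 3;" query in the prompt.
print(db.table_info)
```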
2 years ago
Tim Asp b3234bf3b0
cleanup: unify 3 different pdf loaders, rename PagedPDFSplitter (#1615)
`OnlinePDFLoader` and `PagedPDFSplitter` lived separate from the rest of
the pdf loaders.

Because they're all similar, I propose moving them all to `pdf.py` and the
same docs/examples page.

Additionally, the `PagedPDFSplitter` naming doesn't match the pattern the
rest of the loaders follow, so I renamed it to `PyPDFLoader` and had it
inherit from `BasePDFLoader`, so it can now load from remote file
sources.
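
A minimal sketch of the renamed loader, assuming it is exported from `langchain.document_loaders`; the path below is a placeholder:

```python
from langchain.document_loaders import PyPDFLoader

# Local paths work as before; remote URLs are now handled via BasePDFLoader.
loader = PyPDFLoader("example-paper.pdf")  # placeholder path or URL
pages = loader.load_and_split()
print(pages[0].page_content[:200])
```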
2 years ago
Harrison Chase 56aff797c0
docs req (#1647) 2 years ago
Harrison Chase d53ff270e0
bump version to 109 (#1646) 2 years ago
Harrison Chase df6c33d4b3
Harrison/new output parser (#1617) 2 years ago
Eugene Yurtsev bd4a2a670b
Add copy button to sphinx notebooks (#1622)
This adds a copy button at the top right corner of all notebook cells in
sphinx
notebooks.
2 years ago
Ikko Eltociear Ashimine 6e98ab01e1
Fix typo in vectorstore.ipynb (#1614)
Initalize -> Initialize
2 years ago
yakigac acd86d33bc
Add read only shared memory (#1491)
Provide shared memory capability for the Agent.
Inspired by #1293 .

## Problem

If both Agent and Tools (i.e., LLMChain) use the same memory, both of
them will save the context. It can be annoying in some cases.


## Solution

Create a memory wrapper that ignores save and clear operations, thereby
preventing updates from the Agent or Tools.
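
A small sketch of the wrapper, assuming it ends up importable alongside the other memory classes (import paths were shifting around this time):

```python
from langchain.chains import LLMChain
from langchain.llms import OpenAI
from langchain.memory import ConversationBufferMemory, ReadOnlySharedMemory
from langchain.prompts import PromptTemplate

memory = ConversationBufferMemory(memory_key="chat_history")
readonly_memory = ReadOnlySharedMemory(memory=memory)

# A tool's chain can read the shared history, but its save/clear calls are no-ops.
prompt = PromptTemplate(
    input_variables=["chat_history"],
    template="Chat history:\n{chat_history}\n\nSummarize the conversation so far.",
)
summary_chain = LLMChain(llm=OpenAI(), prompt=prompt, memory=readonly_memory)
```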
2 years ago
Harrison Chase c9b5a30b37
move output parsing (#1605) 2 years ago
Harrison Chase 15de3e8137
Harrison/docs footer (#1600)
Co-authored-by: Albert Avetisian <albert.avetisian@gmail.com>
2 years ago
Harrison Chase 9f78717b3c
Harrison/callbacks (#1587) 2 years ago
Harrison Chase 90846dcc28
fix chat agent (#1586) 2 years ago
Zach Schillaci 624c72c266
Add wikipedia tool doc (#1579) 2 years ago
Tim Asp 30383abb12
Add CSVLoader document loader (#1573)
A simple CSV document loader that wraps the `csv` reader and produces a
single `Document` per row.

The column header is prepended to each value, which provides useful context
for embedding and semantic search.
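
A minimal usage sketch, assuming the loader lives at `langchain.document_loaders.csv_loader` and takes a `file_path`; the file name is a placeholder:

```python
from langchain.document_loaders.csv_loader import CSVLoader

loader = CSVLoader(file_path="example_data.csv")  # placeholder file
docs = loader.load()

# One Document per row; each value is prefixed with its column header.
print(docs[0].page_content)
```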
2 years ago
Andriy Mulyar c9189d354a
AtlasDB vector store documentation updates. (#1572)
- Updated errors in the AtlasDB vector store documentation
- Removed extraneous output logs in example notebook.
2 years ago
Matt Robinson 7018806a92
feat: document loader for markdown files (#1558)
### Summary

Adds a document loader for handling markdown files. This document loader
requires `unstructured>=0.4.16`.

### Testing

```python
from langchain.document_loaders import UnstructuredMarkdownLoader

loader = UnstructuredMarkdownLoader("README.md")
loader.load()
```
2 years ago
Harrison Chase bd335ffd64
bump version to 106 (#1562) 2 years ago
Harrison Chase a094c49153
add chat agent (#1509) 2 years ago
Brenton Wheeler 99fe023496
docs: fix typo in modules/indexes/chain_examples/question_answering (#1551)
docs: fix typo in modules/indexes/chain_examples/question_answering


![image](https://user-images.githubusercontent.com/11394076/224007874-3a52adf6-ff7a-4f22-9dbf-18c83d08167f.png)
2 years ago
Harrison Chase 3ee32a01ea
Harrison/prompt layer (#1547)
Co-authored-by: Jonathan Pedoeem <jonathanped@gmail.com>
Co-authored-by: AbuBakar <abubakarsohail123@gmail.com>
2 years ago
Harrison Chase cc423f40f1
Harrison/youtube loader (#1545)
Co-authored-by: Julian Wustl <57504258+Julianwustl@users.noreply.github.com>
2 years ago
Harrison Chase 523ad8d2e2
Harrison/chat history formatter1 (#1538)
Co-authored-by: Youssef A. Abukwaik <yousseb@users.noreply.github.com>
2 years ago
Graham Neubig 31303d0b11
Added other evaluation metrics for data-augmented QA (#1521)
This PR adds additional evaluation metrics for data-augmented QA,
resulting in a report like this at the end of the notebook:

![Screen Shot 2023-03-08 at 8 53 23 AM](https://user-images.githubusercontent.com/398875/223731199-8eb8e77f-5ff3-40a2-a23e-f3bede623344.png)

The score calculation is based on the
[Critique](https://docs.inspiredco.ai/critique/) toolkit, an API-based
toolkit (like OpenAI) that has minimal dependencies, so it should be
easy for people to run if they choose.

The code could further be simplified by actually adding a chain that
calls Critique directly, but that probably should be saved for another
PR if necessary. Any comments or change requests are welcome!
2 years ago
gidler 494c9d341a
[DOCS] Assorted wording, punctuation, and consistency revisions (#1443)
Contributing some small fixes I noticed while reading through the
documentation.

Thank you for creating and maintaining this project!
2 years ago
Harrison Chase c4a557bdd4
add concept of prompt collection (#1507) 2 years ago
Ivan 97e3666e0d
changed requests.run to requests.get (#1485)
This pull request proposes an update to the lightweight requests wrapper's
documentation. The current documentation provides an example
of how to use the wrapper's requests.run method, as follows:
requests.run("https://www.google.com"). However, this example does not
work with version 0.0.102 of the library.
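
For context, the corrected call would look roughly like this, assuming the wrapper is exposed as `RequestsWrapper` at this version:

```python
from langchain.utilities import RequestsWrapper  # import path is an assumption

requests = RequestsWrapper()
requests.get("https://www.google.com")
```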

Testing:

The changes have been tested locally to ensure they are working as
intended.

Thank you for considering this pull request.
2 years ago
Tom Dyson e3354404ad
Fix link to Pinecone notebook (#1492) 2 years ago