Commit Graph

2371 Commits

Author SHA1 Message Date
Chandan Routray
642ae83d86
Removed deprecated llm attribute for load_chain (#5343)
# Removed deprecated llm attribute for load_chain

Currently, `load_chain` for some chain types expects an `llm` attribute to be
present, but `llm` is a deprecated attribute for those chains and might not
be persisted during `chain.save`.

Fixes #5224
[(issue)](https://github.com/hwchase17/langchain/issues/5224)
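
A minimal sketch of the save/load round trip this fixes, assuming an `LLMMathChain` saved to a local JSON file; the file name is illustrative:

```python
from langchain.chains import LLMMathChain, load_chain
from langchain.llms import OpenAI

chain = LLMMathChain.from_llm(OpenAI(temperature=0))
chain.save("math_chain.json")              # `llm` may not be persisted in the saved config
restored = load_chain("math_chain.json")   # previously failed when `llm` was missing
```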

## Who can review?
@hwchase17
@dev2049

---------

Co-authored-by: imeckr <chandanroutray2012@gmail.com>
2023-05-29 06:44:47 -07:00
Oleh Kuznetsov
f6615cac41
Update llamacpp demonstration notebook (#5344)
# Update llamacpp demonstration notebook

Add instructions to install with a BLAS backend, and update the example of
model usage.

Fixes #5071. However, this is more a prevention of similar issues in the
future than a fix, since there was no problem in the framework functionality
itself.
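
A minimal usage sketch along the lines of the updated notebook; the model path is a placeholder, and `n_gpu_layers`/`n_batch` are assumptions that only matter when llama-cpp-python was built with a GPU/BLAS backend:

```python
from langchain.llms import LlamaCpp

llm = LlamaCpp(
    model_path="/path/to/ggml-model-q4_0.bin",  # placeholder local model path
    n_gpu_layers=32,                            # offload layers when built with a BLAS/GPU backend
    n_batch=512,
)
print(llm("Question: Who won the FIFA World Cup in 1994? Answer:"))
```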

## Who can review?

Community members can review the PR once tests pass. Tag
maintainers/contributors who might be interested:

- @hwchase17 
- @agola11
2023-05-29 06:43:26 -07:00
Martin Holecek
44b48d9518
Fix update_document function, add test and documentation. (#5359)
# Fix for `update_document` Function in Chroma

## Summary
This pull request addresses an issue with the `update_document` function
in the Chroma class, as described in
[#5031](https://github.com/hwchase17/langchain/issues/5031#issuecomment-1562577947).
The issue was identified as an `AttributeError` raised when calling
`update_document` due to a missing corresponding method in the
`Collection` object. This fix refactors the `update_document` method in
`Chroma` to correctly interact with the `Collection` object.
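
A minimal sketch of the call path this fixes, assuming the chromadb extra is installed; the embedding model and ids are illustrative:

```python
from langchain.docstore.document import Document
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import Chroma

docs = [Document(page_content="initial text", metadata={"source": "notes"})]
db = Chroma.from_documents(docs, OpenAIEmbeddings(), ids=["doc-1"])

# Previously raised AttributeError; now delegates correctly to the underlying Collection.
db.update_document("doc-1", Document(page_content="updated text", metadata={"source": "notes"}))
```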

## Changes
1. Fixed the `update_document` method in the `Chroma` class to correctly
call methods on the `Collection` object.
2. Added the corresponding test `test_chroma_update_document` in
`tests/integration_tests/vectorstores/test_chroma.py` to reflect the
updated method call.
3. Added an example and explanation of how to use the `update_document`
function in the Jupyter notebook tutorial for Chroma.

## Test Plan
All existing tests pass after this change. In addition, the
`test_chroma_update_document` test case now correctly checks the
functionality of `update_document`, ensuring that the function works as
expected and updates the content of documents correctly.

## Reviewers
@dev2049

This fix will ensure that users are able to use the `update_document`
function as expected, without encountering the previous
`AttributeError`. This will enhance the usability and reliability of the
Chroma class for all users.

Thank you for considering this pull request. I look forward to your
feedback and suggestions.
2023-05-29 06:39:25 -07:00
Louis Amaudruz
e455ba4ed5
Add async support to routing chains (#5373)
# Add async support for (LLM) routing chains

Add asynchronous LLM call support for the routing chains (a usage sketch
follows the list). More specifically:
- Add an async `aroute` function (i.e. the async version of `route`) to
`RouterChain`, which calls the routing LLM asynchronously
- Implement the async `_acall` for `LLMRouterChain`
- Implement the async `_acall` for `MultiRouteChain`, which first calls the
routing chain asynchronously via its new `aroute` function and then calls
the relevant destination chain asynchronously
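
A rough usage sketch, assuming the chain is built with `MultiPromptChain.from_prompts`; the prompt infos and model are placeholders:

```python
import asyncio

from langchain.chains.router import MultiPromptChain
from langchain.llms import OpenAI

prompt_infos = [
    {"name": "physics", "description": "Good for physics questions",
     "prompt_template": "You are a physicist.\n{input}"},
    {"name": "math", "description": "Good for math questions",
     "prompt_template": "You are a mathematician.\n{input}"},
]
chain = MultiPromptChain.from_prompts(OpenAI(), prompt_infos)

async def main() -> None:
    # `arun` routes via the new async `aroute`, then awaits the destination chain.
    print(await chain.arun("What is black body radiation?"))

asyncio.run(main())
```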

## Who can review?

- @agola11

2023-05-29 06:37:26 -07:00
Gael Grosch
8b7721ebbb
fix: Blob.from_data mimetype is lost (#5395)
# Fix lost mimetype when using Blob.from_data method

The mimetype is lost due to a typo in the class attribute name.

Fixes # - (no issue opened but I can open one if needed)

## Changes

* Fixed typo in name
* Added unit-tests to validate the output Blob
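
A small sketch of the expected behaviour after the fix; the import path and the `mime_type` keyword follow the current `blob_loaders` module and should be read as assumptions:

```python
from langchain.document_loaders.blob_loaders import Blob

blob = Blob.from_data("hello world", mime_type="text/plain")
assert blob.mimetype == "text/plain"  # was silently lost before the attribute-name fix
```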


## Review
@eyurtsev
2023-05-29 06:36:50 -07:00
Jacob Lee
f77f27163d
Update PR template with Twitter handle request (#5382)
# Updates PR template to request Twitter handle for shoutouts!

Makes it easier for maintainers to show their appreciation 😄
2023-05-29 06:23:17 -07:00
Zander Chase
14099f1b93
Use Default Factory (#5380)
We shouldn't be calling a constructor for a default value; we should use
`default_factory` instead. This is especially bad in this case since the
default requires an optional dependency and an API key to be set.
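
An illustrative (made-up) pydantic model showing the pattern, not the actual field touched here:

```python
from typing import List

from pydantic import BaseModel, Field

class Settings(BaseModel):
    # Bad: `tags: List[str] = build_default_tags()` would run at class-definition time,
    # which is especially costly if it needs an optional dependency or an API key.
    # Good: defer construction until an instance is actually created.
    tags: List[str] = Field(default_factory=list)
```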
 
Resolves #5361
2023-05-29 06:22:35 -07:00
Harrison Chase
6df90ad9fd
handle json parsing errors (#5371)
Adds test cases and consolidates a lot of PRs.
2023-05-29 06:18:19 -07:00
玄猫
99a1e3f3a3
Fix: Handle empty documents in ContextualCompressionRetriever (Issue #5304) (#5306)
# Fix: Handle empty documents in ContextualCompressionRetriever (Issue
#5304)

Fixes #5304 

Prevent cohere.error.CohereAPIError caused by an empty list of documents
by adding a condition to check if the input documents list is empty in
the compress_documents method. If the list is empty, return an empty
list immediately, avoiding the error and unnecessary processing.
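
A minimal sketch of the guard, with a stand-in for the actual rerank call:

```python
from typing import List, Sequence

from langchain.schema import Document

def compress_documents(documents: Sequence[Document], query: str) -> List[Document]:
    if not documents:
        # Return early instead of calling the Cohere API with an empty input list.
        return []
    return list(documents)  # placeholder for the real compression/rerank step
```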

@dev2049

---------

Co-authored-by: Dev 2049 <dev.dev2049@gmail.com>
2023-05-28 13:19:34 -07:00
os1ma
1366d070fc
Add path validation to DirectoryLoader (#5327)
# Add path validation to DirectoryLoader

This PR introduces a minor adjustment to the DirectoryLoader by adding
validation for the path argument. Previously, if the provided path
didn't exist or wasn't a directory, DirectoryLoader would return an
empty document list due to the behavior of the `glob` method. This could
potentially cause confusion for users, as they might expect a
file-loading error instead.

So, I've added two validations to the load method of the
DirectoryLoader:

- Raise a FileNotFoundError if the provided path does not exist
- Raise a ValueError if the provided path is not a directory

Due to the relatively small scope of these changes, a new issue was not
created.
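
A sketch of the two checks, not a verbatim copy of the diff:

```python
from pathlib import Path

def _validate_path(path: str) -> None:
    p = Path(path)
    if not p.exists():
        raise FileNotFoundError(f"Directory not found: '{path}'")
    if not p.is_dir():
        raise ValueError(f"Expected directory, got file: '{path}'")
```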

## Who can review?

Community members can review the PR once tests pass. Tag
maintainers/contributors who might be interested:

@eyurtsev
2023-05-28 15:31:23 -04:00
Harrison Chase
ad7f4c0317
bump to 183 (#5372) 2023-05-28 11:42:58 -07:00
Harrison Chase
b6927970f1
revert bad json (#5370) 2023-05-28 10:22:02 -07:00
Matt Wells
9a5c9df809
Fixes iter error in FAISS add_embeddings call (#5367)
# Remove re-use of iter within add_embeddings causing error

As reported in https://github.com/hwchase17/langchain/issues/5336, there
is currently an issue involving the attempted re-use of an iterator
within the FAISS vectorstore adapter.
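
The underlying Python pitfall, illustrated in isolation (not the FAISS code itself):

```python
texts = iter(["a", "b", "c"])
embeddings = [[0.1], [0.2], [0.3]]

first_pass = list(zip(texts, embeddings))   # consumes the iterator
second_pass = list(zip(texts, embeddings))  # iterator is exhausted, so this is empty
assert first_pass and not second_pass
```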

Fixes https://github.com/hwchase17/langchain/issues/5336

## Who can review?

Community members can review the PR once tests pass. Tag
maintainers/contributors who might be interested:

  VectorStores / Retrievers / Memory
  - @dev2049
2023-05-28 09:59:30 -07:00
Davis Chase
b705f260f4
bump 182 (#5364) 2023-05-28 09:16:18 -07:00
Janos Tolgyesi
5f4552391f
Add SKLearnVectorStore (#5305)
# Add SKLearnVectorStore

This PR adds SKLearnVectorStore, a simple vector store based on the
NearestNeighbors implementation in the scikit-learn package. This
provides a simple drop-in vector store implementation with minimal
dependencies (scikit-learn is typically already installed in a data
scientist / ML engineer environment). The vector store can be persisted
to and loaded from json, bson, and parquet formats.

SKLearnVectorStore has a soft (dynamic) dependency on the scikit-learn,
numpy, and pandas packages. Persisting to bson requires the bson package;
persisting to parquet requires the pyarrow package.
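
A rough usage sketch; the `persist_path`/`serializer` keywords and the embedding model are assumptions for illustration:

```python
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import SKLearnVectorStore

store = SKLearnVectorStore.from_texts(
    ["apples are red", "the sky is blue"],
    OpenAIEmbeddings(),
    persist_path="./vectors.json",  # optional; bson and parquet are also supported
    serializer="json",
)
print(store.similarity_search("what colour is the sky?", k=1))
store.persist()
```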

## Before submitting

Integration tests are provided under
`tests/integration_tests/vectorstores/test_sklearn.py`

Sample usage notebook is provided under
`docs/modules/indexes/vectorstores/examples/sklear.ipynb`

Co-authored-by: Dev 2049 <dev.dev2049@gmail.com>
2023-05-28 08:17:42 -07:00
Aymen Furter
e2742953a6
feat: support for shopping search in SerpApi (#5259)
# Support for shopping search in SerpApi

## Who can review?
@vowelparrot
2023-05-27 21:20:24 -07:00
Eduard van Valkenburg
1daa7068b2
added cosmos kwargs option (#5292)
# Added the ability to pass kwargs to cosmos client constructor

The cosmos client has a ton of options that can be set, so this PR allows
them to be passed through to the client constructor from the chat memory
constructor.
2023-05-27 21:19:40 -07:00
Kenton
881dfe8179
Sample Notebook for DynamoDB Chat Message History (#5351)
# Sample Notebook for DynamoDB Chat Message History

@dev2049

Adding a sample notebook for the DynamoDB Chat Message History class.
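
Roughly the core usage the notebook walks through; the table must already exist and its name is a placeholder:

```python
from langchain.memory.chat_message_histories import DynamoDBChatMessageHistory

history = DynamoDBChatMessageHistory(table_name="SessionTable", session_id="session-0")
history.add_user_message("hi!")
history.add_ai_message("hello, how can I help you?")
print(history.messages)
```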

2023-05-27 21:16:24 -07:00
mbchang
f079cdf479
fix: remove empty lines that cause InvalidRequestError (#5320)
# remove empty lines in GenerativeAgentMemory that cause
InvalidRequestError in OpenAIEmbeddings

Let's say the text given to `GenerativeAgent._parse_list` is
```
text = """
Insight 1: <insight 1>

Insight 2: <insight 2>
"""
```
This creates an `openai.error.InvalidRequestError: [''] is not valid
under any of the given schemas - 'input'` because
`GenerativeAgent.add_memory()` tries to add an empty string to the
vectorstore.

This PR fixes the issue by removing the empty line between `Insight 1`
and `Insight 2`.
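
A sketch of the kind of filtering described, not the exact patch:

```python
import re
from typing import List

def parse_list(text: str) -> List[str]:
    """Split newline-separated insights, dropping empty lines and leading numbering."""
    lines = [line.strip() for line in text.split("\n")]
    return [re.sub(r"^\s*\d+\.\s*", "", line) for line in lines if line]
```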

## Who can review?

Community members can review the PR once tests pass. Tag
maintainers/contributors who might be interested:

@hwchase17
@vowelparrot
@dev2049
2023-05-27 21:15:03 -07:00
Deepak S V
c6e5d90eff
Fixing blank thoughts in verbose for "_Exception" Action (#5331)
Fixed the issue of blank Thoughts being printed in verbose output when
`handle_parsing_errors=True`, as shown below:

Before Fix:
```
Observation: There are 38175 accounts available in the dataframe.
Thought:
Observation: Invalid or incomplete response
Thought:
Observation: Invalid or incomplete response
Thought:
```

After Fix:
```
Observation: There are 38175 accounts available in the dataframe.
Thought:AI: {
    "action": "Final Answer",
    "action_input": "There are 38175 accounts available in the dataframe."
}
Observation: Invalid Action or Action Input format
Thought:AI: {
    "action": "Final Answer",
    "action_input": "The number of available accounts is 38175."
}
Observation: Invalid Action or Action Input format
```

@vowelparrot Currently I have set the colour of the thought to green (the
same colour as when `handle_parsing_errors=False`). If you want to change
the colour of this "_Exception" case to red or something else (when
`handle_parsing_errors=True`), feel free to change it on line 789.
2023-05-27 21:14:16 -07:00
DanConstantini
c49c6ac97a
Add Chainlit to deployment options (#5314)
# Add Chainlit to deployment options

Adds [Chainlit](https://github.com/Chainlit/chainlit) as a deployment
option, with links to the GitHub examples and the Chainlit docs on the
LangChain integration.

Co-authored-by: Dan Constantini <danconstantini@Dan-Constantini-MacBook.local>
2023-05-27 21:12:53 -07:00
Harrison Chase
5292e855c0
add enum output parser (#5165) 2023-05-27 20:59:24 -07:00
Harrison Chase
179ddbe88b
add enum output parser (#5165) 2023-05-27 20:58:23 -07:00
Leonid Ganeline
465a970724
docs: added link to LangChain Handbook (#5311)
# Added a link to the LangChain Handbook

## Who can review?

Community members can review the PR once tests pass.
2023-05-27 20:57:40 -07:00
Russ
6e974b5f04
Fix typos (#5323)
# Documentation typo fixes

Fixes simple typos in the blockchain .ipynb documentation.
2023-05-26 18:55:21 -07:00
Michael Landis
f75f0dbad6
docs: improve flow of llm caching notebook (#5309)
# docs: improve flow of llm caching notebook

The `llm_caching` notebook demos various caching providers. In the
previous version, the setup common to all examples sat under the
`In Memory Caching` heading.

If a user only wants to try a particular provider's example, they are
likely to skip that setup section and run just the cells for the provider
they are interested in, and then hit import and variable reference errors.
This commit moves the common setup to the top to avoid this.
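
Roughly the common setup that now sits at the top, with the in-memory cache as the first example; the model name is illustrative:

```python
import langchain
from langchain.cache import InMemoryCache
from langchain.llms import OpenAI

langchain.llm_cache = InMemoryCache()  # provider sections then swap in their own cache
llm = OpenAI(model_name="text-davinci-002")
llm("Tell me a joke")
```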

## Who can review?

Community members can review the PR once tests pass. Tag
maintainers/contributors who might be interested:

@dev2049
2023-05-26 13:34:11 -04:00
Eugene Yurtsev
0a8d6bc402
Add instructions to pyproject.toml (#5138)
# Add instructions to pyproject.toml

* Add instructions to pyproject.toml about how to handle optional
dependencies.

---------

Co-authored-by: Davis Chase <130488702+dev2049@users.noreply.github.com>
Co-authored-by: Zander Chase <130414180+vowelparrot@users.noreply.github.com>
2023-05-26 13:29:07 -04:00
Shukri
58e95cd11e
Better docs for weaviate hybrid search (#5290)
# Better docs for weaviate hybrid search

Fixes: NA

## Who can review?

Community members can review the PR once tests pass. Tag
maintainers/contributors who might be interested:

@dev2049
2023-05-26 09:30:41 -07:00
Davis Chase
641303a361
bump 181 (#5302) 2023-05-26 08:44:19 -07:00
Leonid Kuligin
aa3c7b3271
Fixed passing creds to VertexAI LLM (#5297)
# Fixed passing creds to VertexAI LLM

Fixes #5279

It looks like we should drop a type annotation for Credentials.

Co-authored-by: Leonid Kuligin <kuligin@google.com>
2023-05-26 08:31:02 -07:00
Eugene Yurtsev
a669abf16b
Update CONTRIBUTION guidelines and PR Template (#5140)
# Update contribution guidelines and PR template

This PR updates the contribution guidelines to include more information
on how to handle optional dependencies. 

The PR template is updated to include a link to the contribution guidelines document.
2023-05-26 10:18:11 -04:00
Peng Qu
d481d887bc
Add an example to make the prompt more robust (#5291)
# Add example to LLMMath to help with power operator

Add an example to LLMMath that helps the model interpret `^` as the power operator rather than the Python XOR operator.
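
A small sketch of the behaviour the added prompt example encourages; the model choice is illustrative:

```python
from langchain import LLMMathChain
from langchain.llms import OpenAI

llm_math = LLMMathChain.from_llm(OpenAI(temperature=0))
# `^` should now be translated to `**` (power) rather than Python's XOR.
print(llm_math.run("What is 3^3?"))
```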
2023-05-26 09:32:35 -04:00
Xiangrui Meng
aec642febb
LLM wrapper for Databricks (#5142)
This PR adds LLM wrapper for Databricks. It supports two endpoint types:
* serving endpoint
* cluster driver proxy app

An integration notebook is included to show how it works.
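
A rough sketch of the two endpoint types; the endpoint and cluster identifiers are placeholders, and the parameter names are assumptions based on the description above:

```python
from langchain.llms import Databricks

# 1) Model serving endpoint
llm = Databricks(endpoint_name="my-serving-endpoint")

# 2) Cluster driver proxy app (alternative constructor arguments)
# llm = Databricks(cluster_id="0123-456789-abcdefgh", cluster_driver_port=7777)

print(llm("How are you?"))
```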


Co-authored-by: Davis Chase <130488702+dev2049@users.noreply.github.com>
Co-authored-by: Gengliang Wang <gengliang@apache.org>
Co-authored-by: Dev 2049 <dev.dev2049@gmail.com>
2023-05-25 19:19:37 -07:00
Ted Martinez
1cb6498fdb
Tedma4/twilio tool (#5136)
# Add Twilio SMS tool

---------

Co-authored-by: Dev 2049 <dev.dev2049@gmail.com>
2023-05-25 19:19:22 -07:00
Moonsik Kang
a0281f5acb
Fixed typo: 'ouput' to 'output' in all documentation (#5272)
# Fixed typo: 'ouput' to 'output' in all documentation

In this instance, the typo 'ouput' was amended to 'output' in all
occurrences within the documentation. There are no dependencies required
for this change.
2023-05-25 19:18:31 -07:00
Michael Landis
7047a2c1af
feat: add Momento as a standard cache and chat message history provider (#5221)
# Add Momento as a standard cache and chat message history provider

This PR adds Momento as a standard caching provider: it implements the
interface and adds integration tests and documentation. We also add
Momento as a chat message history provider, along with integration tests
and documentation.

[Momento](https://www.gomomento.com/) is a fully serverless cache.
Similar to S3 or DynamoDB, it requires zero configuration or
infrastructure management and is instantly available. Users sign up for
free and get 50 GB of data in/out for free every month.
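
A rough usage sketch; the `from_client_params` helpers and their signatures are assumptions here, and a `MOMENTO_AUTH_TOKEN` is expected in the environment:

```python
from datetime import timedelta

import langchain
from langchain.cache import MomentoCache
from langchain.memory.chat_message_histories import MomentoChatMessageHistory

# Standard LLM cache backed by Momento.
langchain.llm_cache = MomentoCache.from_client_params("langchain", ttl=timedelta(days=1))

# Chat message history backed by the same cache.
history = MomentoChatMessageHistory.from_client_params(
    "session-0", "langchain", ttl=timedelta(days=1)
)
history.add_user_message("hi!")
```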

## Before submitting

 We have added documentation, notebooks, and integration tests
demonstrating usage.

Co-authored-by: Dev 2049 <dev.dev2049@gmail.com>
2023-05-25 19:13:21 -07:00
Hassan Ouda
56ad56c812
Support bigquery dialect - SQL (#5261)
# Support bigquery dialect - SQL

Adds an if statement to deal with the BigQuery SQL dialect. When I used
the BigQuery dialect before, it failed when using `SET search_path TO`,
so I added a condition that sets the dataset as the schema parameter,
which is equivalent to `SET search_path TO`. I have tested it and it works.
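
A minimal sketch of how the dataset-as-schema workaround is used; the project and dataset names are placeholders:

```python
from langchain import SQLDatabase

db = SQLDatabase.from_uri(
    "bigquery://my-project/my_dataset",
    schema="my_dataset",  # for the bigquery dialect, this replaces SET search_path TO
)
print(db.table_info)
```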


## Who can review?

Community members can review the PR once tests pass. Tag
maintainers/contributors who might be interested:
@dev2049
2023-05-25 18:19:17 -07:00
Abdelsalam ElTamawy
2ef5579eae
Added pipeline args to HuggingFacePipeline.from_model_id (#5268)
The current `HuggingFacePipeline.from_model_id` does not allow passing
pipeline arguments to the transformers pipeline.
This PR enables setting important pipeline parameters such as
`max_new_tokens`.
Prior to this PR, it was necessary to manually create the pipeline
through Hugging Face transformers and then hand it to langchain.

For example, instead of this:
```py
model_id = "gpt2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)
pipe = pipeline(
    "text-generation", model=model, tokenizer=tokenizer, max_new_tokens=10
)
hf = HuggingFacePipeline(pipeline=pipe)
```
You can write this:
```py
hf = HuggingFacePipeline.from_model_id(
    model_id="gpt2", task="text-generation", pipeline_kwargs={"max_new_tokens": 10}
)
```


Co-authored-by: Dev 2049 <dev.dev2049@gmail.com>
2023-05-25 17:54:52 -07:00
Davis Chase
f01dfe858d
OpenAI lint (#5273)
This was causing lint issues if you have openai installed, which is annoying for local dev.
2023-05-25 16:20:06 -07:00
Nicholas Liu
7652d2abb0
Add Multi-CSV/DF support in CSV and DataFrame Toolkits (#5009)
Add Multi-CSV/DF support in CSV and DataFrame Toolkits
* CSV and DataFrame toolkits now accept a list of CSVs/DFs
* Add default prompts for multiple dataframes in the `pandas_dataframe` toolkit

Fixes #1958
Potentially fixes #4423
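
A short sketch of the new multi-CSV usage; the file names are placeholders:

```python
from langchain.agents import create_csv_agent
from langchain.llms import OpenAI

# `path` may now be a list of CSVs instead of a single file.
agent = create_csv_agent(OpenAI(temperature=0), ["titanic.csv", "titanic_age_fillna.csv"])
agent.run("How many rows are in each dataframe?")
```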

## Testing
* Add single and multi-dataframe integration tests for
`pandas_dataframe` toolkit with permutations of `include_df_in_prompt`
* Add single and multi-CSV integration tests for csv toolkit
---------

Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
2023-05-25 14:23:11 -07:00
Alex Rothberg
3223a97dc6
Add visible_only and strict_mode options to ClickTool (#4088)
Partially addresses: https://github.com/hwchase17/langchain/issues/4066
2023-05-25 14:10:39 -07:00
Ravindra Marella
b3988621c5
Add C Transformers for GGML Models (#5218)
# Add C Transformers for GGML Models
I created Python bindings for the GGML models:
https://github.com/marella/ctransformers

Currently it supports GPT-2, GPT-J, GPT-NeoX, LLaMA, MPT, etc. See
[Supported
Models](https://github.com/marella/ctransformers#supported-models).


It provides a unified interface for all models:

```python
from langchain.llms import CTransformers

llm = CTransformers(model='/path/to/ggml-gpt-2.bin', model_type='gpt2')

print(llm('AI is going to'))
```

It can be used with models hosted on the Hugging Face Hub:

```py
llm = CTransformers(model='marella/gpt-2-ggml')
```

It supports streaming:

```py
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler

llm = CTransformers(model='marella/gpt-2-ggml', callbacks=[StreamingStdOutCallbackHandler()])
```

Please see [README](https://github.com/marella/ctransformers#readme) for
more details.
---------

Co-authored-by: Dev 2049 <dev.dev2049@gmail.com>
2023-05-25 13:42:44 -07:00
Davis Chase
ca88b25da6
Zep sdk version (#5267)
zep-python's sync methods no longer need an asyncio wrapper. This was
causing issues with FastAPI deployment.
Zep also now supports putting and getting arbitrary message metadata.

Bump zep-python version to v0.30

Remove nest-asyncio from Zep example notebooks.

Modify tests to include metadata.

---------

Co-authored-by: Daniel Chalef <daniel.chalef@private.org>
Co-authored-by: Daniel Chalef <131175+danielchalef@users.noreply.github.com>
2023-05-25 13:42:10 -07:00
Janil Wörst
5525602df0
Docs link custom agent page in getting started (#5250)
# Docs: link custom agent page in getting started
2023-05-25 13:11:30 -07:00
Alon Diament
d3cd21ccf8
Fixed regression in JoplinLoader's get note url (#5265)
Fixes a regression in JoplinLoader that was introduced during the code
review (bad `page` wildcard in `_get_note_url`).

## Who can review?

Community members can review the PR once tests pass. Tag
maintainers/contributors who might be interested:

@dev2049
@leo-gan
2023-05-25 13:10:10 -07:00
Davis Chase
3be9ba14f3
OpenSearch top k parameter fix (#5216)
For most queries it's the `size` parameter that determines the final number
of documents to return. Since our abstractions refer to this as `k`, set
this to be `k` everywhere instead of expecting a separate param. Would
be great to have someone more familiar with OpenSearch validate that
this is reasonable (e.g. that having `size` and what OpenSearch calls
`k` be the same won't lead to any strange behavior). cc @naveentatikonda

Closes #5212
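
A quick sketch of the unified parameter; the URL and index name are placeholders:

```python
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import OpenSearchVectorSearch

store = OpenSearchVectorSearch(
    opensearch_url="http://localhost:9200",
    index_name="test-index",
    embedding_function=OpenAIEmbeddings(),
)
docs = store.similarity_search("what did the president say?", k=4)  # `k` now also drives `size`
```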
2023-05-25 09:51:23 -07:00
Yves Maurer
88ed8e1cd6
Added the option of specifying a proxy for the OpenAI API (#5246)
# Added the option of specifying a proxy for the OpenAI API

Fixes #5243
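
A one-liner sketch; the proxy URL is a placeholder and the `openai_proxy` keyword is an assumption about the new option:

```python
from langchain.llms import OpenAI

llm = OpenAI(temperature=0, openai_proxy="http://proxy.example.com:8080")
print(llm("Say hello"))
```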

Co-authored-by: Yves Maurer <>
2023-05-25 09:50:25 -07:00
mwinterde
9c0cb90997
Resolve error in StructuredOutputParser docs (#5240)
# Resolve error in StructuredOutputParser docs

The documentation for `StructuredOutputParser` is currently not reproducible;
that is, `output_parser.parse(output)` raises an error because the LLM
returns a response with an invalid format:

```python
_input = prompt.format_prompt(question="what's the capital of france")
output = model(_input.to_string())

output

# ?
#
# ```json
# {
# 	"answer": "Paris",
# 	"source": "https://www.worldatlas.com/articles/what-is-the-capital-of-france.html"
# }
# ```
```

This was fixed by adding a question mark to the prompt.
2023-05-25 07:47:25 -07:00
Peng Qu
c7e2151a4b
remove extra "\n" to ensure that the format of the description, examp… (#5232)
Removes an extra "\n" to ensure that the format of the description, example,
and prompt & generation is completely consistent.
2023-05-25 07:46:39 -07:00
Davis Chase
15b17f9334
bump 180 (#5248) 2023-05-25 07:09:50 -07:00