Commit Graph

1028 Commits

Author SHA1 Message Date
UmerHA
8259f9b7fa
DocumentLoader for GitHub (#5408)
# Creates GitHubLoader (#5257)

GitHubLoader is a DocumentLoader that loads issues and PRs from GitHub.

Fixes #5257
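
For orientation, a minimal usage sketch; the class name `GitHubIssuesLoader` and the `repo`/`access_token`/`include_prs` parameters are assumptions inferred from the description, not confirmed by this PR:

```python
# Hypothetical sketch -- class and parameter names are assumptions.
from langchain.document_loaders import GitHubIssuesLoader

loader = GitHubIssuesLoader(
    repo="hwchase17/langchain",     # "owner/repo" to read from
    access_token="<github-pat>",    # personal access token for the GitHub API
    include_prs=True,               # load pull requests as well as issues
)
docs = loader.load()                # each issue/PR becomes a Document
print(docs[0].metadata)
```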

---------

Co-authored-by: Dev 2049 <dev.dev2049@gmail.com>
2023-05-29 20:11:21 -07:00
German Martin
0b3e0dd1d2
New Trello document loader (#4767)
# Added New Trello loader class and documentation

A simple loader built on top of the py-trello wrapper. Given a board name, you can pull its cards and apply some field-parameter tweaks during the load operation. I included documentation and examples, as well as unit tests that use patch and a fixture for the py-trello client class.
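
A hedged sketch of how such a loader might be called; the `TrelloLoader.from_credentials` constructor and its `api_key`/`token` arguments are assumptions based on the description, not verified against the merged code:

```python
# Hypothetical sketch -- constructor and argument names are assumptions.
from langchain.document_loaders import TrelloLoader

loader = TrelloLoader.from_credentials(
    board_name="Product Roadmap",   # the Trello board to pull cards from
    api_key="<trello-api-key>",
    token="<trello-token>",
)
docs = loader.load()                # one Document per card, with card metadata
```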

---------

Co-authored-by: Dev 2049 <dev.dev2049@gmail.com>
2023-05-29 19:47:56 -07:00
Harrison Chase
72f99ff953
Harrison/text splitter (#5417)
Adds support for keeping separators around when using the recursive text splitter.
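
A short sketch of the new behaviour, assuming the option is exposed as a `keep_separator` flag on `RecursiveCharacterTextSplitter` (the parameter name is an assumption inferred from the title):

```python
from langchain.text_splitter import RecursiveCharacterTextSplitter

# keep_separator=True keeps the matched separator attached to the resulting
# chunks instead of dropping it (flag name is an assumption).
splitter = RecursiveCharacterTextSplitter(
    separators=["\n\n", "\n", " "],
    chunk_size=50,
    chunk_overlap=0,
    keep_separator=True,
)
chunks = splitter.split_text("First paragraph.\n\nSecond paragraph.\nSecond line.")
print(chunks)
```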
2023-05-29 16:56:31 -07:00
小铭
cf5803e44c
Add ToolException that a tool can throw. (#5050)
# Add ToolException that a tool can throw
This is an optional exception that a tool can throw when an execution error occurs. When this exception is thrown, the agent does not stop working; instead, it handles the exception according to the tool's `handle_tool_error` setting, returns the result to the agent as an observation, and prints it in pink on the console. It can be used like this:
```python
from langchain.schema import ToolException
from langchain import LLMMathChain, SerpAPIWrapper
from langchain.agents import AgentType, initialize_agent
from langchain.chat_models import ChatOpenAI
from langchain.tools import Tool

llm = ChatOpenAI(temperature=0)
llm_math_chain = LLMMathChain(llm=llm, verbose=True)

class ErrorTool:
    """A tool that always fails, to demonstrate ToolException handling."""
    def run(self, s: str):
        raise ToolException("The current search tool is not available.")

def handle_tool_error(error) -> str:
    # Turns the ToolException into an observation string for the agent.
    return "The following errors occurred during tool execution: " + str(error)

search_tool1 = ErrorTool()
search_tool2 = SerpAPIWrapper()
tools = [
    Tool.from_function(
        func=search_tool1.run,
        name="Search_tool1",
        description="useful for when you need to answer questions about current events. You should give priority to using it.",
        handle_tool_error=handle_tool_error,  # per-tool error handler
    ),
    Tool.from_function(
        func=search_tool2.run,
        name="Search_tool2",
        description="useful for when you need to answer questions about current events",
        return_direct=True,
    ),
]
agent = initialize_agent(
    tools,
    llm,
    agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
    verbose=True,
    handle_tool_errors=handle_tool_error,  # agent-level handler, as described in this PR
)
agent.run("Who is Leo DiCaprio's girlfriend? What is her current age raised to the 0.43 power?")
```

![image](https://github.com/hwchase17/langchain/assets/32786500/51930410-b26e-4f85-a1e1-e6a6fb450ada)

## Who can review?
- @vowelparrot

---------

Co-authored-by: Dev 2049 <dev.dev2049@gmail.com>
2023-05-29 20:05:58 +00:00
Harrison Chase
2da8c48be1
Harrison/datetime parser (#4693)
Co-authored-by: Jacob Valdez <jacobfv@msn.com>
Co-authored-by: Jacob Valdez <jacob.valdez@limboid.ai>
Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2023-05-29 07:52:30 -07:00
Leonid Ganeline
1837caa70d
docs: ecosystem/integrations update 1 (#5219)
# docs: ecosystem/integrations update

This is the first in a series of `ecosystem/integrations` updates.

The ecosystem/integrations list is missing many integrations. I'm adding the missing integrations in a consistent format:
1. a description of the integrated system
2. an `Installation and Setup` section with `pip install ...`, key setup, and other necessary settings
3. sections like `LLM`, `Text Embedding Models`, `Chat Models`, ... with links to the corresponding examples and imports of the classes used

This PR covers the new docs that are present in `docs/modules/models/text_embedding/examples` but missing from `ecosystem/integrations`. The next PRs will cover the remaining example sections.

Also updated `integrations.rst`: added the `Dependencies` section with a
link to the packages used in LangChain.

## Who can review?

@hwchase17
@eyurtsev
@dev2049
2023-05-29 07:25:17 -07:00
Leonid Ganeline
a3598193a0
docs: ecosystem/integrations update 2 (#5282)
# docs: ecosystem/integrations update 2

Part 1: #5219
This is the second part of the update (the parts are independent of each other; there is no overlap):

- added diffbot.md
- updated confluence.ipynb; added confluence.md
- updated college_confidential.md
- updated openai.md
- added blackboard.md
- added bilibili.md
- added azure_blob_storage.md
- added azlyrics.md
- added aws_s3.md

## Who can review?

@hwchase17
@agola11
@vowelparrot
@dev2049
2023-05-29 07:19:43 -07:00
Harrison Chase
d6fb25c439
Harrison/prediction guard update (#5404)
Co-authored-by: Daniel Whitenack <whitenack.daniel@gmail.com>
2023-05-29 07:14:59 -07:00
Harrison Chase
416c8b1da3
Harrison/deep infra (#5403)
Co-authored-by: Yessen Kanapin <yessenzhar@gmail.com>
Co-authored-by: Yessen Kanapin <yessen@deepinfra.com>
2023-05-29 07:10:50 -07:00
Timothy Ji
100d6655df
Reformat openai proxy setting as code (#5330)
# Reformat the OpenAI proxy setting as code

Only affects the docs for the OpenAI model.
- @hwchase17
- @agola11
2023-05-29 07:02:47 -07:00
Oleh Kuznetsov
f6615cac41
Update llamacpp demonstration notebook (#5344)
# Update llamacpp demonstration notebook

Adds instructions for installing with a BLAS backend, and updates the model usage example.

Fixes #5071. However, this is more a prevention of similar issues in the future than a fix, since there was no problem in the framework functionality.

## Who can review?

Community members can review the PR once tests pass. Tag
maintainers/contributors who might be interested:

- @hwchase17 
- @agola11
2023-05-29 06:43:26 -07:00
Martin Holecek
44b48d9518
Fix update_document function, add test and documentation. (#5359)
# Fix for `update_document` Function in Chroma

## Summary
This pull request addresses an issue with the `update_document` function
in the Chroma class, as described in
[#5031](https://github.com/hwchase17/langchain/issues/5031#issuecomment-1562577947).
The issue was identified as an `AttributeError` raised when calling
`update_document` due to a missing corresponding method in the
`Collection` object. This fix refactors the `update_document` method in
`Chroma` to correctly interact with the `Collection` object.

## Changes
1. Fixed the `update_document` method in the `Chroma` class to correctly
call methods on the `Collection` object.
2. Added the corresponding test `test_chroma_update_document` in
`tests/integration_tests/vectorstores/test_chroma.py` to reflect the
updated method call.
3. Added an example and explanation of how to use the `update_document`
function in the Jupyter notebook tutorial for Chroma.

## Test Plan
All existing tests pass after this change. In addition, the
`test_chroma_update_document` test case now correctly checks the
functionality of `update_document`, ensuring that the function works as
expected and updates the content of documents correctly.

## Reviewers
@dev2049

This fix will ensure that users are able to use the `update_document`
function as expected, without encountering the previous
`AttributeError`. This will enhance the usability and reliability of the
Chroma class for all users.

Thank you for considering this pull request. I look forward to your
feedback and suggestions.
2023-05-29 06:39:25 -07:00
Janos Tolgyesi
5f4552391f
Add SKLearnVectorStore (#5305)
# Add SKLearnVectorStore

This PR adds SKLearnVectorStore, a simple vector store based on the NearestNeighbors implementation in the scikit-learn package. It provides a drop-in vector store implementation with minimal dependencies (scikit-learn is typically already installed in a data scientist / ML engineer environment). The vector store can be persisted to, and loaded from, JSON, BSON, and Parquet formats.

SKLearnVectorStore has a soft (dynamic) dependency on the scikit-learn, numpy, and pandas packages. Persisting to BSON requires the bson package; persisting to Parquet requires the pyarrow package.
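
A hedged usage sketch; the `persist_path` and `serializer` arguments are assumptions based on the persistence formats listed above:

```python
# Sketch only -- persist_path and serializer argument names are assumptions.
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import SKLearnVectorStore

store = SKLearnVectorStore.from_texts(
    texts=["LangChain ships many vector stores", "scikit-learn is widely installed"],
    embedding=OpenAIEmbeddings(),
    persist_path="./sklearn_store.parquet",   # optional on-disk persistence
    serializer="parquet",                     # or "json" / "bson"
)
store.persist()
results = store.similarity_search("which store has minimal dependencies?", k=1)
```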

## Before submitting

Integration tests are provided under
`tests/integration_tests/vectorstores/test_sklearn.py`

Sample usage notebook is provided under
`docs/modules/indexes/vectorstores/examples/sklear.ipynb`

Co-authored-by: Dev 2049 <dev.dev2049@gmail.com>
2023-05-28 08:17:42 -07:00
Kenton
881dfe8179
Sample Notebook for DynamoDB Chat Message History (#5351)
# Sample Notebook for DynamoDB Chat Message History

@dev2049

Adding a sample notebook for the DynamoDB Chat Message History class.
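
Roughly what the notebook walks through (a sketch, assuming the class is exported from `langchain.memory`, takes `table_name` and `session_id`, and that the DynamoDB table already exists):

```python
# Sketch only -- assumes a DynamoDB table named "SessionTable" keyed by "SessionId".
from langchain.memory import DynamoDBChatMessageHistory

history = DynamoDBChatMessageHistory(table_name="SessionTable", session_id="user-123")
history.add_user_message("hi!")
history.add_ai_message("hello, how can I help you?")
print(history.messages)   # messages persist in DynamoDB across runs
```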

2023-05-27 21:16:24 -07:00
DanConstantini
c49c6ac97a
Add Chainlit to deployment options (#5314)
# Add Chainlit to deployment options

Adds [Chainlit](https://github.com/Chainlit/chainlit) as a deployment option, with links to GitHub examples and to the Chainlit docs on the LangChain integration.

Co-authored-by: Dan Constantini <danconstantini@Dan-Constantini-MacBook.local>
2023-05-27 21:12:53 -07:00
Harrison Chase
179ddbe88b
add enum output parser (#5165) 2023-05-27 20:58:23 -07:00
Leonid Ganeline
465a970724
docs: added link to LangChain Handbook (#5311)
# added a link to LangChain Handbook

## Who can review?

Community members can review the PR once tests pass.
2023-05-27 20:57:40 -07:00
Russ
6e974b5f04
Fix typos (#5323)
# Documentation typo fixes

Simple typos in the blockchain .ipynb documentation
2023-05-26 18:55:21 -07:00
Michael Landis
f75f0dbad6
docs: improve flow of llm caching notebook (#5309)
# docs: improve flow of llm caching notebook

The notebook `llm_caching` demos various caching providers. In the
previous version, there was setup common to all examples but under the
`In Memory Caching` heading.

If a user comes and only wants to try a particular example, they will
run the common setup, then the cells for the specific provider they are
interested in. Then they will get import and variable reference errors.
This commit moves the common setup to the top to avoid this.

## Who can review?

Community members can review the PR once tests pass. Tag
maintainers/contributors who might be interested:

@dev2049
2023-05-26 13:34:11 -04:00
Shukri
58e95cd11e
Better docs for weaviate hybrid search (#5290)
# Better docs for weaviate hybrid search


Fixes: NA


## Who can review?

Community members can review the PR once tests pass. Tag
maintainers/contributors who might be interested:

@dev2049
2023-05-26 09:30:41 -07:00
Xiangrui Meng
aec642febb
LLM wrapper for Databricks (#5142)
This PR adds an LLM wrapper for Databricks. It supports two endpoint types:
* serving endpoint
* cluster driver proxy app

An integration notebook is included to show how it works.
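
A hedged sketch of the two endpoint types; the `endpoint_name`, `cluster_id`, and `cluster_driver_port` argument names are assumptions drawn from the description (see the integration notebook for the real examples):

```python
# Sketch only -- argument names are assumptions.
from langchain.llms import Databricks

# 1) Serving endpoint
llm = Databricks(endpoint_name="my-serving-endpoint")

# 2) Cluster driver proxy app (alternative to the serving endpoint)
# llm = Databricks(cluster_id="0123-456789-abcdefgh", cluster_driver_port=7777)

print(llm("How do Databricks serving endpoints work?"))
```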


Co-authored-by: Davis Chase <130488702+dev2049@users.noreply.github.com>
Co-authored-by: Gengliang Wang <gengliang@apache.org>
Co-authored-by: Dev 2049 <dev.dev2049@gmail.com>
2023-05-25 19:19:37 -07:00
Ted Martinez
1cb6498fdb
Tedma4/twilio tool (#5136)
# Add Twilio SMS tool

---------

Co-authored-by: Dev 2049 <dev.dev2049@gmail.com>
2023-05-25 19:19:22 -07:00
Moonsik Kang
a0281f5acb
Fixed typo: 'ouput' to 'output' in all documentation (#5272)
# Fixed typo: 'ouput' to 'output' in all documentation

In this instance, the typo 'ouput' was amended to 'output' in all
occurrences within the documentation. There are no dependencies required
for this change.
2023-05-25 19:18:31 -07:00
Michael Landis
7047a2c1af
feat: add Momento as a standard cache and chat message history provider (#5221)
# Add Momento as a standard cache and chat message history provider

This PR adds Momento as a standard caching provider: it implements the interface and adds integration tests and documentation. We also add Momento as a chat message history provider, along with integration tests and documentation.

[Momento](https://www.gomomento.com/) is a fully serverless cache.
Similar to S3 or DynamoDB, it requires zero configuration,
infrastructure management, and is instantly available. Users sign up for
free and get 50GB of data in/out for free every month.
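
A hedged sketch of wiring Momento in as the LLM cache; the `MomentoCache.from_client_params` helper and its arguments are assumptions, and a Momento auth token is assumed to be available in the environment:

```python
# Sketch only -- the from_client_params helper and its signature are assumptions.
from datetime import timedelta

import langchain
from langchain.cache import MomentoCache
from langchain.llms import OpenAI

# Requires a Momento auth token, e.g. in the MOMENTO_AUTH_TOKEN env var.
langchain.llm_cache = MomentoCache.from_client_params(
    cache_name="langchain",
    ttl=timedelta(days=1),
)

llm = OpenAI()
llm("Tell me a joke")   # first call hits the OpenAI API
llm("Tell me a joke")   # repeated call is served from the Momento cache
```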

## Before submitting

 We have added documentation, notebooks, and integration tests
demonstrating usage.

Co-authored-by: Dev 2049 <dev.dev2049@gmail.com>
2023-05-25 19:13:21 -07:00
Nicholas Liu
7652d2abb0
Add Multi-CSV/DF support in CSV and DataFrame Toolkits (#5009)
Add Multi-CSV/DF support in CSV and DataFrame Toolkits
* CSV and DataFrame toolkits now accept list of CSVs/DFs
* Add default prompts for many dataframes in `pandas_dataframe` toolkit

Fixes #1958
Potentially fixes #4423
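
A hedged sketch of the multi-CSV form described above; the file names are placeholders, and passing a list of paths is the new behaviour this PR adds:

```python
# Sketch only -- file paths are placeholders.
from langchain.agents import create_csv_agent
from langchain.llms import OpenAI

agent = create_csv_agent(
    OpenAI(temperature=0),
    ["titanic.csv", "titanic_age_fillna.csv"],   # a list of CSVs instead of a single path
    verbose=True,
)
agent.run("How many rows in the Age column differ between the two dataframes?")
```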

## Testing
* Add single and multi-dataframe integration tests for
`pandas_dataframe` toolkit with permutations of `include_df_in_prompt`
* Add single and multi-CSV integration tests for csv toolkit
---------

Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
2023-05-25 14:23:11 -07:00
Ravindra Marella
b3988621c5
Add C Transformers for GGML Models (#5218)
# Add C Transformers for GGML Models
I created Python bindings for the GGML models:
https://github.com/marella/ctransformers

Currently it supports GPT-2, GPT-J, GPT-NeoX, LLaMA, MPT, etc. See
[Supported
Models](https://github.com/marella/ctransformers#supported-models).


It provides a unified interface for all models:

```python
from langchain.llms import CTransformers

llm = CTransformers(model='/path/to/ggml-gpt-2.bin', model_type='gpt2')

print(llm('AI is going to'))
```

It can be used with models hosted on the Hugging Face Hub:

```py
llm = CTransformers(model='marella/gpt-2-ggml')
```

It supports streaming:

```py
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler

llm = CTransformers(model='marella/gpt-2-ggml', callbacks=[StreamingStdOutCallbackHandler()])
```

Please see [README](https://github.com/marella/ctransformers#readme) for
more details.
---------

Co-authored-by: Dev 2049 <dev.dev2049@gmail.com>
2023-05-25 13:42:44 -07:00
Davis Chase
ca88b25da6
Zep sdk version (#5267)
zep-python's sync methods no longer need an asyncio wrapper. This was
causing issues with FastAPI deployment.
Zep also now supports putting and getting of arbitrary message metadata.

Bump zep-python version to v0.30

Remove nest-asyncio from Zep example notebooks.

Modify tests to include metadata.

---------

Co-authored-by: Daniel Chalef <daniel.chalef@private.org>
Co-authored-by: Daniel Chalef <131175+danielchalef@users.noreply.github.com>
2023-05-25 13:42:10 -07:00
Janil Wörst
5525602df0
Docs link custom agent page in getting started (#5250)
# Docs: link custom agent page in getting started
2023-05-25 13:11:30 -07:00
Davis Chase
3be9ba14f3
OpenSearch top k parameter fix (#5216)
For most queries, it's the `size` parameter that determines the final number of documents to return. Since our abstractions refer to this as `k`, set this to be `k` everywhere instead of expecting a separate param. It would be great to have someone more familiar with OpenSearch validate that this is reasonable (e.g., that having `size` and what OpenSearch calls `k` be the same won't lead to any strange behavior). cc @naveentatikonda

Closes #5212
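
In practice this means the standard `k` argument now maps onto the OpenSearch `size`; a hedged sketch (connection details are placeholders):

```python
# Sketch only -- connection details are placeholders.
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import OpenSearchVectorSearch

docsearch = OpenSearchVectorSearch(
    opensearch_url="http://localhost:9200",
    index_name="my-index",
    embedding_function=OpenAIEmbeddings(),
)
# k determines how many documents come back (it is forwarded as `size`).
docs = docsearch.similarity_search("what did the president say?", k=2)
```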
2023-05-25 09:51:23 -07:00
Yves Maurer
88ed8e1cd6
Added the option of specifying a proxy for the OpenAI API (#5246)
# Added the option of specifying a proxy for the OpenAI API

Fixes #5243
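
A hedged sketch of the new option; the `openai_proxy` parameter name is an assumption based on the PR description, and the proxy URL is a placeholder:

```python
# Sketch only -- the openai_proxy parameter name is an assumption.
from langchain.chat_models import ChatOpenAI

chat = ChatOpenAI(
    temperature=0,
    openai_proxy="http://proxy.example.com:8080",   # route OpenAI API traffic through this proxy
)
```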

Co-authored-by: Yves Maurer <>
2023-05-25 09:50:25 -07:00
mwinterde
9c0cb90997
Resolve error in StructuredOutputParser docs (#5240)
# Resolve error in StructuredOutputParser docs

The documentation for `StructuredOutputParser` is currently not reproducible; that is, `output_parser.parse(output)` raises an error because the LLM returns a response in an invalid format:

```python
_input = prompt.format_prompt(question="what's the capital of france")
output = model(_input.to_string())

output

# ?
#
# ```json
# {
# 	"answer": "Paris",
# 	"source": "https://www.worldatlas.com/articles/what-is-the-capital-of-france.html"
# }
# ```
```

This was fixed by adding a question mark to the prompt.
2023-05-25 07:47:25 -07:00
Shukri
09e246f306
Weaviate: Add QnA with sources example (#5247)
# Add QnA with sources example 


Fixes: see
https://stackoverflow.com/questions/76207160/langchain-doesnt-work-with-weaviate-vector-database-getting-valueerror/76210017#76210017


## Who can review?

Community members can review the PR once tests pass. Tag
maintainers/contributors who might be interested:

@dev2049
2023-05-25 09:58:33 -04:00
Archon
5cdd9ab7e1
Add MiniMax embeddings (#5174)
- Add support for MiniMax embeddings

Doc: [MiniMax
embeddings](https://api.minimax.chat/document/guides/embeddings?id=6464722084cdc277dfaa966a)
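
A hedged usage sketch, assuming the class is exposed as `MiniMaxEmbeddings` and reads its group id and API key from environment variables (names are assumptions):

```python
# Sketch only -- class name and environment variable names are assumptions.
import os

from langchain.embeddings import MiniMaxEmbeddings

os.environ["MINIMAX_GROUP_ID"] = "<group-id>"
os.environ["MINIMAX_API_KEY"] = "<api-key>"

embeddings = MiniMaxEmbeddings()
query_vector = embeddings.embed_query("What is MiniMax?")
doc_vectors = embeddings.embed_documents(["MiniMax provides an embeddings API."])
```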

---------

Co-authored-by: Archon <archongum@outlook.com>
Co-authored-by: Dev 2049 <dev.dev2049@gmail.com>
2023-05-25 06:57:49 -07:00
Eugene Yurtsev
5cfa72a130
Bibtex integration for document loader and retriever (#5137)
# Bibtex integration

Wraps bibtexparser to retrieve a list of docs from a bibtex file (a usage sketch follows the list below).
* Metadata is taken from the bibtex entries.
* `page_content` is taken from the local PDF referenced in the `file` field of the bibtex entry, using `pymupdf`.
* If there is no valid PDF file, `page_content` is set to the `abstract` field of the bibtex entry.
* The Zotero flavour is supported, using a regex to get the file path.
* A usage example was added in `docs/modules/indexes/document_loaders/examples/bibtex.ipynb`.
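
A hedged usage sketch, assuming the loader is exposed as `BibtexLoader` and takes the path to a `.bib` file (the path below is a placeholder):

```python
# Sketch only -- class name is an assumption; the .bib path is a placeholder.
from langchain.document_loaders import BibtexLoader

loader = BibtexLoader("./references.bib")
docs = loader.load()
for doc in docs:
    # metadata comes from the bibtex entry; page_content from the linked PDF or the abstract
    print(doc.metadata.get("title"), len(doc.page_content))
```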
---------

Co-authored-by: Sébastien M. Popoff <sebastien.popoff@espci.fr>
Co-authored-by: Dev 2049 <dev.dev2049@gmail.com>
2023-05-25 00:21:31 -07:00
Keno
eff31a3361
Remove API key from docs (#5223)
I found an API key for `serpapi_api_key` while reading the docs. It seems to have been modified very recently, so I removed it in this PR.
@hwchase17 - project lead
2023-05-24 22:25:39 -07:00
Leonid Ganeline
2ad29f410d
fix a mistake in concepts.md (#5222)
# fix a mistake in concepts.md


2023-05-24 21:47:22 -07:00
Harrison Chase
a775aa6389
Harrison/vertex (#5049)
Co-authored-by: Leonid Kuligin <kuligin@google.com>
Co-authored-by: Leonid Kuligin <lkuligin@yandex.ru>
Co-authored-by: sasha-gitg <44654632+sasha-gitg@users.noreply.github.com>
Co-authored-by: Justin Flick <Justinjayflick@gmail.com>
Co-authored-by: Justin Flick <jflick@homesite.com>
2023-05-24 15:51:12 -07:00
Davis Chase
dcee8936c1
nit (#5208) 2023-05-24 12:52:20 -07:00
Alon Diament
44abe925df
Add Joplin document loader (#5153)
# Add Joplin document loader

[Joplin](https://joplinapp.org/) is an open source note-taking app.

Joplin has a [REST API](https://joplinapp.org/api/references/rest_api/)
for accessing its local database. The proposed `JoplinLoader` uses the
API to retrieve all notes in the database and their metadata. Joplin
needs to be installed and running locally, and an access token is
required.

- The PR includes an integration test.
- The PR includes an example notebook.
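
A hedged sketch of using the loader against a locally running Joplin instance; the `access_token` and `port` parameter names are assumptions based on the description:

```python
# Sketch only -- parameter names are assumptions; Joplin must be running locally.
from langchain.document_loaders import JoplinLoader

loader = JoplinLoader(
    access_token="<joplin-web-clipper-token>",   # token from Joplin's Web Clipper settings
    port=41184,                                  # default Joplin data API port
)
docs = loader.load()   # one Document per note, with note metadata attached
```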

---------

Co-authored-by: Dev 2049 <dev.dev2049@gmail.com>
2023-05-24 12:31:55 -07:00
Rodrigo Siqueira
f10be072ff
Add Iugu document loader (#5162)
Create IUGU loader
---------

Co-authored-by: Dev 2049 <dev.dev2049@gmail.com>
2023-05-24 11:47:01 -07:00
Davis Chase
2b2176a3c1
tfidf retriever (#5114)
Co-authored-by: vempaliakhil96 <vempaliakhil96@gmail.com>
2023-05-24 10:02:09 -07:00
Shukri
b00c77dc62
Improve weaviate vectorstore docs (#5201)
# Improve weaviate vectorstore docs
2023-05-24 09:31:48 -07:00
Harrison Chase
11c26ebb55
Harrison/modelscope (#5156)
Co-authored-by: thomas-yanxin <yx20001210@163.com>
Co-authored-by: Dev 2049 <dev.dev2049@gmail.com>
2023-05-24 08:06:45 -07:00
Jeff Vestal
cf19a2a59f
example usage (#5182)
Adds example usage for Elasticsearch kNN embeddings, [per this comment](https://github.com/hwchase17/langchain/pull/3401#issuecomment-1548518389).


https://github.com/hwchase17/langchain/blob/master/langchain/embeddings/elasticsearch.py
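
A hedged sketch of the kind of usage being added; the `from_credentials` constructor and its argument names are assumptions, and the model id is a placeholder for a model deployed in Elasticsearch ML:

```python
# Sketch only -- constructor and argument names are assumptions.
from langchain.embeddings.elasticsearch import ElasticsearchEmbeddings

embeddings = ElasticsearchEmbeddings.from_credentials(
    model_id="sentence-transformers__all-minilm-l6-v2",   # model deployed in Elasticsearch ML
    es_cloud_id="<cloud-id>",
    es_user="<username>",
    es_password="<password>",
)
vectors = embeddings.embed_documents(["kNN search over dense vectors"])
```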
2023-05-24 07:47:15 -07:00
Ikko Eltociear Ashimine
fff21a0b35
Update rellm_experimental.ipynb (#5189)
HuggingFace -> Hugging Face
2023-05-24 11:41:00 +00:00
Nolan Tremelling
faa26650c9
Beam (#4996)
# Beam

Calls the Beam API wrapper to deploy, and make subsequent calls to, an instance of the gpt2 LLM in a cloud deployment. This requires installing the Beam library and registering a Beam client id and client secret. Additional calls can then be made through the instance of the large language model in your code or by calling the Beam API.

---------

Co-authored-by: Dev 2049 <dev.dev2049@gmail.com>
2023-05-24 01:25:18 -07:00
Ofer Mendelevitch
c81fb88035
Vectara (#5069)
# Vectara Integration

This PR provides integration with Vectara. Implemented here are:
* langchain/vectorstore/vectara.py
* tests/integration_tests/vectorstores/test_vectara.py
* langchain/retrievers/vectara_retriever.py
And two IPYNB notebooks to do more testing:
* docs/modules/chains/index_examples/vectara_text_generation.ipynb
* docs/modules/indexes/vectorstores/examples/vectara.ipynb
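
A hedged usage sketch of the vector store side; the credential argument names (`vectara_customer_id`, `vectara_corpus_id`, `vectara_api_key`) are assumptions based on typical Vectara credentials:

```python
# Sketch only -- credential argument names are assumptions.
from langchain.vectorstores import Vectara

vectara = Vectara(
    vectara_customer_id="<customer-id>",
    vectara_corpus_id="<corpus-id>",
    vectara_api_key="<api-key>",
)
vectara.add_texts(["Vectara computes embeddings on the service side."])
docs = vectara.similarity_search("Who computes the embeddings?", k=1)
```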

---------

Co-authored-by: Dev 2049 <dev.dev2049@gmail.com>
2023-05-24 01:24:58 -07:00
Jason Bosco
9c4b43b494
Add Typesense vector store (#1674)
Closes #931.

---------

Co-authored-by: Dev 2049 <dev.dev2049@gmail.com>
2023-05-23 23:20:45 -07:00
Leonid Ganeline
33929489b9
docs: added missed document_loaders examples (#5150)
# DOCS: added missing document_loaders examples

Added the missing examples: `JSON`, `Open Document Format (ODT)`, `Wikipedia`, `tomarkdown`, and updated them to a consistent format.

## Who can review?

@hwchase17 
@dev2049
2023-05-23 21:56:41 -07:00
Daniel Quinteros
c111134a55
Clarification of the reference to the "get_text_legth" function in ge… (#5154)
# Clarification of the reference to the "get_text_legth" function in
getting_started.md

The reference to the function "get_text_legth" in the documentation did not make sense; a comment was added for clarification.

@hwchase17
2023-05-23 20:43:38 -07:00