docs: integrations updates 20 (#27210)

Added missing provider pages. Added descriptions and links.

Co-authored-by: Erick Friis <erick@langchain.dev>
Leonid Ganeline 2024-10-15 09:38:12 -07:00 committed by GitHub
parent f3925d71b9
commit fead4749b9
5 changed files with 140 additions and 0 deletions

View File

@@ -0,0 +1,21 @@
# KoNLPy
>[KoNLPy](https://konlpy.org/) is a Python package for natural language processing (NLP)
> of the Korean language.
## Installation and Setup
You need to install the `konlpy` python package.
```bash
pip install konlpy
```
## Text splitter
See a [usage example](/docs/how_to/split_by_token/#konlpy).
```python
from langchain_text_splitters import KonlpyTextSplitter
```
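For reference, a minimal sketch of splitting a Korean document into chunks; the sample text and chunk size are placeholders:

```python
from langchain_text_splitters import KonlpyTextSplitter

# KonlpyTextSplitter relies on KoNLPy to detect Korean sentence boundaries,
# then merges sentences into chunks of roughly `chunk_size` characters.
text_splitter = KonlpyTextSplitter(chunk_size=200, chunk_overlap=0)

korean_text = "..."  # any Korean document
chunks = text_splitter.split_text(korean_text)
```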

View File

@@ -0,0 +1,32 @@
# Kùzu
>[Kùzu](https://kuzudb.com/) is a highly scalable, extremely fast, easy-to-use [embeddable graph database](https://github.com/kuzudb/kuzu),
> developed by a company based in Waterloo, Ontario, Canada.
## Installation and Setup
You need to install the `kuzu` python package.
```bash
pip install kuzu
```
## Graph database
See a [usage example](/docs/integrations/graphs/kuzu_db).
```python
from langchain_community.graphs import KuzuGraph
```
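A minimal sketch of opening an embedded Kùzu database and wrapping it for LangChain; the database path is an illustrative placeholder:

```python
import kuzu

from langchain_community.graphs import KuzuGraph

# Open (or create) an embedded Kùzu database on local disk and wrap it so
# LangChain can read its schema and run Cypher queries against it.
db = kuzu.Database("example_kuzu_db")  # placeholder path
graph = KuzuGraph(db)

print(graph.get_schema)
```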
## Chain
See a [usage example](/docs/integrations/graphs/kuzu_db/#creating-kuzuqachain).
```python
from langchain.chains import KuzuQAChain
```
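A hedged sketch of wiring the chain to a graph built as above; the chat model and question are illustrative choices, and recent versions also expect you to opt in to executing generated Cypher:

```python
from langchain.chains import KuzuQAChain
from langchain_openai import ChatOpenAI

# Translates a natural-language question into Cypher, runs it against the
# Kùzu graph, and phrases the query result as an answer.
chain = KuzuQAChain.from_llm(
    llm=ChatOpenAI(model="gpt-4o-mini", temperature=0),  # any chat model works
    graph=graph,  # the KuzuGraph from the previous snippet
    verbose=True,
    allow_dangerous_requests=True,  # acknowledge that generated Cypher is executed
)
chain.invoke("Who acted in which movies?")  # placeholder question
```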

View File

@@ -0,0 +1,32 @@
# LlamaIndex
>[LlamaIndex](https://www.llamaindex.ai/) is the leading data framework for building LLM applications.
## Installation and Setup
You need to install the `llama-index` python package.
```bash
pip install llama-index
```
See the [installation instructions](https://docs.llamaindex.ai/en/stable/getting_started/installation/).
## Retrievers
### LlamaIndexRetriever
>It is used for question answering with sources over a LlamaIndex data structure.
```python
from langchain_community.retrievers.llama_index import LlamaIndexRetriever
```
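A rough sketch, assuming a compatible `llama-index` version and an index built over local files; the directory path and query are placeholders:

```python
from llama_index.core import SimpleDirectoryReader, VectorStoreIndex

from langchain_community.retrievers.llama_index import LlamaIndexRetriever

# Build a LlamaIndex vector index over local documents, then expose it
# to LangChain as a standard retriever.
documents = SimpleDirectoryReader("./data").load_data()  # placeholder path
index = VectorStoreIndex.from_documents(documents)

retriever = LlamaIndexRetriever(index=index)
docs = retriever.invoke("What does the report say about revenue?")  # placeholder query
```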
### LlamaIndexGraphRetriever
>It is used for question answering with sources over a LlamaIndex graph data structure.
```python
from langchain_community.retrievers.llama_index import LlamaIndexGraphRetriever
```
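Construction follows the same pattern; a sketch assuming a composed LlamaIndex graph (`graph`) and its per-index query configs are already in hand:

```python
from langchain_community.retrievers.llama_index import LlamaIndexGraphRetriever

# `graph` is a LlamaIndex ComposableGraph built elsewhere; `query_configs`
# holds the per-index query settings LlamaIndex expects.
retriever = LlamaIndexGraphRetriever(graph=graph, query_configs=[])
docs = retriever.invoke("Summarize the shared themes.")  # placeholder query
```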

View File

@@ -0,0 +1,24 @@
# LlamaEdge
>[LlamaEdge](https://llamaedge.com/docs/intro/) is the easiest and fastest way to run customized
> and fine-tuned LLMs locally or on the edge.
>
>* Lightweight inference apps: `LlamaEdge` weighs in at MBs instead of GBs
>* Native and GPU-accelerated performance
>* Supports many GPU and hardware accelerators
>* Supports many optimized inference libraries
>* Wide selection of AI / LLM models
## Installation and Setup
See the [installation instructions](https://llamaedge.com/docs/user-guide/quick-start-command).
## Chat models
See a [usage example](/docs/integrations/chat/llama_edge).
```python
from langchain_community.chat_models.llama_edge import LlamaEdgeChatService
```
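A minimal sketch of chatting with a running LlamaEdge API server; the `service_url` below is a placeholder for wherever your server is exposed:

```python
from langchain_core.messages import HumanMessage, SystemMessage

from langchain_community.chat_models.llama_edge import LlamaEdgeChatService

# Point the chat model at an OpenAI-compatible LlamaEdge API server.
chat = LlamaEdgeChatService(service_url="https://your-llamaedge-host")  # placeholder URL

messages = [
    SystemMessage(content="You are a helpful assistant."),
    HumanMessage(content="What is the capital of France?"),
]
print(chat.invoke(messages).content)
```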

View File

@@ -0,0 +1,31 @@
# llamafile
>[llamafile](https://github.com/Mozilla-Ocho/llamafile) lets you distribute and run LLMs
> with a single file.
>
>`llamafile` makes open LLMs much more accessible to both developers and end users.
> `llamafile` does this by combining [llama.cpp](https://github.com/ggerganov/llama.cpp) with
> [Cosmopolitan Libc](https://github.com/jart/cosmopolitan) into one framework that collapses
> all the complexity of LLMs down to a single-file executable (called a "llamafile")
> that runs locally on most computers, with no installation.
## Installation and Setup
See the [installation instructions](https://github.com/Mozilla-Ocho/llamafile?tab=readme-ov-file#quickstart).
## LLMs
See a [usage example](/docs/integrations/llms/llamafile).
```python
from langchain_community.llms.llamafile import Llamafile
```
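A minimal sketch, assuming a llamafile is already running locally in server mode (it listens on `http://localhost:8080` by default):

```python
from langchain_community.llms.llamafile import Llamafile

# Talks to the local llamafile server; base_url is shown explicitly here,
# though it matches the default.
llm = Llamafile(base_url="http://localhost:8080")
print(llm.invoke("Tell me a joke about llamas."))
```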
## Embedding models
See a [usage example](/docs/integrations/text_embedding/llamafile).
```python
from langchain_community.embeddings import LlamafileEmbeddings
```
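A minimal sketch, assuming the llamafile server was started with embeddings enabled (e.g. the `--embedding` flag):

```python
from langchain_community.embeddings import LlamafileEmbeddings

# Calls the llamafile server's embedding endpoint at the default local address.
embedder = LlamafileEmbeddings()
vector = embedder.embed_query("Llamas are members of the camelid family.")
print(len(vector))
```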