Mirror of https://github.com/hwchase17/langchain, synced 2024-11-13 19:10:52 +00:00

docs: integrations updates 20 (#27210)

Added missing provider pages. Added descriptions and links.

Co-authored-by: Erick Friis <erick@langchain.dev>

This commit is contained in:
parent f3925d71b9
commit fead4749b9
docs/docs/integrations/providers/konlpy.mdx (new file, 21 lines)
@@ -0,0 +1,21 @@

# KoNLPy

>[KoNLPy](https://konlpy.org/) is a Python package for natural language processing (NLP)
> of the Korean language.

## Installation and Setup

You need to install the `konlpy` Python package.

```bash
pip install konlpy
```

## Text splitter

See a [usage example](/docs/how_to/split_by_token/#konlpy).

```python
from langchain_text_splitters import KonlpyTextSplitter
```
docs/docs/integrations/providers/kuzu.mdx (new file, 32 lines)
@@ -0,0 +1,32 @@

# Kùzu

>[Kùzu](https://kuzudb.com/) is a company based in Waterloo, Ontario, Canada.
> It provides a highly scalable, extremely fast, and easy-to-use [embeddable graph database](https://github.com/kuzudb/kuzu).

## Installation and Setup

You need to install the `kuzu` Python package.

```bash
pip install kuzu
```

## Graph database

See a [usage example](/docs/integrations/graphs/kuzu_db).

```python
from langchain_community.graphs import KuzuGraph
```

## Chain

See a [usage example](/docs/integrations/graphs/kuzu_db/#creating-kuzuqachain).

```python
from langchain.chains import KuzuQAChain
```
docs/docs/integrations/providers/llama_index.mdx (new file, 32 lines)
@@ -0,0 +1,32 @@

# LlamaIndex

>[LlamaIndex](https://www.llamaindex.ai/) is the leading data framework for building LLM applications.

## Installation and Setup

You need to install the `llama-index` Python package.

```bash
pip install llama-index
```

See the [installation instructions](https://docs.llamaindex.ai/en/stable/getting_started/installation/).

## Retrievers

### LlamaIndexRetriever

>Used for question answering with sources over a LlamaIndex data structure.

```python
from langchain_community.retrievers.llama_index import LlamaIndexRetriever
```

### LlamaIndexGraphRetriever

>Used for question answering with sources over a LlamaIndex graph data structure.

```python
from langchain_community.retrievers.llama_index import LlamaIndexGraphRetriever
```
docs/docs/integrations/providers/llamaedge.mdx (new file, 24 lines)
@@ -0,0 +1,24 @@

# LlamaEdge

>[LlamaEdge](https://llamaedge.com/docs/intro/) is the easiest and fastest way to run customized
> and fine-tuned LLMs locally or on the edge.
>
>* Lightweight inference apps: `LlamaEdge` is measured in MBs instead of GBs
>* Native and GPU-accelerated performance
>* Supports many GPU and hardware accelerators
>* Supports many optimized inference libraries
>* Wide selection of AI / LLM models

## Installation and Setup

See the [installation instructions](https://llamaedge.com/docs/user-guide/quick-start-command).

## Chat models

See a [usage example](/docs/integrations/chat/llama_edge).

```python
from langchain_community.chat_models.llama_edge import LlamaEdgeChatService
```
docs/docs/integrations/providers/llamafile.mdx (new file, 31 lines)
@@ -0,0 +1,31 @@

# llamafile

>[llamafile](https://github.com/Mozilla-Ocho/llamafile) lets you distribute and run LLMs
> with a single file.

>`llamafile` makes open LLMs much more accessible to both developers and end users.
> It does this by combining [llama.cpp](https://github.com/ggerganov/llama.cpp) with
> [Cosmopolitan Libc](https://github.com/jart/cosmopolitan) into one framework that collapses
> all the complexity of LLMs down to a single-file executable (called a "llamafile")
> that runs locally on most computers, with no installation.

## Installation and Setup

See the [installation instructions](https://github.com/Mozilla-Ocho/llamafile?tab=readme-ov-file#quickstart).

## LLMs

See a [usage example](/docs/integrations/llms/llamafile).

```python
from langchain_community.llms.llamafile import Llamafile
```

## Embedding models

See a [usage example](/docs/integrations/text_embedding/llamafile).

```python
from langchain_community.embeddings import LlamafileEmbeddings
```