# Docs: compound ecosystem and integrations (#4870)

**Problem statement:** We have a large overlap between the
References/Integrations and Ecosystem/LangChain Ecosystem pages. This
confuses users, and it creates a situation where a new integration is
added to only one of these pages, which causes even more confusion.
- Removed the References/Integrations page (all its information will be
moved into the individual integration pages in the next PR).
- Renamed Ecosystem/LangChain Ecosystem to Integrations/Integrations.
I like the Ecosystem term: it is more generic and semantically richer
than the Integration term, but it mentally overloads users, while the
`integration` term is more concrete.
UPDATE: after discussion, Ecosystem stays as the section name.
Ecosystem/Integrations is the page (in place of Ecosystem/LangChain
Ecosystem).

As a result, users get a single place to start with each individual
integration.

@@ -134,25 +134,24 @@ Reference Docs
    :hidden:

    ./reference/installation.md
-   ./reference/integrations.md
    ./reference.rst

-LangChain Ecosystem
--------------------
+Ecosystem
+------------

-| Guides for how other companies/products can be used with LangChain.
+| Guides for how other products can be used with LangChain.

-- `LangChain Ecosystem <./ecosystem.html>`_
+- `Integrations <./integrations.html>`_

 .. toctree::
-   :maxdepth: 1
+   :maxdepth: 2
    :glob:
    :caption: Ecosystem
    :name: ecosystem
    :hidden:

-   ./ecosystem.rst
+   ./integrations.rst

 Additional Resources

@@ -1,12 +1,13 @@
-LangChain Ecosystem
+Integrations
 ===================

-Guides for how other companies/products can be used with LangChain
+LangChain integrates with many LLMs, systems, and products.

-Groups
-----------
+Integrations by Module
+--------------------------------

-LangChain provides integration with many LLMs and systems:
+Integrations grouped by the core LangChain module they map to:

 - `LLM Providers <./modules/models/llms/integrations.html>`_
 - `Chat Model Providers <./modules/models/chat/integrations.html>`_
@@ -18,12 +19,15 @@ LangChain provides integration with many LLMs and systems:
 - `Tool Providers <./modules/agents/tools.html>`_
 - `Toolkit Integrations <./modules/agents/toolkits.html>`_

-Companies / Products
---------------------
+All Integrations
+-------------------------------------------

+A comprehensive list of LLMs, systems, and products integrated with LangChain:

 .. toctree::
    :maxdepth: 1
    :glob:

-   ecosystem/*
+   integrations/*

@@ -1,4 +1,4 @@
-# Google Search Wrapper
+# Google Search

 This page covers how to use the Google Search API within LangChain.
 It is broken into two parts: installation and setup, and then references to the specific Google Search wrapper.
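For quick orientation, here is a minimal usage sketch (an illustrative addition, not part of this diff); it assumes `google-api-python-client` is installed, the `GOOGLE_API_KEY` and `GOOGLE_CSE_ID` environment variables are set, and the wrapper import path matches this LangChain version:

```python
from langchain.utilities import GoogleSearchAPIWrapper

# Credentials are read from GOOGLE_API_KEY and GOOGLE_CSE_ID, or can be
# passed explicitly as google_api_key / google_cse_id keyword arguments.
search = GoogleSearchAPIWrapper()
print(search.run("What is LangChain?"))
```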

@@ -1,4 +1,4 @@
-# Google Serper Wrapper
+# Google Serper

 This page covers how to use the [Serper](https://serper.dev) Google Search API within LangChain. Serper is a low-cost Google Search API that can be used to add answer box, knowledge graph, and organic results data from Google Search.
 It is broken into two parts: setup, and then references to the specific Google Serper wrapper.
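As a quick illustration (not part of this diff), a minimal sketch assuming the `SERPER_API_KEY` environment variable is set and the import path matches this LangChain version:

```python
from langchain.utilities import GoogleSerperAPIWrapper

# The API key is read from SERPER_API_KEY, or can be passed as serper_api_key.
search = GoogleSerperAPIWrapper()
print(search.run("How long is the Golden Gate Bridge?"))
```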

@@ -118,7 +118,7 @@ Below is a list of all supported tools and relevant information:
 - Notes: Uses the Google Custom Search API
 - Requires LLM: No
 - Extra Parameters: `google_api_key`, `google_cse_id`
-- For more information on this, see [this page](../../../ecosystem/google_search.md)
+- For more information on this, see [this page](../../../integrations/google_search.md)

 **searx-search**
@@ -135,7 +135,7 @@ Below is a list of all supported tools and relevant information:
 - Notes: Calls the [serper.dev](https://serper.dev) Google Search API and then parses results.
 - Requires LLM: No
 - Extra Parameters: `serper_api_key`
-- For more information on this, see [this page](../../../ecosystem/google_serper.md)
+- For more information on this, see [this page](../../../integrations/google_serper.md)

 **wikipedia**

@@ -1,68 +0,0 @@
-# Integrations
-
-Besides installing this Python package, you will also need to install additional packages and set environment variables depending on which chains you want to use.
-
-Note: these packages are not included in the dependencies by default because, as this package scales, we do not want to force dependencies that are not needed.
-
-The following use cases require specific installs and api keys:
-
-- _OpenAI_:
-  - Install requirements with `pip install openai`
-  - Get an OpenAI api key and either set it as an environment variable (`OPENAI_API_KEY`) or pass it to the LLM constructor as `openai_api_key`.
-- _Cohere_:
-  - Install requirements with `pip install cohere`
-  - Get a Cohere api key and either set it as an environment variable (`COHERE_API_KEY`) or pass it to the LLM constructor as `cohere_api_key`.
-- _GooseAI_:
-  - Install requirements with `pip install openai`
-  - Get a GooseAI api key and either set it as an environment variable (`GOOSEAI_API_KEY`) or pass it to the LLM constructor as `gooseai_api_key`.
-- _Hugging Face Hub_:
-  - Install requirements with `pip install huggingface_hub`
-  - Get a Hugging Face Hub api token and either set it as an environment variable (`HUGGINGFACEHUB_API_TOKEN`) or pass it to the LLM constructor as `huggingfacehub_api_token`.
-- _Petals_:
-  - Install requirements with `pip install petals`
-  - Get a Hugging Face api key and either set it as an environment variable (`HUGGINGFACE_API_KEY`) or pass it to the LLM constructor as `huggingface_api_key`.
-- _CerebriumAI_:
-  - Install requirements with `pip install cerebrium`
-  - Get a Cerebrium api key and either set it as an environment variable (`CEREBRIUMAI_API_KEY`) or pass it to the LLM constructor as `cerebriumai_api_key`.
-- _PromptLayer_:
-  - Install requirements with `pip install promptlayer` (be sure to be on version 0.1.62 or higher)
-  - Get an API key from [promptlayer.com](http://www.promptlayer.com) and set it using `promptlayer.api_key=<API KEY>`
-- _SerpAPI_:
-  - Install requirements with `pip install google-search-results`
-  - Get a SerpAPI api key and either set it as an environment variable (`SERPAPI_API_KEY`) or pass it to the LLM constructor as `serpapi_api_key`.
-- _GoogleSearchAPI_:
-  - Install requirements with `pip install google-api-python-client`
-  - Get a Google api key and either set it as an environment variable (`GOOGLE_API_KEY`) or pass it to the LLM constructor as `google_api_key`. You will also need to set the `GOOGLE_CSE_ID` environment variable to your custom search engine id. You can pass it to the LLM constructor as `google_cse_id` as well.
-- _WolframAlphaAPI_:
-  - Install requirements with `pip install wolframalpha`
-  - Get a Wolfram Alpha api key and either set it as an environment variable (`WOLFRAM_ALPHA_APPID`) or pass it to the LLM constructor as `wolfram_alpha_appid`.
-- _NatBot_:
-  - Install requirements with `pip install playwright`
-- _Wikipedia_:
-  - Install requirements with `pip install wikipedia`
-- _Elasticsearch_:
-  - Install requirements with `pip install elasticsearch`
-  - Set up an Elasticsearch backend. If you want to run it locally, [this](https://www.elastic.co/guide/en/elasticsearch/reference/7.17/getting-started.html) is a good guide.
-- _FAISS_:
-  - Install requirements with `pip install faiss` for Python 3.7 and `pip install faiss-cpu` for Python 3.10+.
-- _MyScale_:
-  - Install requirements with `pip install clickhouse-connect`. For documentation, please refer to [this document](https://docs.myscale.com/en/overview/).
-- _Manifest_:
-  - Install requirements with `pip install manifest-ml` (Note: this is only available in Python 3.8+ currently).
-- _OpenSearch_:
-  - Install requirements with `pip install opensearch-py`
-  - If you want to set up OpenSearch locally, see [here](https://opensearch.org/docs/latest/).
-- _DeepLake_:
-  - Install requirements with `pip install deeplake`
-- _LlamaCpp_:
-  - Install requirements with `pip install llama-cpp-python`
-  - Download a model and convert it following the [llama.cpp instructions](https://github.com/ggerganov/llama.cpp)
-- _Milvus_:
-  - Install requirements with `pip install pymilvus`
-  - To set up a local cluster, take a look [here](https://milvus.io/docs).
-- _Zilliz_:
-  - Install requirements with `pip install pymilvus`
-  - To get up and running, take a look [here](https://zilliz.com/doc/quick_start).
-
-If you are using the `NLTKTextSplitter` or the `SpacyTextSplitter`, you will also need to install the appropriate models. For example, if you want to use the `SpacyTextSplitter`, you will need to install the `en_core_web_sm` model with `python -m spacy download en_core_web_sm`. Similarly, if you want to use the `NLTKTextSplitter`, you will need to install the `punkt` model with `python -m nltk.downloader punkt`.
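To make the two credential-passing options described in the deleted page concrete, here is a minimal sketch (an illustrative addition, not part of this diff) using the OpenAI integration; the same pattern applies to the other providers listed:

```python
import os

from langchain.llms import OpenAI

# Option 1: set the key as an environment variable (placeholder value shown).
os.environ["OPENAI_API_KEY"] = "sk-..."
llm = OpenAI()

# Option 2: pass the key directly to the LLM constructor.
llm = OpenAI(openai_api_key="sk-...")
```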

@@ -315,7 +315,7 @@ class BaseOutputParser(BaseModel, ABC, Generic[T]):
     def parse(self, text: str) -> T:
         """Parse the output of an LLM call.

-        A method which takes in a string (assumed output of language model)
+        A method which takes in a string (assumed output of a language model)
         and parses it into some structure.

         Args:
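For context, here is a minimal sketch of a concrete subclass implementing `parse` (an illustrative example, not part of this diff, assuming `BaseOutputParser` is importable from `langchain.schema`):

```python
from typing import List

from langchain.schema import BaseOutputParser


class CommaSeparatedListOutputParser(BaseOutputParser):
    """Illustrative parser that splits an LLM's text output on commas."""

    def parse(self, text: str) -> List[str]:
        # "red, green, blue" -> ["red", "green", "blue"]
        return [item.strip() for item in text.split(",")]
```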

@@ -7,7 +7,7 @@ supports the `OpenSearch
 <https://github.com/dewitt/opensearch/blob/master/opensearch-1-1-draft-6.md>`_
 specification.

-More details on the installation instructions are `here <../../ecosystem/searx.html>`_.
+More details on the installation instructions are `here <../../integrations/searx.html>`_.

 For the search API refer to https://docs.searxng.org/dev/search_api.html
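As a usage sketch (illustrative, not part of this diff), assuming a SearxNG instance is reachable on localhost and the import path matches this LangChain version:

```python
from langchain.utilities import SearxSearchWrapper

# Point the wrapper at a running SearxNG instance (assumed local here).
search = SearxSearchWrapper(searx_host="http://localhost:8888")
print(search.run("What is the capital of France?"))
```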
