# OpenAI

All functionality related to OpenAI

>[OpenAI](https://en.wikipedia.org/wiki/OpenAI) is an American artificial intelligence (AI) research laboratory
> consisting of the non-profit `OpenAI Incorporated`
> and its for-profit subsidiary corporation `OpenAI Limited Partnership`.
> `OpenAI` conducts AI research with the declared intention of promoting and developing a friendly AI.
> `OpenAI` systems run on an `Azure`-based supercomputing platform from `Microsoft`.

>The [OpenAI API](https://platform.openai.com/docs/models) is powered by a diverse set of models with different capabilities and price points.
>
>[ChatGPT](https://chat.openai.com) is an artificial intelligence (AI) chatbot developed by `OpenAI`.

## Installation and Setup

- Install the Python SDK with

```bash
pip install openai
```

- Get an OpenAI API key and set it as an environment variable (`OPENAI_API_KEY`)
- If you want to use OpenAI's tokenizer (only available for Python 3.9+), install it

```bash
pip install tiktoken
```
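
The key can also be set from Python before any models are created; a minimal sketch (the key value below is a placeholder):

```python
import os

# Placeholder value; use your real OpenAI API key instead.
os.environ["OPENAI_API_KEY"] = "sk-..."
```
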
## LLM

See a [usage example](/docs/integrations/llms/openai).

```python
from langchain.llms import OpenAI
```
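
A minimal usage sketch (the temperature and prompt below are illustrative; the wrapper reads `OPENAI_API_KEY` from the environment):

```python
from langchain.llms import OpenAI

# Create the LLM wrapper.
llm = OpenAI(temperature=0)

# The wrapper can be called directly on a prompt string.
print(llm("Tell me a joke"))
```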

If you are using a model hosted on `Azure`, you should use a different wrapper for that:

```python
from langchain.llms import AzureOpenAI
```

For a more detailed walkthrough of the `Azure` wrapper, see [here](/docs/integrations/llms/azure_openai_example)
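
A rough sketch of the `Azure` setup (the endpoint, API version, and deployment name below are placeholders for your own Azure resource):

```python
import os
from langchain.llms import AzureOpenAI

# Placeholder Azure OpenAI settings.
os.environ["OPENAI_API_TYPE"] = "azure"
os.environ["OPENAI_API_BASE"] = "https://<your-endpoint>.openai.azure.com/"
os.environ["OPENAI_API_VERSION"] = "2023-05-15"

# "my-deployment" is a placeholder Azure deployment name.
llm = AzureOpenAI(deployment_name="my-deployment")
```
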
## Chat model

See a [usage example](/docs/integrations/chat_models/openai).

```python
from langchain.chat_models import ChatOpenAI
```
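
A minimal usage sketch (the message content is illustrative):

```python
from langchain.chat_models import ChatOpenAI
from langchain.schema import HumanMessage

# Create the chat model wrapper.
chat = ChatOpenAI(temperature=0)

# Pass a list of messages; an AIMessage is returned.
print(chat([HumanMessage(content="Translate 'hello' into French.")]))
```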

If you are using a model hosted on `Azure`, you should use a different wrapper for that:

```python
from langchain.chat_models import AzureChatOpenAI
```

For a more detailed walkthrough of the `Azure` wrapper, see [here](/docs/integrations/chat_models/azure_openai)
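
A rough sketch, assuming the same placeholder `Azure` environment variables as in the LLM section and a placeholder chat deployment name:

```python
from langchain.chat_models import AzureChatOpenAI

# "my-chat-deployment" is a placeholder Azure deployment name.
chat = AzureChatOpenAI(deployment_name="my-chat-deployment")
```
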
## Text Embedding Model

See a [usage example](/docs/integrations/text_embedding/openai).

```python
from langchain.embeddings import OpenAIEmbeddings
```
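
A minimal usage sketch (the query text is illustrative):

```python
from langchain.embeddings import OpenAIEmbeddings

embeddings = OpenAIEmbeddings()

# embed_query returns the embedding vector as a list of floats.
vector = embeddings.embed_query("What is the meaning of life?")
print(len(vector))
```
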
## Tokenizer

There are several places you can use the `tiktoken` tokenizer. By default, it is used to count tokens
for OpenAI LLMs.
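
For example, a minimal sketch of counting tokens for a prompt with the `OpenAI` LLM wrapper (the prompt text is illustrative):

```python
from langchain.llms import OpenAI

llm = OpenAI()

# get_num_tokens uses tiktoken under the hood for OpenAI models.
print(llm.get_num_tokens("What is a language model?"))
```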

You can also use it to count tokens when splitting documents with

```python
from langchain.text_splitter import CharacterTextSplitter

CharacterTextSplitter.from_tiktoken_encoder(...)
```
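
As an illustrative sketch of filling in those arguments (the chunk sizes and text below are placeholders):

```python
from langchain.text_splitter import CharacterTextSplitter

# Chunk sizes are measured in tiktoken tokens rather than characters.
splitter = CharacterTextSplitter.from_tiktoken_encoder(chunk_size=100, chunk_overlap=0)
chunks = splitter.split_text("Some long document text...")
```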

For a more detailed walkthrough of this, see [this notebook](/docs/modules/data_connection/document_transformers/text_splitters/tiktoken)

## Document Loader

See a [usage example](/docs/integrations/document_loaders/chatgpt_loader).

```python
from langchain.document_loaders.chatgpt import ChatGPTLoader
```
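
A minimal sketch, assuming an exported `ChatGPT` conversations file at a placeholder path:

```python
from langchain.document_loaders.chatgpt import ChatGPTLoader

# "./conversations.json" is a placeholder path to a ChatGPT data export.
loader = ChatGPTLoader(log_file="./conversations.json", num_logs=1)
docs = loader.load()
```
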
## Retriever

See a [usage example](/docs/integrations/retrievers/chatgpt-plugin).

```python
from langchain.retrievers import ChatGPTPluginRetriever
```
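
A rough sketch, assuming a locally running retrieval plugin server (the URL and bearer token below are placeholders for your own deployment):

```python
from langchain.retrievers import ChatGPTPluginRetriever

# Placeholder URL and token for the ChatGPT retrieval plugin server.
retriever = ChatGPTPluginRetriever(url="http://localhost:8000", bearer_token="<token>")
docs = retriever.get_relevant_documents("What did the president say?")
```
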
## Chain

See a [usage example](/docs/guides/safety/moderation).

```python
from langchain.chains import OpenAIModerationChain
```
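
A minimal usage sketch (the input text is illustrative):

```python
from langchain.chains import OpenAIModerationChain

# Checks text against OpenAI's moderation endpoint.
moderation_chain = OpenAIModerationChain()
print(moderation_chain.run("This is okay"))
```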