# OpenAI

This page covers how to use the OpenAI ecosystem within LangChain. It is broken into two parts: installation and setup, and then references to specific OpenAI wrappers.

## Installation and Setup

- Install the Python SDK with `pip install openai`
- Get an OpenAI API key and set it as an environment variable (`OPENAI_API_KEY`), as sketched after this list
- If you want to use OpenAI's tokenizer (only available for Python 3.9+), install it with `pip install tiktoken`
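
Below is a minimal sketch of the key setup step. The key value is a placeholder; in practice you would usually export the variable in your shell or load it from a secrets manager rather than hard-coding it:

```python
import os

# Make the key available to the OpenAI SDK and the LangChain wrappers.
# "sk-..." is a placeholder, not a real key.
os.environ["OPENAI_API_KEY"] = "sk-..."
```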

## Wrappers

### LLM

There exists an OpenAI LLM wrapper, which you can access with

```python
from langchain.llms import OpenAI
```
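
As a rough usage sketch (the model name and temperature below are illustrative assumptions, not values taken from this page):

```python
from langchain.llms import OpenAI

# Construct the wrapper; model_name and temperature are illustrative choices.
llm = OpenAI(model_name="text-davinci-003", temperature=0.7)

# The wrapper can be called directly on a prompt string.
print(llm("Suggest a name for a bakery."))
```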

If you are using a model hosted on Azure, you should use a different wrapper for that:

```python
from langchain.llms import AzureOpenAI
```
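
A minimal sketch of how the Azure wrapper might be used, assuming a `deployment_name` pointing at your Azure deployment and the Azure-specific environment variables (`OPENAI_API_TYPE`, `OPENAI_API_BASE`, `OPENAI_API_VERSION`, `OPENAI_API_KEY`) already set; see the notebook for the exact setup:

```python
from langchain.llms import AzureOpenAI

# "my-deployment" is a placeholder for your Azure OpenAI deployment name.
llm = AzureOpenAI(deployment_name="my-deployment")
print(llm("Suggest a name for a bakery."))
```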

For a more detailed walkthrough of the Azure wrapper, see this notebook

### Embeddings

There exists an OpenAI Embeddings wrapper, which you can access with

```python
from langchain.embeddings import OpenAIEmbeddings
```
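
A minimal usage sketch (the input strings are placeholders): the wrapper exposes `embed_query` for a single text and `embed_documents` for a batch:

```python
from langchain.embeddings import OpenAIEmbeddings

embeddings = OpenAIEmbeddings()

# Embed a single query string; returns a list of floats.
query_vector = embeddings.embed_query("What is LangChain?")

# Embed a batch of documents; returns one vector per input text.
doc_vectors = embeddings.embed_documents(["First document.", "Second document."])
```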

For a more detailed walkthrough of this, see this notebook

### Tokenizer

There are several places you can use the tiktoken tokenizer. By default, it is used to count tokens for OpenAI LLMs.

You can also use it to count tokens when splitting documents with

```python
from langchain.text_splitter import CharacterTextSplitter
CharacterTextSplitter.from_tiktoken_encoder(...)
```
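
As a rough sketch of what the elided arguments might look like (the chunk sizes below are illustrative assumptions, not recommended values):

```python
from langchain.text_splitter import CharacterTextSplitter

some_long_text = "..."  # placeholder for the document you want to split

# Measure chunk sizes in tiktoken tokens rather than characters;
# chunk_size and chunk_overlap are illustrative values.
text_splitter = CharacterTextSplitter.from_tiktoken_encoder(
    chunk_size=1000, chunk_overlap=0
)
chunks = text_splitter.split_text(some_long_text)
```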

For a more detailed walkthrough of this, see this notebook

### Moderation

You can also access the OpenAI content moderation endpoint with

```python
from langchain.chains import OpenAIModerationChain
```
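
A minimal usage sketch, assuming the chain's default behavior of passing compliant text through and substituting a policy-violation message for flagged text:

```python
from langchain.chains import OpenAIModerationChain

moderation_chain = OpenAIModerationChain()

# Send text through the OpenAI moderation endpoint.
print(moderation_chain.run("This is an innocuous sentence."))
```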

For a more detailed walkthrough of this, see this notebook