langchain-openai
This package contains the LangChain integrations for OpenAI through their openai
SDK.
Installation and Setup
- Install the LangChain partner package
pip install langchain-openai
- Get an OpenAI API key and set it as an environment variable (OPENAI_API_KEY)
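If the key is not already exported in your shell, one way to supply it at runtime is shown in this minimal sketch (the prompt text is only an example):

```python
import getpass
import os

# Prompt for the key only if it is not already present in the environment.
if "OPENAI_API_KEY" not in os.environ:
    os.environ["OPENAI_API_KEY"] = getpass.getpass("Enter your OpenAI API key: ")
```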
LLM
See a usage example.
from langchain_openai import OpenAI
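A minimal usage sketch (the model name and prompt are illustrative):

```python
from langchain_openai import OpenAI

# Completion-style LLM; reads OPENAI_API_KEY from the environment.
llm = OpenAI(model="gpt-3.5-turbo-instruct", temperature=0)
print(llm.invoke("Say hello in one short sentence."))
```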
If you are using a model hosted on Azure, you should use a different wrapper:
from langchain_openai import AzureOpenAI
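As a rough sketch, the Azure wrapper is typically configured with a deployment name and API version; the values below are placeholders, and the endpoint and key are assumed to come from the AZURE_OPENAI_ENDPOINT and AZURE_OPENAI_API_KEY environment variables:

```python
from langchain_openai import AzureOpenAI

# Placeholder deployment name and API version; the endpoint and key are read
# from AZURE_OPENAI_ENDPOINT and AZURE_OPENAI_API_KEY if not passed explicitly.
llm = AzureOpenAI(
    azure_deployment="my-completions-deployment",
    api_version="2024-02-01",
)
print(llm.invoke("Say hello in one short sentence."))
```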
For a more detailed walkthrough of the Azure wrapper, see here.
Chat model
See a usage example.
from langchain_openai import ChatOpenAI
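A minimal usage sketch (the model name is illustrative):

```python
from langchain_openai import ChatOpenAI

# Chat model; reads OPENAI_API_KEY from the environment.
chat = ChatOpenAI(model="gpt-4o-mini", temperature=0)
response = chat.invoke("Translate 'bonjour' to English.")
print(response.content)
```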
If you are using a model hosted on Azure, you should use a different wrapper:
from langchain_openai import AzureChatOpenAI
For a more detailed walkthrough of the Azure wrapper, see here.
Text Embedding Model
See a usage example.
from langchain_openai import OpenAIEmbeddings
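A minimal usage sketch (the model name is illustrative):

```python
from langchain_openai import OpenAIEmbeddings

# Embedding model; reads OPENAI_API_KEY from the environment.
embeddings = OpenAIEmbeddings(model="text-embedding-3-small")
vector = embeddings.embed_query("What is the capital of France?")
print(len(vector))  # dimensionality of the returned embedding
```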
If you are using a model hosted on Azure, you should use a different wrapper:
from langchain_openai import AzureOpenAIEmbeddings
For a more detailed walkthrough of the Azure wrapper, see here.