Use the following code to test:

```python
import os

from langchain.llms import OpenAI
from langchain.chains.api import podcast_docs
from langchain.chains import APIChain

# Get api key here: https://openai.com/pricing
os.environ["OPENAI_API_KEY"] = "sk-xxxxx"
# Get api key here: https://www.listennotes.com/api/pricing/
listen_api_key = 'xxx'

llm = OpenAI(temperature=0)
headers = {"X-ListenAPI-Key": listen_api_key}
chain = APIChain.from_llm_and_api_docs(llm, podcast_docs.PODCAST_DOCS, headers=headers, verbose=True)
chain.run("Search for 'silicon valley bank' podcast episodes, audio length is more than 30 minutes, return only 1 results")
```

Known issues: the API response data might be too big, in which case we get an error like: `openai.error.InvalidRequestError: This model's maximum context length is 4097 tokens, however you requested 6733 tokens (6477 in your prompt; 256 for the completion). Please reduce your prompt; or completion length.`
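One possible mitigation (not part of this commit, just a sketch) is to cap how much of the raw API response is fed back to the model. The helper below is hypothetical; it relies on the rough ~4-characters-per-token heuristic rather than a real tokenizer, so the cutoff is approximate:

```python
def truncate_to_token_budget(text: str, max_tokens: int = 3500) -> str:
    """Roughly cap `text` so the prompt stays under the model's 4097-token window.

    Assumes ~4 characters per token on average; swap in a real tokenizer
    (e.g. tiktoken) if an exact count is needed.
    """
    max_chars = max_tokens * 4
    return text if len(text) <= max_chars else text[:max_chars]
```

The truncated response can then be passed to the answer step instead of the full JSON payload; alternatively, using a model with a larger context window avoids the error entirely.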
Directory contents:

- agents
- chains
- chat
- document_loaders
- indexes
- llms
- memory
- prompts
- utils
- agents.rst
- chains.rst
- chat.rst
- document_loaders.rst
- indexes.rst
- llms.rst
- memory.rst
- prompts.rst
- state_of_the_union.txt
- utils.rst