docs: deprecation of OpenAI functions agent, astream_events docstring (#18164)
Co-authored-by: Hershenson, Isaac (Extern) <isaac.hershenson.extern@bayer04.de>
Co-authored-by: Bagatur <baskaryan@gmail.com>
commit 733367b795 (parent b0ccaf5917)
@@ -17,6 +17,19 @@
    "source": [
     "# OpenAI functions\n",
     "\n",
+    ":::{.callout-caution}\n",
+    "\n",
+    "OpenAI API has deprecated `functions` in favor of `tools`. The difference between the two is that the `tools` API allows the model to request that multiple functions be invoked at once, which can reduce response times in some architectures. It's recommended to use the tools agent for OpenAI models.\n",
+    "\n",
+    "See the following links for more information:\n",
+    "\n",
+    "[OpenAI Tools](./openai_tools)\n",
+    "\n",
+    "[OpenAI chat create](https://platform.openai.com/docs/api-reference/chat/create)\n",
+    "\n",
+    "[OpenAI function calling](https://platform.openai.com/docs/guides/function-calling)\n",
+    ":::\n",
+    "\n",
     "Certain OpenAI models (like gpt-3.5-turbo-0613 and gpt-4-0613) have been fine-tuned to detect when a function should be called and respond with the inputs that should be passed to the function. In an API call, you can describe functions and have the model intelligently choose to output a JSON object containing arguments to call those functions. The goal of the OpenAI Function APIs is to more reliably return valid and useful function calls than a generic text completion or chat API.\n",
     "\n",
     "A number of open source models have adopted the same format for function calls and have also fine-tuned the model to detect when a function should be called.\n",
@@ -25,19 +38,7 @@
     "\n",
     "Install `openai`, `tavily-python` packages which are required as the LangChain packages call them internally.\n",
     "\n",
-    "\n",
-    ":::info\n",
-    "\n",
-    "OpenAI API has deprecated `functions` in favor of `tools`. The difference between the two is that the `tools` API allows the model to request that multiple functions be invoked at once, which can reduce response times in some architectures. It's recommended to use the tools agent for OpenAI models.\n",
-    "\n",
-    "See the following links for more information:\n",
-    "\n",
-    "[OpenAI chat create](https://platform.openai.com/docs/api-reference/chat/create)\n",
-    "\n",
-    "[OpenAI function calling](https://platform.openai.com/docs/guides/function-calling)\n",
-    ":::\n",
-    "\n",
-    ":::tip\n",
+    ":::{.callout-tip}\n",
     "The `functions` format remains relevant for open source models and providers that have adopted it, and this agent is expected to work for such models.\n",
     ":::\n"
    ]
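
The callout above recommends migrating to the tools agent. For comparison, here is a minimal sketch of that setup, assuming the era-appropriate `langchain`, `langchain-openai`, and `langchain-community` packages; the hub prompt handle, model name, and Tavily tool are illustrative choices, not taken from this commit.

```python
# Sketch: the tools agent recommended over the deprecated functions agent.
# Assumes early-2024 package layout; prompt handle and tool choice are illustrative.
from langchain import hub
from langchain.agents import AgentExecutor, create_openai_tools_agent
from langchain_community.tools.tavily_search import TavilySearchResults
from langchain_openai import ChatOpenAI

prompt = hub.pull("hwchase17/openai-tools-agent")  # a public prompt for this agent style
llm = ChatOpenAI(model="gpt-3.5-turbo-1106", temperature=0)
tools = [TavilySearchResults(max_results=1)]

# Binding tools via the newer `tools` API lets the model request several
# tool invocations in a single turn, which the `functions` API cannot do.
agent = create_openai_tools_agent(llm, tools, prompt)
agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True)
agent_executor.invoke({"input": "what is LangChain?"})
```
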
@@ -106,17 +106,17 @@ class Runnable(Generic[Input, Output], ABC):
     Key Methods
     ===========
 
-    * invoke/ainvoke: Transforms a single input into an output.
-    * batch/abatch: Efficiently transforms multiple inputs into outputs.
-    * stream/astream: Streams output from a single input as it's produced.
-    * astream_log: Streams output and selected intermediate results from an input.
+    - **invoke/ainvoke**: Transforms a single input into an output.
+    - **batch/abatch**: Efficiently transforms multiple inputs into outputs.
+    - **stream/astream**: Streams output from a single input as it's produced.
+    - **astream_log**: Streams output and selected intermediate results from an input.
 
     Built-in optimizations:
 
-    * Batch: By default, batch runs invoke() in parallel using a thread pool executor.
+    - **Batch**: By default, batch runs invoke() in parallel using a thread pool executor.
      Override to optimize batching.
 
-    * Async: Methods with "a" suffix are asynchronous. By default, they execute
+    - **Async**: Methods with "a" suffix are asynchronous. By default, they execute
      the sync counterpart using asyncio's thread pool.
      Override for native async.
 
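
To make the four key methods above concrete, a small self-contained sketch over a trivial runnable; the lambda and the printed values are ours, purely illustrative.

```python
# Sketch: exercising the key Runnable methods on a trivial RunnableLambda.
import asyncio

from langchain_core.runnables import RunnableLambda

runnable = RunnableLambda(lambda x: x + 1)

print(runnable.invoke(1))         # 2 -- one input, one output
print(runnable.batch([1, 2, 3]))  # [2, 3, 4] -- invoke() fanned out over a thread pool
for chunk in runnable.stream(1):
    print(chunk)                  # 2 -- a single chunk for this non-streaming lambda

# The "a"-prefixed variants are async; by default they run the sync
# counterpart on asyncio's thread pool, as the docstring notes.
print(asyncio.run(runnable.ainvoke(1)))  # 2
```
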
@@ -228,7 +228,7 @@ class Runnable(Generic[Input, Output], ABC):
         )
 
     For a UI (and much more) checkout LangSmith: https://docs.smith.langchain.com/
-    """
+    """  # noqa: E501
 
     name: Optional[str] = None
     """The name of the runnable. Used for debugging and tracing."""
@@ -707,78 +707,97 @@
     ) -> AsyncIterator[StreamEvent]:
         """Generate a stream of events.
 
-        Use to create an iterator ove StreamEvents that provide real-time information
+        Use to create an iterator over StreamEvents that provide real-time information
         about the progress of the runnable, including StreamEvents from intermediate
         results.
 
         A StreamEvent is a dictionary with the following schema:
 
-        * ``event``: str - Event names are of the
+        - ``event``: **str** - Event names are of the
             format: on_[runnable_type]_(start|stream|end).
-        * ``name``: str - The name of the runnable that generated the event.
-        * ``run_id``: str - randomly generated ID associated with the given execution of
+        - ``name``: **str** - The name of the runnable that generated the event.
+        - ``run_id``: **str** - randomly generated ID associated with the given execution of
             the runnable that emitted the event.
             A child runnable that gets invoked as part of the execution of a
             parent runnable is assigned its own unique ID.
-        * ``tags``: Optional[List[str]] - The tags of the runnable that generated
+        - ``tags``: **Optional[List[str]]** - The tags of the runnable that generated
             the event.
-        * ``metadata``: Optional[Dict[str, Any]] - The metadata of the runnable
+        - ``metadata``: **Optional[Dict[str, Any]]** - The metadata of the runnable
            that generated the event.
-        * ``data``: Dict[str, Any]
+        - ``data``: **Dict[str, Any]**
 
 
         Below is a table that illustrates some events that might be emitted by various
         chains. Metadata fields have been omitted from the table for brevity.
         Chain definitions have been included after the table.
 
+        +----------------------+------------------+---------------------------------+-----------------------------------------------+-------------------------------------------------+
         | event                | name             | chunk                           | input                                         | output                                          |
-        |----------------------|------------------|---------------------------------|-----------------------------------------------|-------------------------------------------------|
+        +======================+==================+=================================+===============================================+=================================================+
         | on_chat_model_start  | [model name]     |                                 | {"messages": [[SystemMessage, HumanMessage]]} |                                                 |
+        +----------------------+------------------+---------------------------------+-----------------------------------------------+-------------------------------------------------+
         | on_chat_model_stream | [model name]     | AIMessageChunk(content="hello") |                                               |                                                 |
+        +----------------------+------------------+---------------------------------+-----------------------------------------------+-------------------------------------------------+
         | on_chat_model_end    | [model name]     |                                 | {"messages": [[SystemMessage, HumanMessage]]} | {"generations": [...], "llm_output": None, ...} |
+        +----------------------+------------------+---------------------------------+-----------------------------------------------+-------------------------------------------------+
         | on_llm_start         | [model name]     |                                 | {'input': 'hello'}                            |                                                 |
+        +----------------------+------------------+---------------------------------+-----------------------------------------------+-------------------------------------------------+
         | on_llm_stream        | [model name]     | 'Hello'                         |                                               |                                                 |
-        | on_llm_end           | [model name]     |                                 | 'Hello human!'                                |
+        +----------------------+------------------+---------------------------------+-----------------------------------------------+-------------------------------------------------+
+        | on_llm_end           | [model name]     |                                 | 'Hello human!'                                |                                                 |
+        +----------------------+------------------+---------------------------------+-----------------------------------------------+-------------------------------------------------+
         | on_chain_start       | format_docs      |                                 |                                               |                                                 |
+        +----------------------+------------------+---------------------------------+-----------------------------------------------+-------------------------------------------------+
         | on_chain_stream      | format_docs      | "hello world!, goodbye world!"  |                                               |                                                 |
+        +----------------------+------------------+---------------------------------+-----------------------------------------------+-------------------------------------------------+
         | on_chain_end         | format_docs      |                                 | [Document(...)]                               | "hello world!, goodbye world!"                  |
+        +----------------------+------------------+---------------------------------+-----------------------------------------------+-------------------------------------------------+
         | on_tool_start        | some_tool        |                                 | {"x": 1, "y": "2"}                            |                                                 |
+        +----------------------+------------------+---------------------------------+-----------------------------------------------+-------------------------------------------------+
         | on_tool_stream       | some_tool        | {"x": 1, "y": "2"}              |                                               |                                                 |
+        +----------------------+------------------+---------------------------------+-----------------------------------------------+-------------------------------------------------+
         | on_tool_end          | some_tool        |                                 |                                               | {"x": 1, "y": "2"}                              |
+        +----------------------+------------------+---------------------------------+-----------------------------------------------+-------------------------------------------------+
         | on_retriever_start   | [retriever name] |                                 | {"query": "hello"}                            |                                                 |
+        +----------------------+------------------+---------------------------------+-----------------------------------------------+-------------------------------------------------+
         | on_retriever_chunk   | [retriever name] | {documents: [...]}              |                                               |                                                 |
+        +----------------------+------------------+---------------------------------+-----------------------------------------------+-------------------------------------------------+
         | on_retriever_end     | [retriever name] |                                 | {"query": "hello"}                            | {documents: [...]}                              |
+        +----------------------+------------------+---------------------------------+-----------------------------------------------+-------------------------------------------------+
         | on_prompt_start      | [template_name]  |                                 | {"question": "hello"}                         |                                                 |
+        +----------------------+------------------+---------------------------------+-----------------------------------------------+-------------------------------------------------+
         | on_prompt_end        | [template_name]  |                                 | {"question": "hello"}                         | ChatPromptValue(messages: [SystemMessage, ...]) |
+        +----------------------+------------------+---------------------------------+-----------------------------------------------+-------------------------------------------------+
 
         Here are declarations associated with the events shown above:
 
         `format_docs`:
 
-        ```python
-        def format_docs(docs: List[Document]) -> str:
-            '''Format the docs.'''
-            return ", ".join([doc.page_content for doc in docs])
-
-        format_docs = RunnableLambda(format_docs)
-        ```
+        .. code-block:: python
+
+            def format_docs(docs: List[Document]) -> str:
+                '''Format the docs.'''
+                return ", ".join([doc.page_content for doc in docs])
+
+            format_docs = RunnableLambda(format_docs)
 
         `some_tool`:
 
-        ```python
-        @tool
-        def some_tool(x: int, y: str) -> dict:
-            '''Some_tool.'''
-            return {"x": x, "y": y}
-        ```
+        .. code-block:: python
+
+            @tool
+            def some_tool(x: int, y: str) -> dict:
+                '''Some_tool.'''
+                return {"x": x, "y": y}
 
         `prompt`:
 
-        ```python
-        template = ChatPromptTemplate.from_messages(
-            [("system", "You are Cat Agent 007"), ("human", "{question}")]
-        ).with_config({"run_name": "my_template", "tags": ["my_template"]})
-        ```
+        .. code-block:: python
+
+            template = ChatPromptTemplate.from_messages(
+                [("system", "You are Cat Agent 007"), ("human", "{question}")]
+            ).with_config({"run_name": "my_template", "tags": ["my_template"]})
 
 
         Example:
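
The docstring's own Example block is truncated in this view. As a stand-in, a minimal consumption sketch for `astream_events`, assuming the `version="v1"` events API of this era; the two-step chain, its run name, and the end-event filter are illustrative.

```python
# Sketch: streaming events from a small chain and picking out the *_end events.
import asyncio

from langchain_core.runnables import RunnableLambda


async def main() -> None:
    # Two chained lambdas, so child runnables emit their own start/stream/end events.
    chain = (
        RunnableLambda(lambda x: x + 1) | RunnableLambda(lambda x: x * 2)
    ).with_config({"run_name": "add_then_double"})

    async for event in chain.astream_events(1, version="v1"):
        # Each event is a dict matching the schema documented above.
        if event["event"].endswith("_end"):
            print(event["name"], event["data"].get("output"))


asyncio.run(main())
```
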