mirror of
https://github.com/hwchase17/langchain
synced 2024-11-10 01:10:59 +00:00
a2cc9b55ba
Causes an issue for this code:

```python
from langchain.chat_models.openai import ChatOpenAI
from langchain.output_parsers.openai_tools import JsonOutputToolsParser
from langchain.schema import SystemMessage

prompt = SystemMessage(content="You are a nice assistant.") + "{question}"
llm = ChatOpenAI(
    model_kwargs={
        "tools": [
            {
                "type": "function",
                "function": {
                    "name": "web_search",
                    "description": "Searches the web for the answer to the question.",
                    "parameters": {
                        "type": "object",
                        "properties": {
                            "query": {
                                "type": "string",
                                "description": "The question to search for.",
                            },
                        },
                    },
                },
            }
        ],
    },
    streaming=True,
)
parser = JsonOutputToolsParser(first_tool_only=True)
llm_chain = prompt | llm | parser | (lambda x: x)

for chunk in llm_chain.stream({"question": "tell me more about turtles"}):
    print(chunk)

# message = llm_chain.invoke({"question": "tell me more about turtles"})
# print(message)
```

Instead, by definition, we'll assume that RunnableLambdas consume the entire stream, and that if the stream isn't addable, then the last message of the stream is the one in the usable format.

---

If users want to use addable dicts, they can wrap the dict in an AddableDict class.

---

Likely, we need to follow up with the same change in other places in the code that do this upgrade.
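For illustration, here is a minimal sketch of the "addable dict" merging behavior mentioned above: a dict subclass whose `+` operator merges streamed chunks key by key, concatenating values that themselves support `+`. This is a standalone reimplementation for clarity, not the library's `AddableDict` class itself.

```python
# Minimal sketch of an AddableDict-style class (illustrative, not the
# langchain_core implementation): "+" merges two chunks key by key.
class AddableDict(dict):
    def __add__(self, other):
        merged = AddableDict(self)
        for key, value in other.items():
            if key not in merged or merged[key] is None:
                # New key (or placeholder None): take the incoming value.
                merged[key] = value
            elif value is not None:
                try:
                    # Values that support "+" (str, list, ...) are concatenated.
                    merged[key] = merged[key] + value
                except TypeError:
                    # Non-addable values: the newer chunk wins.
                    merged[key] = value
        return merged


# Accumulating a stream of chunks reconstructs the full value.
chunks = [AddableDict({"answer": "tur"}), AddableDict({"answer": "tles"})]
total = chunks[0]
for chunk in chunks[1:]:
    total = total + chunk
print(total)  # {'answer': 'turtles'}
```

A plain dict emitted mid-stream has no such `+`, which is why a non-addable stream falls back to "last message wins" under the change described above.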