Mirror of https://github.com/hwchase17/langchain (synced 2024-11-11 19:11:02 +00:00), commit a50eabbd48
- Description: Modified the prompt created by the function `create_unstructured_prompt` (which is called for LLMs that do not support function calling) by adding conditional checks that verify whether restrictions on entity types and relationship types (`rel_types`) should be added to the prompt (see the first sketch after this description). If the user provides a sufficiently large text, the current prompt **may** fail to produce results in some LLMs. I first saw this issue when I implemented a custom LLM class that did not support function calling and used Gemini 1.5 Pro, but I was able to replicate it with OpenAI models.

Loading a sufficiently large text:

```python
from langchain_community.llms import Ollama
from langchain_openai import ChatOpenAI, OpenAI
from langchain_core.prompts import PromptTemplate
import re
from langchain_experimental.graph_transformers import LLMGraphTransformer
from langchain_core.documents import Document

with open("texto-longo.txt", "r") as file:
    full_text = file.read()

partial_text = full_text[:4000]
documents = [Document(page_content=partial_text)]  # cropped to fit GPT 3.5 context window
```

and using the chat class (which has function calling):

```python
chat_openai = ChatOpenAI(model="gpt-3.5-turbo", model_kwargs={"seed": 42})
chat_gpt35_transformer = LLMGraphTransformer(llm=chat_openai)
graph_from_chat_gpt35 = chat_gpt35_transformer.convert_to_graph_documents(documents)
```

it works:

```
>>> print(graph_from_chat_gpt35[0].nodes)
[Node(id="Jesu, Joy of Man's Desiring", type='Music'), Node(id='Godel', type='Person'), Node(id='Johann Sebastian Bach', type='Person'), Node(id='clever way of encoding the complicated expressions as numbers', type='Concept')]
```

But if you try the non-chat LLM class (which does not support function calling):

```python
openai = OpenAI(
    model="gpt-3.5-turbo-instruct",
    max_tokens=1000,
)
gpt35_transformer = LLMGraphTransformer(llm=openai)
graph_from_gpt35 = gpt35_transformer.convert_to_graph_documents(documents)
```

it falls back to the prompt that has issues and sometimes produces no result at all:

```
>>> print(graph_from_gpt35[0].nodes)
[]
```

After implementing the changes, I was able to use both classes more consistently:

```
>>> chat_gpt35_transformer = LLMGraphTransformer(llm=chat_openai)
>>> graph_from_chat_gpt35 = chat_gpt35_transformer.convert_to_graph_documents(documents)
>>> print(graph_from_chat_gpt35[0].nodes)
[Node(id="Jesu, Joy Of Man'S Desiring", type='Music'), Node(id='Johann Sebastian Bach', type='Person'), Node(id='Godel', type='Person')]
>>> gpt35_transformer = LLMGraphTransformer(llm=openai)
>>> graph_from_gpt35 = gpt35_transformer.convert_to_graph_documents(documents)
>>> print(graph_from_gpt35[0].nodes)
[Node(id='I', type='Pronoun'), Node(id="JESU, JOY OF MAN'S DESIRING", type='Song'), Node(id='larger memory', type='Memory'), Node(id='this nice tree structure', type='Structure'), Node(id='how you can do it all with the numbers', type='Process'), Node(id='JOHANN SEBASTIAN BACH', type='Composer'), Node(id='type of structure', type='Characteristic'), Node(id='that', type='Pronoun'), Node(id='we', type='Pronoun'), Node(id='worry', type='Verb')]
```

The results are still a little inconsistent because the GPT-3.5 model may produce incomplete JSON due to the token limit, but that could be solved (or mitigated) by checking for complete JSON when parsing the output (see the second sketch below).
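For illustration, here is a minimal sketch of the kind of conditional construction described above. The helper name, the prompt wording, and the `node_labels` parameter name are hypothetical; only `rel_types` comes from the description, and this is not the actual code of `create_unstructured_prompt`:

```python
from typing import List, Optional


def build_type_instructions(
    node_labels: Optional[List[str]] = None,
    rel_types: Optional[List[str]] = None,
) -> str:
    """Build the type-restriction part of an extraction prompt (illustrative only)."""
    parts = [
        "Extract entities (nodes) and relationships from the text and return them as JSON."
    ]
    # Only mention node-type restrictions if the caller actually provided any.
    if node_labels:
        parts.append(f"Use only the following node types: {', '.join(node_labels)}.")
    # Likewise, only mention relationship-type restrictions when they were provided.
    if rel_types:
        parts.append(f"Use only the following relationship types: {', '.join(rel_types)}.")
    return "\n".join(parts)


# No restrictions given: the prompt stays unconstrained.
print(build_type_instructions())
# Restrictions given: the extra instructions appear only now.
print(build_type_instructions(node_labels=["Person", "Music"], rel_types=["COMPOSED"]))
```

In practice such restrictions come from the lists the user passes to `LLMGraphTransformer` (its `allowed_nodes` and `allowed_relationships` arguments); the point of the conditional checks is that, when no restrictions are given, the prompt presumably omits those sections entirely instead of including empty restriction clauses.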
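A second, equally generic sketch of the mitigation mentioned at the end: accept the model output only when it parses as complete JSON. `parse_complete_json` is a hypothetical helper, not part of this PR:

```python
import json
from typing import Any, Optional


def parse_complete_json(raw: str) -> Optional[Any]:
    """Return the parsed object only if the model emitted complete, valid JSON."""
    # Skip any preamble before the first '[' or '{' the model produced.
    start = min((i for i in (raw.find("["), raw.find("{")) if i != -1), default=-1)
    if start == -1:
        return None
    try:
        return json.loads(raw[start:])
    except json.JSONDecodeError:
        # Truncated output (e.g. the token limit was hit) fails to parse,
        # so the caller can retry or skip instead of using partial data.
        return None
```

A caller could then retry the extraction or drop the document when `None` comes back, rather than propagating a truncated node list.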
# 🦜️🧪 LangChain Experimental
This package holds experimental LangChain code, intended for research and experimental uses.
## Warning
Portions of the code in this package may be dangerous if not properly deployed in a sandboxed environment. Please be wary of deploying experimental code to production unless you've taken appropriate precautions and have already discussed it with your security team.
Some of the code here may be marked with security notices. However, given the exploratory and experimental nature of the code in this package, the lack of a security notice on a piece of code does not mean that the code in question does not require additional security considerations in order to be safe to use.