a50eabbd48
- Description: Modified the prompt created by the function `create_unstructured_prompt` (which is called for LLMs that do not support function calling) by adding conditional checks that verify whether restrictions on entity types and relationship types (rel_types) should be added to the prompt. If the user provides a sufficiently large text, the current prompt **may** fail to produce results with some LLMs. I first saw this issue when I implemented a custom LLM class that did not support function calling and used Gemini 1.5 Pro, but I was able to replicate it using OpenAI models.

Loading a sufficiently large text:

```python
from langchain_community.llms import Ollama
from langchain_openai import ChatOpenAI, OpenAI
from langchain_core.prompts import PromptTemplate
import re
from langchain_experimental.graph_transformers import LLMGraphTransformer
from langchain_core.documents import Document

with open("texto-longo.txt", "r") as file:
    full_text = file.read()
partial_text = full_text[:4000]  # cropped to fit GPT 3.5 context window

documents = [Document(page_content=partial_text)]
```

and using the chat class (which has function calling):

```python
chat_openai = ChatOpenAI(model="gpt-3.5-turbo", model_kwargs={"seed": 42})
chat_gpt35_transformer = LLMGraphTransformer(llm=chat_openai)
graph_from_chat_gpt35 = chat_gpt35_transformer.convert_to_graph_documents(documents)
```

it works:

```
>>> print(graph_from_chat_gpt35[0].nodes)
[Node(id="Jesu, Joy of Man's Desiring", type='Music'), Node(id='Godel', type='Person'), Node(id='Johann Sebastian Bach', type='Person'), Node(id='clever way of encoding the complicated expressions as numbers', type='Concept')]
```

But if you try to use the non-chat LLM class (which does not support function calling):

```python
openai = OpenAI(
    model="gpt-3.5-turbo-instruct",
    max_tokens=1000,
)
gpt35_transformer = LLMGraphTransformer(llm=openai)
graph_from_gpt35 = gpt35_transformer.convert_to_graph_documents(documents)
```

it uses the prompt that has issues and sometimes does not produce any result:

```
>>> print(graph_from_gpt35[0].nodes)
[]
```

After implementing the changes, I was able to use both classes more consistently:

```shell
>>> chat_gpt35_transformer = LLMGraphTransformer(llm=chat_openai)
>>> graph_from_chat_gpt35 = chat_gpt35_transformer.convert_to_graph_documents(documents)
>>> print(graph_from_chat_gpt35[0].nodes)
[Node(id="Jesu, Joy Of Man'S Desiring", type='Music'), Node(id='Johann Sebastian Bach', type='Person'), Node(id='Godel', type='Person')]
>>> gpt35_transformer = LLMGraphTransformer(llm=openai)
>>> graph_from_gpt35 = gpt35_transformer.convert_to_graph_documents(documents)
>>> print(graph_from_gpt35[0].nodes)
[Node(id='I', type='Pronoun'), Node(id="JESU, JOY OF MAN'S DESIRING", type='Song'), Node(id='larger memory', type='Memory'), Node(id='this nice tree structure', type='Structure'), Node(id='how you can do it all with the numbers', type='Process'), Node(id='JOHANN SEBASTIAN BACH', type='Composer'), Node(id='type of structure', type='Characteristic'), Node(id='that', type='Pronoun'), Node(id='we', type='Pronoun'), Node(id='worry', type='Verb')]
```

The results are still a little inconsistent because the GPT 3.5 model may produce incomplete JSON due to the token limit, but that could be solved (or mitigated) by checking for complete JSON when parsing it.
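For illustration only, a minimal sketch of the conditional-check idea described above; the function name `build_restriction_lines` and the exact prompt wording are hypothetical and not the actual code in `langchain_experimental`:

```python
# Hypothetical sketch: restriction lines are appended to the unstructured
# prompt only when the caller actually supplies node labels or rel_types.
from typing import List, Optional

def build_restriction_lines(
    node_labels: Optional[List[str]] = None,
    rel_types: Optional[List[str]] = None,
) -> List[str]:
    lines = []
    if node_labels:  # only constrain entity types if the user asked for it
        lines.append(f"Use only the following node types: {node_labels}.")
    if rel_types:  # likewise for relationship types
        lines.append(f"Use only the following relationship types: {rel_types}.")
    return lines
```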
🦜️🔗 LangChain
⚡ Build context-aware reasoning applications ⚡
Looking for the JS/TS library? Check out LangChain.js.
To help you ship LangChain apps to production faster, check out LangSmith. LangSmith is a unified developer platform for building, testing, and monitoring LLM applications. Fill out this form to speak with our sales team.
Quick Install
With pip:
pip install langchain
With conda:
conda install langchain -c conda-forge
🤔 What is LangChain?
LangChain is a framework for developing applications powered by large language models (LLMs).
For these applications, LangChain simplifies the entire application lifecycle:
- Open-source libraries: Build your applications using LangChain's open-source building blocks, components, and third-party integrations. Use LangGraph to build stateful agents with first-class streaming and human-in-the-loop support.
- Productionization: Inspect, monitor, and evaluate your apps with LangSmith so that you can constantly optimize and deploy with confidence.
- Deployment: Turn your LangGraph applications into production-ready APIs and Assistants with LangGraph Cloud.
Open-source libraries
- `langchain-core`: Base abstractions and LangChain Expression Language.
- `langchain-community`: Third-party integrations.
  - Some integrations have been further split into partner packages that only rely on `langchain-core`. Examples include `langchain_openai` and `langchain_anthropic`.
- `langchain`: Chains, agents, and retrieval strategies that make up an application's cognitive architecture.
- `LangGraph`: A library for building robust and stateful multi-actor applications with LLMs by modeling steps as edges and nodes in a graph. Integrates smoothly with LangChain, but can be used without it.
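As a quick illustration of how these packages fit together (the file path and model name below are placeholders; `langchain-community` and `langchain-openai` are assumed to be installed):

```python
# Core abstractions live in langchain-core, third-party integrations in
# langchain-community, and heavily used providers in partner packages such
# as langchain-openai, which depends only on langchain-core.
from langchain_community.document_loaders import TextLoader
from langchain_openai import ChatOpenAI

docs = TextLoader("notes.txt").load()      # example file path
llm = ChatOpenAI(model="gpt-3.5-turbo")    # example partner-package model
summary = llm.invoke("Summarize: " + docs[0].page_content)
```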
Productionization:
- LangSmith: A developer platform that lets you debug, test, evaluate, and monitor chains built on any LLM framework and seamlessly integrates with LangChain.
Deployment:
- LangGraph Cloud: Turn your LangGraph applications into production-ready APIs and Assistants.
🧱 What can you build with LangChain?
❓ Question answering with RAG
- Documentation
- End-to-end Example: Chat LangChain and repo
🧱 Extracting structured output
- Documentation
- End-to-end Example: SQL Llama2 Template
🤖 Chatbots
- Documentation
- End-to-end Example: Web LangChain (web researcher chatbot) and repo
And much more! Head to the Tutorials section of the docs for more.
🚀 How does LangChain help?
The main value props of the LangChain libraries are:
- Components: composable building blocks, tools, and integrations for working with language models. Components are modular and easy to use, whether you are using the rest of the LangChain framework or not.
- Off-the-shelf chains: built-in assemblages of components for accomplishing higher-level tasks.
Off-the-shelf chains make it easy to get started. Components make it easy to customize existing chains and build new ones.
LangChain Expression Language (LCEL)
LCEL is the foundation of many of LangChain's components, and is a declarative way to compose chains. LCEL was designed from day 1 to support putting prototypes in production, with no code changes, from the simplest “prompt + LLM” chain to the most complex chains.
- Overview: LCEL and its benefits
- Interface: The standard Runnable interface for LCEL objects
- Primitives: More on the primitives LCEL includes
- Cheatsheet: Quick overview of the most common usage patterns
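For example, a minimal LCEL chain might look like the following sketch (the model choice is just an example; any chat model integration works):

```python
# Prompt | model | output parser, composed declaratively and invoked like
# any Runnable (.invoke, .stream, and .batch all work).
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser
from langchain_openai import ChatOpenAI

prompt = ChatPromptTemplate.from_template("Tell me a short joke about {topic}")
model = ChatOpenAI(model="gpt-3.5-turbo")  # example model
parser = StrOutputParser()

chain = prompt | model | parser
print(chain.invoke({"topic": "bears"}))
```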
Components
Components fall into the following modules:
📃 Model I/O
This includes prompt management, prompt optimization, a generic interface for chat models and LLMs, and common utilities for working with model outputs.
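A small sketch of the Model I/O pieces working together (provider and model name are illustrative):

```python
# A prompt template renders messages, a chat model consumes them, and the
# response comes back as a standard AIMessage regardless of provider.
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI

prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a concise assistant."),
    ("human", "{question}"),
])
messages = prompt.format_messages(question="What is LangChain?")

llm = ChatOpenAI(model="gpt-3.5-turbo")  # example model
response = llm.invoke(messages)          # returns an AIMessage
print(response.content)
```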
📚 Retrieval
Retrieval Augmented Generation involves loading data from a variety of sources, preparing it, then searching over (a.k.a. retrieving from) it for use in the generation step.
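A minimal retrieval sketch, assuming `faiss-cpu`, `langchain-community`, and `langchain-openai` are installed; the source file, chunk size, and embedding provider are placeholders:

```python
# Load documents, split them into chunks, index them in a vector store,
# then retrieve the chunks most relevant to a query.
from langchain_community.document_loaders import TextLoader
from langchain_text_splitters import RecursiveCharacterTextSplitter
from langchain_community.vectorstores import FAISS
from langchain_openai import OpenAIEmbeddings

docs = TextLoader("my_notes.txt").load()  # example source file
chunks = RecursiveCharacterTextSplitter(chunk_size=500).split_documents(docs)

vectorstore = FAISS.from_documents(chunks, OpenAIEmbeddings())
retriever = vectorstore.as_retriever()

relevant = retriever.invoke("What does the document say about pricing?")
```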
🤖 Agents
Agents allow an LLM autonomy over how a task is accomplished. Agents make decisions about which Actions to take, then take that Action, observe the result, and repeat until the task is complete. LangChain provides a standard interface for agents, along with LangGraph for building custom agents.
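A hedged sketch of that loop using LangGraph's prebuilt ReAct agent; the tool and model choices are illustrative, not a prescribed setup:

```python
# The model decides which tool to call, the tool runs, the model observes
# the result, and the cycle repeats until it can answer.
from langchain_core.tools import tool
from langchain_openai import ChatOpenAI
from langgraph.prebuilt import create_react_agent

@tool
def word_count(text: str) -> int:
    """Count the words in a piece of text."""
    return len(text.split())

agent = create_react_agent(ChatOpenAI(model="gpt-4o-mini"), [word_count])  # example model
result = agent.invoke(
    {"messages": [("human", "How many words are in 'to be or not to be'?")]}
)
```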
📖 Documentation
Please see here for full documentation, which includes:
- Introduction: Overview of the framework and the structure of the docs.
- Tutorials: If you're looking to build something specific or are more of a hands-on learner, check out our tutorials. This is the best place to get started.
- How-to guides: Answers to “How do I….?” type questions. These guides are goal-oriented and concrete; they're meant to help you complete a specific task.
- Conceptual guide: Conceptual explanations of the key parts of the framework.
- API Reference: Thorough documentation of every class and method.
🌐 Ecosystem
- 🦜🛠️ LangSmith: Trace and evaluate your language model applications and intelligent agents to help you move from prototype to production.
- 🦜🕸️ LangGraph: Create stateful, multi-actor applications with LLMs. Integrates smoothly with LangChain, but can be used without it.
- 🦜🏓 LangServe: Deploy LangChain runnables and chains as REST APIs.
💁 Contributing
As an open-source project in a rapidly developing field, we are extremely open to contributions, whether it be in the form of a new feature, improved infrastructure, or better documentation.
For detailed information on how to contribute, see here.