# LangChain Decorators ✨

LangChain Decorators is a layer on top of LangChain that provides syntactic sugar 🍭 for writing custom LangChain prompts and chains.

For feedback, issues, or contributions, please raise an issue here:
[ju-bezdek/langchain-decorators](https://github.com/ju-bezdek/langchain-decorators)

Main principles and benefits:

- more `pythonic` way of writing code
- write multiline prompts that won't break your code flow with indentation
- making use of IDE built-in support for **hinting**, **type checking** and **popup with docs** to quickly peek in the function to see the prompt, parameters it consumes, etc.
- leverage all the power of the 🦜🔗 LangChain ecosystem
- adding support for **optional parameters**
- easily share parameters between the prompts by binding them to one class

Here is a simple example of code written with **LangChain Decorators ✨**:

``` python

@llm_prompt
def write_me_short_post(topic:str, platform:str="twitter", audience:str = "developers")->str:
    """
    Write me a short header for my post about {topic} for {platform} platform.
    It should be for {audience} audience.
    (Max 15 words)
    """
    return

# run it naturally
write_me_short_post(topic="starwars")
# or
write_me_short_post(topic="starwars", platform="reddit")
```

# Quick start

## Installation

```bash
pip install langchain_decorators
```

## Examples

A good way to start is to review the examples here:
- [jupyter notebook](https://github.com/ju-bezdek/langchain-decorators/blob/main/example_notebook.ipynb)
- [colab notebook](https://colab.research.google.com/drive/1no-8WfeP6JaLD9yUtkPgym6x0G9ZYZOG#scrollTo=N4cf__D0E2Yk)

# Defining other parameters

Here we are just marking a function as a prompt with the `llm_prompt` decorator, effectively turning it into an LLMChain, instead of building and running the chain ourselves.

A standard LLMChain takes many more init parameters than just `input_variables` and `prompt`... here, this implementation detail is hidden by the decorator.
Here is how it works:

1. Using **Global settings**:

``` python
# define global settings for all prompts (if not set, ChatGPT is the current default)
from langchain.chat_models import ChatOpenAI
from langchain_decorators import GlobalSettings

GlobalSettings.define_settings(
    default_llm=ChatOpenAI(temperature=0.0),  # this is the default... you can change it here globally
    default_streaming_llm=ChatOpenAI(temperature=0.0, streaming=True),  # this is the default... will be used for streaming
)
```

2. Using predefined **prompt types**:

``` python
# You can change the default prompt types
from langchain.chat_models import ChatOpenAI
from langchain_decorators import PromptTypes, PromptTypeSettings, llm_prompt

PromptTypes.AGENT_REASONING.llm = ChatOpenAI()

# Or you can just define your own ones:
class MyCustomPromptTypes(PromptTypes):
    GPT4 = PromptTypeSettings(llm=ChatOpenAI(model="gpt-4"))

@llm_prompt(prompt_type=MyCustomPromptTypes.GPT4)
def write_a_complicated_code(app_idea:str)->str:
    ...
```

3. Define the settings **directly in the decorator**:

``` python
from langchain.llms import OpenAI

@llm_prompt(
    llm=OpenAI(temperature=0.7),
    stop_tokens=["\nObservation"],
    ...
)
def creative_writer(book_title:str)->str:
    ...
```

## Passing a memory and/or callbacks

To pass any of these, just declare them in the function (or use kwargs to pass anything):

```python

@llm_prompt()
async def write_me_short_post(topic:str, platform:str="twitter", audience:str="developers", memory:SimpleMemory = None):
    """
    {history_key}
    Write me a short header for my post about {topic} for {platform} platform.
    It should be for {audience} audience.
    (Max 15 words)
    """
    pass

await write_me_short_post(topic="old movies")
```
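
As a hedged sketch (assuming langchain's `SimpleMemory` and that the decorator feeds the memory's stored values into the matching `{history_key}` placeholder — an assumption, not verified against the library), passing a memory explicitly might look like this:

```python
# hedged usage sketch: SimpleMemory is langchain's read-only memory;
# we assume the decorator injects its values into the docstring placeholder
from langchain.memory import SimpleMemory

memory = SimpleMemory(memories={"history_key": "The user mostly posts about sci-fi."})
await write_me_short_post(topic="old movies", memory=memory)
```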

# Simplified streaming

If we want to leverage streaming:
- we need to define the prompt as an async function
- turn on streaming in the decorator, or define a PromptType with streaming on
- capture the stream using StreamingContext

This way we just mark which prompt should be streamed, without needing to tinker with which LLM to use or passing around and wiring a streaming handler into a particular part of our chain... just turn streaming on/off on the prompt/prompt type...

The streaming will happen only if we call it in a streaming context... there we can define a simple function to handle the stream:

``` python
# this code example is complete and should run as it is

from langchain_decorators import StreamingContext, llm_prompt

# this will mark the prompt for streaming (useful if we want to stream just some prompts in our app... but don't want to pass and distribute the callback handlers)
# note that only async functions can be streamed (you will get an error if it's not)
@llm_prompt(capture_stream=True)
async def write_me_short_post(topic:str, platform:str="twitter", audience:str = "developers"):
    """
    Write me a short header for my post about {topic} for {platform} platform.
    It should be for {audience} audience.
    (Max 15 words)
    """
    pass


# just an arbitrary function to demonstrate the streaming... will be some websockets code in the real world
tokens=[]
def capture_stream_func(new_token:str):
    tokens.append(new_token)

# if we want to capture the stream, we need to wrap the execution into StreamingContext...
# this will allow us to capture the stream even if the prompt call is hidden inside a higher level method
# only the prompts marked with capture_stream will be captured here
with StreamingContext(stream_to_stdout=True, callback=capture_stream_func):
    result = await write_me_short_post(topic="old movies")
    print("Stream finished ... we can distinguish tokens thanks to alternating colors")


print("\nWe've captured", len(tokens), "tokens🎉\n")
print("Here is the result:")
print(result)
```
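
A small note: the top-level `await` above assumes a notebook-style event loop (e.g. Jupyter/Colab). In a plain script, a hedged way to run the same thing is to wrap it with standard `asyncio` (nothing library-specific here):

```python
import asyncio

async def main():
    # same streaming context as above, just inside an async entry point
    with StreamingContext(stream_to_stdout=True, callback=capture_stream_func):
        result = await write_me_short_post(topic="old movies")
    print(result)

asyncio.run(main())
```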

# Prompt declarations

By default, the prompt is the whole function docstring, unless you mark your prompt explicitly (see below).

## Documenting your prompt

We can specify which part of our docs is the prompt definition, by specifying a code block with the **<prompt>** language tag:

``` python
@llm_prompt
def write_me_short_post(topic:str, platform:str="twitter", audience:str = "developers"):
    """
    Here is a good way to write a prompt as part of a function docstring, with additional documentation for devs.

    It needs to be a code block, marked as a `<prompt>` language
    ```<prompt>
    Write me a short header for my post about {topic} for {platform} platform.
    It should be for {audience} audience.
    (Max 15 words)
    ```

    Now only the code block above will be used as the prompt; the rest of the docstring will be used as a description for developers.
    (It also has the nice benefit that an IDE (like VS Code) will display the prompt properly, not trying to parse it as markdown and thus mis-rendering the new lines)
    """
    return
```

## Chat messages prompt

For chat models it is very useful to define the prompt as a set of message templates... here is how to do it:

``` python
@llm_prompt
def simulate_conversation(human_input:str, agent_role:str="a pirate"):
    """
    ## System message
    - note the `:system` suffix inside the <prompt:_role_> tag

    ```<prompt:system>
    You are a {agent_role} hacker. You must act like one.
    You reply always in code, using python or javascript code block...
    for example:

    ... do not reply with anything else.. just with code - respecting your role.
    ```

    # human message
    (we are using the real roles that are enforced by the LLM - GPT supports system, assistant, user)
    ``` <prompt:user>
    Hello, who are you
    ```
    a reply:

    ``` <prompt:assistant>
    \``` python <<- escaping the inner code block with \ so that it stays part of the prompt
    def hello():
        print("Argh... hello you pesky pirate")
    \```
    ```

    we can also add some history using a placeholder
    ```<prompt:placeholder>
    {history}
    ```
    ```<prompt:user>
    {human_input}
    ```

    Now only the code blocks above will be used as the prompt; the rest of the docstring will be used as a description for developers.
    (It also has the nice benefit that an IDE (like VS Code) will display the prompt properly, not trying to parse it as markdown and thus mis-rendering the new lines)
    """
    pass
```

The roles here are model-native roles (assistant, user, system for ChatGPT).

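A hedged usage sketch (assuming the decorated function is called like any other function, and that the `{history}` placeholder renders empty when no memory is passed — both are assumptions, not verified behavior):

```python
# hypothetical call; the empty-history behavior is an assumption
reply = simulate_conversation(human_input="Hello, who are you?", agent_role="a pirate")
print(reply)
```
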
# Optional sections

- you can define whole sections of your prompt that should be optional
- if any input in the section is missing, the whole section won't be rendered

The syntax for this is as follows:

``` python
@llm_prompt
def prompt_with_optional_partials():
    """
    this text will be rendered always, but

    {? anything inside this block will be rendered only if all the {value}s parameters are not empty (None | "") ?}

    you can also place it in between the words
    this too will be rendered{? , but
    this block will be rendered only if {this_value} and {this_value}
    is not empty?} !
    """
```
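
As a hedged, illustrative sketch of the same `{? ... ?}` syntax with real parameters (the function and parameter names below are made up for this example):

```python
# hypothetical example: the optional sentence is dropped when `style` is None
@llm_prompt
def summarize(text:str, style:str=None)->str:
    """
    Summarize the following text:
    {text}
    {? Write the summary in a {style} style. ?}
    """
    pass

summarize(text="...")                    # the style sentence is not rendered
summarize(text="...", style="playful")   # the style sentence is rendered
```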

# Output parsers

- the llm_prompt decorator natively tries to detect the best output parser based on the output type (if not set, it returns the raw string)
- list, dict and pydantic outputs are also supported natively (automatically)

``` python
# this code example is complete and should run as it is

from langchain_decorators import llm_prompt

@llm_prompt
def write_name_suggestions(company_business:str, count:int)->list:
    """ Write me {count} good name suggestions for company that {company_business}
    """
    pass

write_name_suggestions(company_business="sells cookies", count=5)
```
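
Given the bullet points above, the hedged expectation is that this call returns an already-parsed Python list rather than the raw completion (the exact parsing behavior belongs to the library and is not verified here):

```python
# assumed behavior: the `-> list` annotation makes the decorator parse
# the completion into a Python list instead of returning a raw string
names = write_name_suggestions(company_business="sells cookies", count=5)
for name in names:
    print("-", name)
```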

## More complex structures

For dict / pydantic outputs you need to specify the formatting instructions...
This can be tedious, which is why you can let the output parser generate the instructions for you, based on the model (pydantic):

``` python
from langchain_decorators import llm_prompt
from pydantic import BaseModel, Field


class TheOutputStructureWeExpect(BaseModel):
    name:str = Field(description="The name of the company")
    headline:str = Field(description="The description of the company (for landing page)")
    employees:list[str] = Field(description="5-8 fake employee names with their positions")


@llm_prompt()
def fake_company_generator(company_business:str)->TheOutputStructureWeExpect:
    """ Generate a fake company that {company_business}
    {FORMAT_INSTRUCTIONS}
    """
    return


company = fake_company_generator(company_business="sells cookies")

# print the result nicely formatted
print("Company name: ", company.name)
print("Company headline: ", company.headline)
print("Company employees: ", company.employees)
```

# Binding the prompt to an object

``` python
from pydantic import BaseModel
from langchain_decorators import llm_prompt

class AssistantPersonality(BaseModel):
    assistant_name:str
    assistant_role:str
    field:str = ""  # referenced as {field} in the prompts below

    @property
    def a_property(self):
        return "whatever"

    def hello_world(self, function_kwarg:str=None):
        """
        We can reference any {field} or {a_property} inside our prompt... and combine it with {function_kwarg} in the method
        """

    @llm_prompt
    def introduce_your_self(self)->str:
        """
        ``` <prompt:system>
        You are an assistant named {assistant_name}.
        Your role is to act as {assistant_role}
        ```
        ```<prompt:user>
        Introduce your self (in less than 20 words)
        ```
        """


personality = AssistantPersonality(assistant_name="John", assistant_role="a pirate")

print(personality.introduce_your_self(personality))
```

# More examples:

- these and a few more examples are also available in the [colab notebook here](https://colab.research.google.com/drive/1no-8WfeP6JaLD9yUtkPgym6x0G9ZYZOG#scrollTo=N4cf__D0E2Yk)
- including the [ReAct Agent re-implementation](https://colab.research.google.com/drive/1no-8WfeP6JaLD9yUtkPgym6x0G9ZYZOG#scrollTo=3bID5fryE2Yp) using purely langchain decorators