bump version to 0017 (#161)

harrison/reorg_smart_chains^2
Harrison Chase 2 years ago committed by GitHub
parent a19ad935b3
commit 243211a5ae

@@ -62,7 +62,7 @@
"source": [
"### PromptTemplate\n",
"\n",
"This is the most simple type of prompt - a string template that takes any number of input variables. The template should be formatted as a Python f-string, although we will support other formats (Jinja, Mako, etc) in the future. \n",
"This is the most simple type of prompt template, consisting of a string template that takes any number of input variables. The template should be formatted as a Python f-string, although we will support other formats (Jinja, Mako, etc) in the future. \n",
"\n",
"If you just want to use a hardcoded prompt template, you should use this implementation.\n",
"\n",

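The f-string substitution that the cell above describes can be sketched in plain Python; the template text and variable names below are illustrative, not taken from the notebook:

```python
# Sketch of the f-string-style substitution a string prompt template performs.
# The template and variables here are illustrative examples.
template = "Tell me a {adjective} joke about {content}."

# str.format performs the same substitution as a Python f-string.
result = template.format(adjective="funny", content="chickens")
print(result)  # -> Tell me a funny joke about chickens.
```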
@@ -10,19 +10,23 @@
"It is often preferable to store prompts not as Python code but as files. This can make it easy to share, store, and version prompts. This notebook covers how to do that in LangChain, walking through all the different types of prompts and the different serialization options.\n",
"\n",
"At a high level, the following design principles are applied to serialization:\n",
"\n",
"1. Both JSON and YAML are supported. We want to support serialization methods that are human readable on disk, and YAML and JSON are two of the most popular methods for that. Note that this rule applies to prompts. For other assets, like Examples, different serialization methods may be supported.\n",
"2. We support specifying everything in one file, or storing different components (templates, examples, etc) in different files and referencing them. For some cases, storing everything in one file makes the most sense, but for others it is preferable to split up some of the assets (long templates, large examples, reusable components). LangChain supports both."
"\n",
"2. We support specifying everything in one file, or storing different components (templates, examples, etc) in different files and referencing them. For some cases, storing everything in one file makes the most sense, but for others it is preferable to split up some of the assets (long templates, large examples, reusable components). LangChain supports both.\n",
"\n",
"There is also a single entry point to load prompts from disk, making it easy to load any type of prompt."
]
},
{
"cell_type": "code",
"execution_count": 13,
"execution_count": 1,
"id": "2c8d7587",
"metadata": {},
"outputs": [],
"source": [
"# All prompts are loading through the `load_prompt` function.\n",
"from langchain.prompts.loading import load_prompt"
"# All prompts are loaded through the `load_prompt` function.\n",
"from langchain.prompts import load_prompt"
]
},
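The single-file JSON serialization described above can be sketched with the standard library; the `_type`/`input_variables`/`template` keys mirror the format the notebook walks through, and `load_prompt_from_config` is a hypothetical stand-in for LangChain's `load_prompt` entry point, not its actual implementation:

```python
import json

# A serialized prompt as it might appear on disk in the single-file layout.
# Keys mirror the notebook's format; values here are illustrative.
config = {
    "_type": "prompt",
    "input_variables": ["product"],
    "template": "What is a good name for a company that makes {product}?",
}

def load_prompt_from_config(raw: str):
    """Hypothetical stand-in for `load_prompt`: parse JSON, return a formatter."""
    data = json.loads(raw)
    template = data["template"]
    return lambda **kwargs: template.format(**kwargs)

prompt = load_prompt_from_config(json.dumps(config))
print(prompt(product="colorful socks"))
# -> What is a good name for a company that makes colorful socks?
```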
{
@@ -46,7 +50,7 @@
},
{
"cell_type": "code",
"execution_count": 15,
"execution_count": 2,
"id": "2d6e5117",
"metadata": {},
"outputs": [
@@ -67,7 +71,7 @@
},
{
"cell_type": "code",
"execution_count": 28,
"execution_count": 3,
"id": "4f4ca686",
"metadata": {},
"outputs": [
@@ -95,7 +99,7 @@
},
{
"cell_type": "code",
"execution_count": 16,
"execution_count": 4,
"id": "510def23",
"metadata": {},
"outputs": [
@@ -125,7 +129,7 @@
},
{
"cell_type": "code",
"execution_count": 17,
"execution_count": 5,
"id": "5547760d",
"metadata": {},
"outputs": [
@@ -143,7 +147,7 @@
},
{
"cell_type": "code",
"execution_count": 18,
"execution_count": 6,
"id": "9cb13ac5",
"metadata": {},
"outputs": [
@@ -164,7 +168,7 @@
},
{
"cell_type": "code",
"execution_count": 36,
"execution_count": 7,
"id": "762cb4bf",
"metadata": {},
"outputs": [
@@ -202,7 +206,7 @@
},
{
"cell_type": "code",
"execution_count": 24,
"execution_count": 8,
"id": "b21f5b95",
"metadata": {},
"outputs": [
@@ -232,7 +236,7 @@
},
{
"cell_type": "code",
"execution_count": 19,
"execution_count": 9,
"id": "e2bec0fc",
"metadata": {},
"outputs": [
@@ -263,7 +267,7 @@
},
{
"cell_type": "code",
"execution_count": 6,
"execution_count": 10,
"id": "98c8f356",
"metadata": {},
"outputs": [
@@ -300,7 +304,7 @@
},
{
"cell_type": "code",
"execution_count": 20,
"execution_count": 11,
"id": "9d996a86",
"metadata": {},
"outputs": [
@@ -328,7 +332,7 @@
},
{
"cell_type": "code",
"execution_count": 2,
"execution_count": 12,
"id": "dd2c10bb",
"metadata": {},
"outputs": [
@@ -365,7 +369,7 @@
},
{
"cell_type": "code",
"execution_count": 21,
"execution_count": 13,
"id": "6cd781ef",
"metadata": {},
"outputs": [
@@ -396,7 +400,7 @@
},
{
"cell_type": "code",
"execution_count": 4,
"execution_count": 14,
"id": "533ab8a7",
"metadata": {},
"outputs": [
@@ -433,7 +437,7 @@
},
{
"cell_type": "code",
"execution_count": 22,
"execution_count": 15,
"id": "0b6dd7b8",
"metadata": {},
"outputs": [
@@ -454,7 +458,7 @@
},
{
"cell_type": "code",
"execution_count": 23,
"execution_count": 16,
"id": "76a1065d",
"metadata": {},
"outputs": [
@@ -479,7 +483,7 @@
},
{
"cell_type": "code",
"execution_count": 2,
"execution_count": 17,
"id": "744d275d",
"metadata": {},
"outputs": [

@@ -1 +1 @@
0.0.16
0.0.17

@@ -18,7 +18,12 @@ from langchain.chains import (
)
from langchain.docstore import Wikipedia
from langchain.llms import Cohere, HuggingFaceHub, OpenAI
from langchain.prompts import BasePromptTemplate, PromptTemplate
from langchain.prompts import (
BasePromptTemplate,
FewShotPromptTemplate,
Prompt,
PromptTemplate,
)
from langchain.sql_database import SQLDatabase
from langchain.vectorstores import FAISS, ElasticVectorSearch
@@ -31,7 +36,8 @@ __all__ = [
"Cohere",
"OpenAI",
"BasePromptTemplate",
"DynamicPrompt",
"Prompt",
"FewShotPromptTemplate",
"PromptTemplate",
"ReActChain",
"Wikipedia",

@@ -2,11 +2,12 @@
from langchain.prompts.base import BasePromptTemplate
from langchain.prompts.few_shot import FewShotPromptTemplate
from langchain.prompts.loading import load_prompt
from langchain.prompts.prompt import PromptTemplate
from langchain.prompts.prompt import Prompt, PromptTemplate
__all__ = [
"BasePromptTemplate",
"load_prompt",
"PromptTemplate",
"FewShotPromptTemplate",
"Prompt",
]
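The `__init__.py` hunks above keep exporting the old `Prompt` name alongside the renamed `PromptTemplate`. A common way to keep existing imports working is a simple alias; the class body below is an illustrative stub, not LangChain's actual definition:

```python
# Backwards-compatibility pattern suggested by the diff: after renaming a
# class, re-export the old name as an alias. Stub class for illustration only.
class PromptTemplate:
    def __init__(self, input_variables, template):
        self.input_variables = input_variables
        self.template = template

# Old code that does `from langchain.prompts import Prompt` keeps working.
Prompt = PromptTemplate

__all__ = ["PromptTemplate", "Prompt"]

assert Prompt is PromptTemplate
```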
