langchain/docs/modules
jerwelborn 55efbb8a7e
pydantic/json parsing (#1722)
```python
from typing import List

from pydantic import BaseModel, Field

from langchain.llms import OpenAI
from langchain.output_parsers import PydanticOutputParser
from langchain.prompts import PromptTemplate


# Define the desired output structure as a pydantic model.
class Joke(BaseModel):
    setup: str = Field(description="question to set up a joke")
    punchline: str = Field(description="answer to resolve the joke")

joke_query = "Tell me a joke."

# Or, an example with compound type fields.
# class FloatArray(BaseModel):
#     values: List[float] = Field(description="list of floats")
#
# float_array_query = "Write out a few terms of fibonacci."

model = OpenAI(model_name="text-davinci-003", temperature=0.0)

# The parser contributes format instructions to the prompt and later
# deserializes the completion back into the Joke model.
parser = PydanticOutputParser(pydantic_object=Joke)

prompt = PromptTemplate(
    template="Answer the user query.\n{format_instructions}\n{query}\n",
    input_variables=["query"],
    partial_variables={"format_instructions": parser.get_format_instructions()},
)

# Render the prompt, run the model, then parse the completion into a Joke.
_input = prompt.format_prompt(query=joke_query)
print("Prompt:\n", _input.to_string())
output = model(_input.to_string())
print("Completion:\n", output)
parsed_output = parser.parse(output)
print("Parsed completion:\n", parsed_output)
```

```
Prompt:
 Answer the user query.
The output should be formatted as a JSON instance that conforms to the JSON schema below.  For example, the object {"foo":  ["bar", "baz"]} conforms to the schema {"foo": {"description": "a list of strings field", "type": "string"}}.

Here is the output schema:
---
{"setup": {"description": "question to set up a joke", "type": "string"}, "punchline": {"description": "answer to resolve the joke", "type": "string"}}
---

Tell me a joke.

Completion:
 {"setup": "Why don't scientists trust atoms?", "punchline": "Because they make up everything!"}

Parsed completion:
 setup="Why don't scientists trust atoms?" punchline='Because they make up everything!'
```
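
The commented-out compound-type example above follows the same pattern. A minimal sketch, reusing `model` from the snippet above (the query text and the parsed values shown are illustrative):

```python
# Compound-type example: parse a list of floats out of the completion.
class FloatArray(BaseModel):
    values: List[float] = Field(description="list of floats")

array_parser = PydanticOutputParser(pydantic_object=FloatArray)
array_prompt = PromptTemplate(
    template="Answer the user query.\n{format_instructions}\n{query}\n",
    input_variables=["query"],
    partial_variables={"format_instructions": array_parser.get_format_instructions()},
)

array_output = model(array_prompt.format_prompt(query="Write out a few terms of fibonacci.").to_string())
parsed_array = array_parser.parse(array_output)
# e.g. FloatArray(values=[0.0, 1.0, 1.0, 2.0, 3.0])
```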

Of course, this only works with LMs of sufficient capacity. DaVinci is generally reliable, but not always.
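
One way to cope with the occasional bad completion (not part of this change; just a sketch that assumes `parser.parse` raises an exception on malformed output) is to retry a couple of times before giving up:

```python
# Illustrative retry loop around the model + parser pair defined above.
def generate_joke(query: str, retries: int = 2) -> Joke:
    prompt_text = prompt.format_prompt(query=query).to_string()
    last_error = None
    for _ in range(retries + 1):
        completion = model(prompt_text)
        try:
            return parser.parse(completion)
        except Exception as e:  # assumed: parse() raises when the JSON is malformed
            last_error = e
    raise last_error  # all attempts produced unparseable output


joke = generate_joke(joke_query)
```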

---------

Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
2023-03-16 21:43:11 -07:00
| Name | Latest commit | Date |
| --- | --- | --- |
| agents | Harrison/convo agent (#1642) | 2023-03-14 09:42:24 -07:00 |
| chains | Harrison/agent eval (#1620) | 2023-03-14 12:37:48 -07:00 |
| chat | Adding ability to return_pl_id to all PromptLayer Models in LangChain (#1699) | 2023-03-16 17:05:23 -07:00 |
| document_loaders | feat: allow the unstructured kwargs to be passed in to Unstructured document loaders (#1667) | 2023-03-14 18:15:28 -07:00 |
| indexes | Fixed typo, clarified language (#1682) | 2023-03-15 08:00:11 -07:00 |
| llms | Adding ability to return_pl_id to all PromptLayer Models in LangChain (#1699) | 2023-03-16 17:05:23 -07:00 |
| memory | add docs for save/load messages (#1697) | 2023-03-15 13:13:08 -07:00 |
| prompts | pydantic/json parsing (#1722) | 2023-03-16 21:43:11 -07:00 |
| utils | Zapier Integration (#1654) | 2023-03-14 23:06:17 -07:00 |
| agents.rst | Documentation: Minor typo fixes (#1327) | 2023-02-27 14:40:43 -08:00 |
| chains.rst | Documentation: Minor typo fixes (#1327) | 2023-02-27 14:40:43 -08:00 |
| chat.rst | (rfc) chat models (#1424) | 2023-03-06 08:34:24 -08:00 |
| document_loaders.rst | Harrison/unstructured support (#903) | 2023-02-05 23:02:07 -08:00 |
| indexes.rst | improve docs for indexes (#1146) | 2023-02-19 23:14:50 -08:00 |
| llms.rst | Fix minor error in LLM documentation (#602) | 2023-01-12 18:16:32 -08:00 |
| memory.rst | Harrison/memory refactor (#1478) | 2023-03-07 07:59:37 -08:00 |
| prompts.rst | Feature: linkcheck-action (#534) (#542) | 2023-01-04 21:39:50 -08:00 |
| state_of_the_union.txt | Docs refactor (#480) | 2023-01-02 08:24:09 -08:00 |
| utils.rst | Feature: linkcheck-action (#534) (#542) | 2023-01-04 21:39:50 -08:00 |