Calling an LLM
The most basic building block of LangChain is calling an LLM on some input. Let's walk through a simple example of how to do this. For this purpose, let's pretend we are building a service that generates a company name based on what the company makes.
In order to do this, we first need to import the LLM wrapper.
from langchain.llms import OpenAI
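If you're starting from a clean environment, note that this import requires the langchain package to be installed, and the OpenAI wrapper additionally relies on the openai package; something like pip install langchain openai typically covers both, though the exact command may differ for your setup.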
We can then initialize the wrapper with any arguments. In this example, we probably want the outputs to be MORE random, so we'll initialize it with a HIGH temperature.
llm = OpenAI(temperature=0.9)
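As a small aside (not part of the original walkthrough), the temperature argument is what controls this randomness. A sketch of the two extremes, assuming the same OpenAI wrapper, might look like this:
# Sketch: temperature controls how random the completions are.
# temperature=0 makes the output close to deterministic,
# while higher values (up to around 1) make it more varied.
deterministic_llm = OpenAI(temperature=0)
creative_llm = OpenAI(temperature=0.9)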
We can now call it on some input!
text = "What would be a good company name for a company that makes colorful socks?"
print(llm(text))
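If you're pasting this into a fresh script or REPL, keep in mind that the wrapper also needs an OpenAI API key. A minimal end-to-end sketch, assuming the key is exported as the OPENAI_API_KEY environment variable (the placeholder value below is purely illustrative), could look like this:
import os
from langchain.llms import OpenAI

# The OpenAI wrapper looks for the API key in the environment.
# Either export OPENAI_API_KEY before running, or set a placeholder here.
os.environ.setdefault("OPENAI_API_KEY", "sk-...")

llm = OpenAI(temperature=0.9)
text = "What would be a good company name for a company that makes colorful socks?"
# Without the print, nothing visible happens when running this as a script.
print(llm(text))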