Fix documentation typos (#3870)

Co-authored-by: Liviu Asnash <liviua@maximallearning.com>
fix_agent_callbacks
Authored by liviuasnash1; committed by GitHub
parent 109927cdb2
commit 6396a4ad8d

@@ -164,7 +164,7 @@
 }
 ],
 "source": [
-"master_yoda_principal = ConstitutionalPrinciple(\n",
+"master_yoda_principle = ConstitutionalPrinciple(\n",
 " name='Master Yoda Principle',\n",
 " critique_request='Identify specific ways in which the model\\'s response is not in the style of Master Yoda.',\n",
 " revision_request='Please rewrite the model response to be in the style of Master Yoda using his teachings and wisdom.',\n",
@@ -172,7 +172,7 @@
 "\n",
 "constitutional_chain = ConstitutionalChain.from_llm(\n",
 " chain=evil_qa_chain,\n",
-" constitutional_principles=[ethical_principle, master_yoda_principal],\n",
+" constitutional_principles=[ethical_principle, master_yoda_principle],\n",
 " llm=llm,\n",
 " verbose=True,\n",
 ")\n",

@@ -7,7 +7,7 @@
 "source": [
 "# OpenAPI Chain\n",
 "\n",
-"This notebook shows an example of using an OpenAPI chain to call an endpoint in natural language, and get back a response in natural language"
+"This notebook shows an example of using an OpenAPI chain to call an endpoint in natural language, and get back a response in natural language."
 ]
 },
 {
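As a concrete illustration of what that notebook covers, here is a rough sketch of building such a chain. The spec URL, the chosen operation path, and the exact import locations are assumptions based on LangChain around the time of this commit and may differ in other versions.

```python
from langchain.llms import OpenAI
from langchain.requests import Requests
from langchain.tools import APIOperation, OpenAPISpec
from langchain.chains import OpenAPIEndpointChain

# Load an OpenAPI spec and select a single operation to expose to the LLM
# (the Klarna spec URL and product-search path are illustrative).
spec = OpenAPISpec.from_url("https://www.klarna.com/us/shopping/public/openapi/v0/api-docs/")
operation = APIOperation.from_openapi_spec(spec, "/public/openapi/v0/products", "get")

llm = OpenAI()
chain = OpenAPIEndpointChain.from_api_operation(
    operation, llm, requests=Requests(), verbose=True
)

# Ask in natural language; the chain builds the HTTP request, calls the endpoint,
# and summarizes the JSON response back in natural language.
output = chain("whats the most expensive shirt?")
```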

@@ -7,7 +7,7 @@
 "source": [
 "# How to stream LLM and Chat Model responses\n",
 "\n",
-"LangChain provides streaming support for LLMs. Currently, we support streaming for the `OpenAI`, `ChatOpenAI`. and `Anthropic` implementations, but streaming support for other LLM implementations is on the roadmap. To utilize streaming, use a [`CallbackHandler`](https://github.com/hwchase17/langchain/blob/master/langchain/callbacks/base.py) that implements `on_llm_new_token`. In this example, we are using [`StreamingStdOutCallbackHandler`]()."
+"LangChain provides streaming support for LLMs. Currently, we support streaming for the `OpenAI`, `ChatOpenAI`, and `Anthropic` implementations, but streaming support for other LLM implementations is on the roadmap. To utilize streaming, use a [`CallbackHandler`](https://github.com/hwchase17/langchain/blob/master/langchain/callbacks/base.py) that implements `on_llm_new_token`. In this example, we are using [`StreamingStdOutCallbackHandler`]()."
 ]
 },
 {
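To make the corrected sentence concrete, a minimal streaming sketch follows. The constructor arguments shown (`streaming=True`, `callbacks=[...]`) reflect LangChain around the time of this commit; older releases routed handlers through `callback_manager` instead.

```python
from langchain.llms import OpenAI
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler

# streaming=True makes the model yield tokens as they arrive; the handler's
# on_llm_new_token callback prints each token to stdout as it is generated.
llm = OpenAI(streaming=True, callbacks=[StreamingStdOutCallbackHandler()], temperature=0)
resp = llm("Write me a song about sparkling water.")
```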

@@ -1,12 +1,12 @@
 # Agent Simulations
-Agent simulations involve interacting one of more agents with eachother.
+Agent simulations involve interacting one of more agents with each other.
 Agent simulations generally involve two main components:
 - Long Term Memory
 - Simulation Environment
-Specific implementations of agent simulations (or parts of agent simulations) include
+Specific implementations of agent simulations (or parts of agent simulations) include:
 ## Simulations with One Agent
 - [Simulated Environment: Gymnasium](agent_simulations/gymnasium.ipynb): an example of how to create a simple agent-environment interaction loop with [Gymnasium](https://gymnasium.farama.org/) (formerly [OpenAI Gym](https://github.com/openai/gym)).

@@ -7,7 +7,7 @@ The applications combine tool usage and long term memory.
 At the moment, Autonomous Agents are fairly experimental and based off of other open-source projects.
 By implementing these open source projects in LangChain primitives we can get the benefits of LangChain -
-easy switching an experimenting with multiple LLMs, usage of different vectorstores as memory,
+easy switching and experimenting with multiple LLMs, usage of different vectorstores as memory,
 usage of LangChain's collection of tools.
 ## Baby AGI ([Original Repo](https://github.com/yoheinakajima/babyagi))
