Fixed some grammatical and spelling errors (#10595)

Aashish Saini authored 10 months ago, committed by GitHub
parent 5e50b89164
commit f9f1340208

@@ -8,7 +8,7 @@ Head to [Integrations](/docs/integrations/memory/) for documentation on built-in
:::
One of the core utility classes underpinning most (if not all) memory modules is the `ChatMessageHistory` class.
-This is a super lightweight wrapper which provides convenience methods for saving HumanMessages, AIMessages, and then fetching them all.
+This is a super lightweight wrapper that provides convenience methods for saving HumanMessages, AIMessages, and then fetching them all.
You may want to use this class directly if you are managing memory outside of a chain.
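
For context on the class this hunk documents, a minimal usage sketch of `ChatMessageHistory` (assuming the `langchain.memory` import path used around this release):

```python
from langchain.memory import ChatMessageHistory

# An in-memory store of the conversation so far.
history = ChatMessageHistory()

# Convenience methods for saving HumanMessages and AIMessages.
history.add_user_message("Hi, can you summarize what we discussed?")
history.add_ai_message("Sure, we covered memory classes in LangChain.")

# Fetch everything that was saved as a list of message objects.
print(history.messages)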

@@ -5,7 +5,7 @@ It is broken into two parts: installation and setup, and then references to spec
## Installation and Setup
- Install the Python SDK with `pip install predictionguard`
-- Get an Prediction Guard access token (as described [here](https://docs.predictionguard.com/)) and set it as an environment variable (`PREDICTIONGUARD_TOKEN`)
+- Get a Prediction Guard access token (as described [here](https://docs.predictionguard.com/)) and set it as an environment variable (`PREDICTIONGUARD_TOKEN`)
## LLM Wrapper
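
A hedged sketch of the wrapper that the `pgllm` variable in the hunks below refers to; the model name is only an illustrative placeholder and the import assumes the `langchain.llms` module of this release:

```python
import os
from langchain.llms import PredictionGuard

# Token obtained in the setup step above.
os.environ["PREDICTIONGUARD_TOKEN"] = "<your Prediction Guard access token>"

# Wrap a hosted model as a LangChain LLM (model name is illustrative).
pgllm = PredictionGuard(model="MPT-7B-Instruct")
print(pgllm("Tell me a short joke"))
```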
@@ -49,7 +49,7 @@ Context: EVERY comment, DM + email suggestion has led us to this EXCITING announ
Exclusive Candle Box - $80
Monthly Candle Box - $45 (NEW!)
Scent of The Month Box - $28 (NEW!)
-Head to stories to get ALLL the deets on each box! 👆 BONUS: Save 50% on your first box with code 50OFF! 🎉
+Head to stories to get ALL the deets on each box! 👆 BONUS: Save 50% on your first box with code 50OFF! 🎉
Query: {query}
@@ -97,4 +97,4 @@ llm_chain = LLMChain(prompt=prompt, llm=pgllm, verbose=True)
question = "What NFL team won the Super Bowl in the year Justin Beiber was born?"
llm_chain.predict(question=question)
-```
+```

@@ -225,7 +225,7 @@ class SmartLLMChain(Chain):
(
HumanMessagePromptTemplate,
"You are a resolved tasked with 1) finding which of "
f"the {self.n_ideas} anwer options the researcher thought was "
f"the {self.n_ideas} answer options the researcher thought was "
"best,2) improving that answer and 3) printing the answer in full. "
"Don't output anything for step 1 or 2, only the full answer in 3. "
"Let's work this out in a step by step way to be sure we have "
