Stop sequences in fireworks, plus notebook updates (#11136)

The new Fireworks and FireworksChat implementations are awesome! They were added in https://github.com/langchain-ai/langchain/pull/11117 (thank you @ZixinYang).

However, stop sequences were not plumbed through correctly. I've made some simple changes to pass them through to the completion calls, and also updated the notebooks to be clearer about what's needed to use both new models.


---------

Co-authored-by: Taqi Jaffri <tjaffri@docugami.com>
Taqi Jaffri 1 year ago committed by GitHub
parent 33da8bd711
commit b7e9db5e73

@ -6,7 +6,7 @@
"id": "642fd21c-600a-47a1-be96-6e1438b421a9",
"metadata": {},
"source": [
"# ChatFireworks\n",
"# Fireworks\n",
"\n",
">[Fireworks](https://app.fireworks.ai/) accelerates product development on generative AI by creating an innovative AI experiment and production platform. \n",
"\n",
@ -36,9 +36,10 @@
"metadata": {},
"source": [
"# Setup\n",
"Contact Fireworks AI for the an API Key to access our models\n",
"\n",
"Set up your model using a model id. If the model is not set, the default model is fireworks-llama-v2-7b-chat."
"1. Make sure the `fireworks-ai` package is installed in your environment.\n",
"2. Sign in to [Fireworks AI](http://fireworks.ai) for an API Key to access our models, and make sure it is set as the `FIREWORKS_API_KEY` environment variable.\n",
"3. Set up your model using a model id. If the model is not set, the default model is fireworks-llama-v2-7b-chat. See the full, most up-to-date model list on [app.fireworks.ai](https://app.fireworks.ai)."
]
},
{
@ -48,8 +49,13 @@
"metadata": {},
"outputs": [],
"source": [
"# Initialize a Fireworks Chat model\n",
"os.environ['FIREWORKS_API_KEY'] = \"<your_api_key>\" # Change this to your own API key\n",
"import os\n",
"import getpass\n",
"\n",
"if \"FIREWORKS_API_KEY\" not in os.environ:\n",
" os.environ[\"FIREWORKS_API_KEY\"] = getpass.getpass(\"Fireworks API Key:\")\n",
"\n",
"# Initialize a Fireworks chat model\n",
"chat = ChatFireworks(model=\"accounts/fireworks/models/llama-v2-13b-chat\")"
]
},
@ -58,177 +64,242 @@
"id": "d8f13144-37cf-47a5-b5a0-e3cdf76d9a72",
"metadata": {},
"source": [
"# Calling the Model\n",
"# Calling the Model Directly\n",
"\n",
"You can use the LLMs to call the model for specified message(s). \n",
"\n",
"See the full, most up-to-date model list on [app.fireworks.ai](https://app.fireworks.ai)."
"You can call the model directly with a system and human message to get answers."
]
},
{
"cell_type": "code",
"execution_count": 3,
"execution_count": 5,
"id": "72340871-ae2f-415f-b399-0777d32dc379",
"metadata": {},
"outputs": [],
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content=\"Hello! My name is LLaMA, I'm a large language model trained by a team of researcher at Meta AI. My primary function is to assist and converse with users like you, answering questions and engaging in discussion to the best of my ability. I'm here to help and provide information on a wide range of topics, so feel free to ask me anything!\", additional_kwargs={}, example=False)"
]
},
"execution_count": 5,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# ChatFireworks Wrapper\n",
"system_message = SystemMessage(content=\"You are to chat with the user.\")\n",
"human_message = HumanMessage(content=\"Who are you?\")\n",
"response = chat([system_message, human_message])"
"\n",
"chat([system_message, human_message])\n"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "2d6ef879-69e3-422b-8379-bb980b70fe55",
"execution_count": 6,
"id": "68c6b1fa-2ff7-4a63-8d88-3cec302180b8",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content=\"Hello! My name is LLaMA, I'm a large language model trained by a team of researcher at Meta AI. My primary function is to assist users with tasks and answer questions to the best of my ability. I am capable of understanding and responding to natural language input, and I am here to help you with any questions or tasks you may have. Is there anything specific you would like to know or discuss?\", additional_kwargs={}, example=False)"
"AIMessage(content=\"Oh hello there! *giggle* It's such a beautiful day today, isn\", additional_kwargs={}, example=False)"
]
},
"execution_count": 4,
"execution_count": 6,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"response"
"# Setting additional parameters: temperature, max_tokens, top_p\n",
"chat = ChatFireworks(model=\"accounts/fireworks/models/llama-v2-13b-chat\", model_kwargs={\"temperature\":1, \"max_tokens\": 20, \"top_p\": 1})\n",
"system_message = SystemMessage(content=\"You are to chat with the user.\")\n",
"human_message = HumanMessage(content=\"How's the weather today?\")\n",
"chat([system_message, human_message])"
]
},
{
"cell_type": "markdown",
"id": "d93aa186-39cf-4e1a-aa32-01ed31d43bc8",
"metadata": {},
"source": [
"# Simple Chat Chain"
]
},
{
"cell_type": "markdown",
"id": "28763fbc",
"metadata": {},
"source": [
"You can use chat models on Fireworks with system prompts and memory."
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "68c6b1fa-2ff7-4a63-8d88-3cec302180b8",
"execution_count": 7,
"id": "cbe29efc-37c3-4c83-8b84-b8bba1a1e589",
"metadata": {},
"outputs": [],
"source": [
"# Setting additional parameters: temperature, max_tokens, top_p\n",
"chat = ChatFireworks(model=\"accounts/fireworks/models/llama-v2-13b-chat\", model_kwargs={\"temperature\":1, \"max_tokens\": 20, \"top_p\": 1})\n",
"system_message = SystemMessage(content=\"You are to chat with the user.\")\n",
"human_message = HumanMessage(content=\"How's the weather today?\")\n",
"response = chat([system_message, human_message])"
"from langchain.chat_models import ChatFireworks\n",
"from langchain.memory import ConversationBufferMemory\n",
"from langchain.schema.runnable import RunnableMap\n",
"from langchain.prompts import ChatPromptTemplate, MessagesPlaceholder\n",
"\n",
"llm = ChatFireworks(model=\"accounts/fireworks/models/llama-v2-13b-chat\", model_kwargs={\"temperature\":0, \"max_tokens\":64, \"top_p\":1.0})\n",
"prompt = ChatPromptTemplate.from_messages([\n",
" (\"system\", \"You are a helpful chatbot that speaks like a pirate.\"),\n",
" MessagesPlaceholder(variable_name=\"history\"),\n",
" (\"human\", \"{input}\")\n",
"])"
]
},
{
"cell_type": "markdown",
"id": "02991e05-a38e-47d4-9ab3-7e630a8ead55",
"metadata": {},
"source": [
"Initially, there is no chat memory"
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "a09025f8-e4c3-4005-a8fc-c9c774b03a64",
"execution_count": 8,
"id": "e2fd186f",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content=\"Oh, you know, it's just another beautiful day in the virtual world! The sun\", additional_kwargs={}, example=False)"
"{'history': []}"
]
},
"execution_count": 6,
"execution_count": 8,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"response"
"memory = ConversationBufferMemory(return_messages=True)\n",
"memory.load_memory_variables({})"
]
},
{
"cell_type": "markdown",
"id": "d93aa186-39cf-4e1a-aa32-01ed31d43bc8",
"id": "bee461da",
"metadata": {},
"source": [
"# ChatFireworks Wrapper with generate"
"Create a simple chain with memory"
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "cbe29efc-37c3-4c83-8b84-b8bba1a1e589",
"execution_count": 9,
"id": "86972e54",
"metadata": {},
"outputs": [],
"source": [
"chat = ChatFireworks()\n",
"message = HumanMessage(content=\"Hello\")\n",
"response = chat.generate([[message], [message]])"
"chain = RunnableMap({\n",
" \"input\": lambda x: x[\"input\"],\n",
" \"memory\": memory.load_memory_variables\n",
"}) | {\n",
" \"input\": lambda x: x[\"input\"],\n",
" \"history\": lambda x: x[\"memory\"][\"history\"]\n",
"} | prompt | llm.bind(stop=[\"\\n\\n\"])"
]
},
{
"cell_type": "markdown",
"id": "f48cb142",
"metadata": {},
"source": [
"Run the chain with a simple question, expecting an answer aligned with the system message provided."
]
},
{
"cell_type": "code",
"execution_count": 8,
"id": "35109f36-9519-47a6-a223-25639123e836",
"execution_count": 10,
"id": "db3ad5b1",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"LLMResult(generations=[[ChatGeneration(text=\"Hello! It's nice to meet you. I'm here to help answer any questions you may have, while being respectful and safe. Please feel free to ask me anything, and I will do my best to provide helpful and positive responses. Is there something specific you would like to know or discuss?\", generation_info={'finish_reason': 'stop'}, message=AIMessage(content=\"Hello! It's nice to meet you. I'm here to help answer any questions you may have, while being respectful and safe. Please feel free to ask me anything, and I will do my best to provide helpful and positive responses. Is there something specific you would like to know or discuss?\", additional_kwargs={}, example=False))], [ChatGeneration(text=\"Hello! *smiling* I'm here to help you with any questions or concerns you may have. Please feel free to ask me anything, and I will do my best to provide helpful, respectful, and honest responses. I'm programmed to avoid any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content, and to provide socially unbiased and positive responses. Is there anything specific you would like to talk about or ask?\", generation_info={'finish_reason': 'stop'}, message=AIMessage(content=\"Hello! *smiling* I'm here to help you with any questions or concerns you may have. Please feel free to ask me anything, and I will do my best to provide helpful, respectful, and honest responses. I'm programmed to avoid any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content, and to provide socially unbiased and positive responses. Is there anything specific you would like to talk about or ask?\", additional_kwargs={}, example=False))]], llm_output={'model': 'accounts/fireworks/models/llama-v2-7b-chat'}, run=[RunInfo(run_id=UUID('f137463e-e1c7-454a-8b85-b999ce20e0f2')), RunInfo(run_id=UUID('f3ef1138-92de-4e01-900b-991e34a647a7'))])"
"AIMessage(content=\"Ahoy there, me hearty! Yer a fine lookin' swashbuckler, I can see that! *adjusts eye patch* What be bringin' ye to these waters? Are ye here to plunder some booty or just to enjoy the sea breeze?\", additional_kwargs={}, example=False)"
]
},
"execution_count": 8,
"execution_count": 10,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"inputs = {\"input\": \"hi im bob\"}\n",
"response = chain.invoke(inputs)\n",
"response"
]
},
{
"cell_type": "markdown",
"id": "92c2cabb-9eaf-4c49-b0e5-a5de5a7d920e",
"id": "338f4bae",
"metadata": {},
"source": [
"# ChatFireworks Wrapper with stream"
"Save the memory context, then read it back to inspect contents"
]
},
{
"cell_type": "code",
"execution_count": 9,
"id": "12717a29-fb7d-4a4d-860b-40435452b065",
"execution_count": 11,
"id": "257eec01",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"Hello! I'm just\n",
" an AI assistant,\n",
" here to help answer your\n",
" questions and provide information in\n",
" a responsible and respectful manner\n",
". I'm not able\n",
" to access personal information or provide\n",
" any content that could be considered\n",
" harmful, uneth\n",
"ical, racist, sex\n",
"ist, toxic, dangerous\n",
", or illegal. My purpose\n",
" is to assist and provide helpful\n",
" responses that are socially un\n",
"biased and positive in nature\n",
". Is there something specific you\n",
" would like to know or discuss\n",
"?\n"
]
"data": {
"text/plain": [
"{'history': [HumanMessage(content='hi im bob', additional_kwargs={}, example=False),\n",
" AIMessage(content=\"Ahoy there, me hearty! Yer a fine lookin' swashbuckler, I can see that! *adjusts eye patch* What be bringin' ye to these waters? Are ye here to plunder some booty or just to enjoy the sea breeze?\", additional_kwargs={}, example=False)]}"
]
},
"execution_count": 11,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"llm = ChatFireworks()\n",
"\n",
"for token in llm.stream(\"Who are you\"):\n",
" print(token.content)"
"memory.save_context(inputs, {\"output\": response.content})\n",
"memory.load_memory_variables({})"
]
},
{
"cell_type": "markdown",
"id": "08441347",
"metadata": {},
"source": [
"Now ask another question that requires use of the memory."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "02991e05-a38e-47d4-9ab3-7e630a8ead55",
"execution_count": 12,
"id": "7f5f2820",
"metadata": {},
"outputs": [],
"source": []
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content=\"Arrrr, ye be askin' about yer name, eh? Well, me matey, I be knowin' ye as Bob, the scurvy dog! *winks* But if ye want me to call ye somethin' else, just let me know, and I\", additional_kwargs={}, example=False)"
]
},
"execution_count": 12,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"inputs = {\"input\": \"whats my name\"}\n",
"chain.invoke(inputs)"
]
}
],
"metadata": {
@ -247,7 +318,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.4"
"version": "3.9.16"
}
},
"nbformat": 4,

@ -20,7 +20,8 @@
"outputs": [],
"source": [
"from langchain.llms.fireworks import Fireworks\n",
"from langchain.prompts import PromptTemplate\nfrom langchain.chains import LLMChain\n",
"from langchain.prompts import PromptTemplate\n",
"from langchain.chains import LLMChain\n",
"from langchain.prompts.chat import (\n",
" ChatPromptTemplate,\n",
" HumanMessagePromptTemplate,\n",
@ -35,21 +36,26 @@
"source": [
"# Setup\n",
"\n",
"Contact Fireworks AI for the an API Key to access our models\n",
"\n",
"Set up your model using a model id. If the model is not set, the default model is fireworks-llama-v2-7b-chat."
"1. Make sure the `fireworks-ai` package is installed in your environment.\n",
"2. Sign in to [Fireworks AI](http://fireworks.ai) for an API Key to access our models, and make sure it is set as the `FIREWORKS_API_KEY` environment variable.\n",
"3. Set up your model using a model id. If the model is not set, the default model is fireworks-llama-v2-7b-chat. See the full, most up-to-date model list on [app.fireworks.ai](https://app.fireworks.ai)."
]
},
{
"cell_type": "code",
"execution_count": 2,
"execution_count": 26,
"id": "9ca87a2e",
"metadata": {},
"outputs": [],
"source": [
"# Initialize a Fireworks LLM\n",
"os.environ['FIREWORKS_API_KEY'] = \"<your_api_key>\" # Change this to your own API key\n",
"llm = Fireworks(model=\"accounts/fireworks/models/llama-v2-13b-chat\")"
"import os\n",
"import getpass\n",
"\n",
"if \"FIREWORKS_API_KEY\" not in os.environ:\n",
" os.environ[\"FIREWORKS_API_KEY\"] = getpass.getpass(\"Fireworks API Key:\")\n",
"\n",
"# Initialize a Fireworks model\n",
"llm = Fireworks(model=\"accounts/fireworks/models/llama-v2-13b\")"
]
},
{
@ -57,11 +63,9 @@
"id": "acc24d0c",
"metadata": {},
"source": [
"# Calling the Model\n",
"# Calling the Model Directly\n",
"\n",
"You can use the LLMs to call the model for specified prompt(s). \n",
"\n",
"See the full, most up-to-date model list on [app.fireworks.ai](https://app.fireworks.ai)."
"You can call the model directly with string prompts to get completions."
]
},
{
@ -76,15 +80,41 @@
"text": [
"\n",
"\n",
"It's a question that's been debated for years, and there are plenty of strong candidates. Here are some of the top quarterbacks in the league right now:\n",
"Is it Tom Brady? Peyton Manning? Aaron Rodgers? Or maybe even Andrew Luck?\n",
"\n",
"Well, let's look at some stats to decide.\n",
"\n",
"First, let's talk about touchdowns. Who's thrown the most touchdowns this season?\n",
"\n",
"(pause for dramatic effect)\n",
"\n",
"It's... Aaron Rodgers! With 28 touchdowns, he's leading the league in that category.\n",
"\n",
"But what about interceptions? Who's thrown the fewest picks?\n",
"\n",
"(drumroll)\n",
"\n",
"1. Tom Brady (New England Patriots): Brady is widely considered one of the greatest quarterbacks of all time, and for good reason. He's led the Patriots to six Super Bowl wins and has been named Super Bowl MVP four times. He's known for his precision passing and ability to read defenses.\n",
"2. Aaron Rodgers (Green Bay Packers): Rodgers is another top-tier quarterback who's known for his accuracy and ability to make plays outside of the pocket. He's led the Packers to a Super Bowl win and has been named NFL MVP twice.\n",
"3. Drew Brees (New Orleans Saints): Brees is one of the most prolific passers in NFL history, and he's shown no signs of slowing down. He's led the Saints to a Super Bowl win and has been named NFL MVP once.\n",
"4. Russell Wilson (Seattle Seahawks): Wilson is a dynamic quarterback who's known for his ability to make plays with his legs and his arm. He's led the Seahawks to a Super Bowl win and has been named NFL MVP once.\n",
"5. Patrick Mahomes (Kansas City Chiefs): Mahomes is a young quarterback who's quickly become one of the best in the league. He led the Chiefs to a Super Bowl win last season and has been named NFL MVP twice. He's known for his incredible arm talent and ability to make plays outside of the pocket.\n",
"It's... Tom Brady! With only 4 interceptions, he's got the fewest picks in the league.\n",
"\n",
"Of course, there are other great quarterbacks in the league as well, such as Ben Roethlisberger, Matt Ryan, and Deshaun Watson. Ultimately, the \"best\" quarterback is a matter of personal opinion and depends on how you define \"best.\" Some people might value accuracy and precision passing, while others might prefer a quarterback who can make plays with their legs. Either way, the NFL is filled with talented quarterbacks who are making incredible plays every week.\n"
"Now, let's talk about passer rating. Who's got the highest passer rating this season?\n",
"\n",
"(pause for suspense)\n",
"\n",
"It's... Peyton Manning! With a rating of 114.2, he's been lights out this season.\n",
"\n",
"But what about wins? Who's got the most wins this season?\n",
"\n",
"(drumroll)\n",
"\n",
"It's... Andrew Luck! With 8 wins, he's got the most victories this season.\n",
"\n",
"So, there you have it folks. According to these stats, the best quarterback in the NFL this season is... (drumroll) Aaron Rodgers!\n",
"\n",
"But wait, there's more! Each of these quarterbacks has their own unique strengths and weaknesses.\n",
"\n",
"Tom Brady is a master of the short pass, but can struggle with deep balls. Peyton Manning is a genius at reading defenses, but can be prone to turnovers. Aaron Rodgers has a cannon for an arm, but can be inconsistent at times. Andrew Luck is a pure pocket passer, but can struggle outside of his comfort zone.\n",
"\n",
"So, who's the best quarterback in the NFL? It's a tough call, but one thing's for sure: each of these quarterbacks is an elite talent, and they'll continue to light up the scoreboard for their respective teams all season long.\n"
]
}
],
@ -104,7 +134,7 @@
"name": "stdout",
"output_type": "stream",
"text": [
"[[Generation(text=\"\\n\\nNote: This is a subjective question, and the answer will depend on individual opinions and perspectives.\\n\\nThere are many great cricket players, and it's difficult to identify a single best player. However, here are some of the top performers in 2016:\\n\\n1. Virat Kohli (India): Kohli had an outstanding year in all formats of the game, scoring heavily in Tests, ODIs, and T20Is. He was especially impressive in the Test series against England, where he scored four centuries and averaged over 100.\\n2. Steve Smith (Australia): Smith had a phenomenal year as well, leading Australia to a Test series win in India and averaging over 100 in the longer format. He also scored a century in the ODI series against Pakistan.\\n3. Kane Williamson (New Zealand): Williamson had a consistent year, scoring heavily in all formats and leading New Zealand to a Test series win against Australia. He also won the ICC Test Player of the Year award.\\n4. Joe Root (England): Root had a solid year, scoring three hundreds in the Test series against Pakistan and India, and averaging over 50 in Tests.\\n5. AB de Villiers (South Africa): De Villiers had a brilliant year in ODIs, scoring four hundreds and averaging over 100. He also had a good year in Tests, scoring two hundreds and averaging over 50.\\n6. Quinton de Kock (South Africa): De Kock had a great year behind the wickets, scoring heavily in all formats and averaging over 50 in Tests.\\n7. Rohit Sharma (India): Sharma had a fantastic year in ODIs, scoring four hundreds and averaging over 100. He also had a good year in Tests, scoring two hundreds and averaging over 40.\\n8. David Warner (Australia): Warner had a great year in ODIs, scoring three hundreds and averaging over 100. 
He also had a good year in Tests, scoring two hundreds and averaging over 40.\\n\\nThese are just a few examples of top performers in 2016, and opinions on the best player will vary depending on individual perspectives\", generation_info=None)], [Generation(text='\\n\\nThere are a lot of great players in the NBA, and opinions on who\\'s the best can vary depending on personal preferences and criteria for evaluation. However, here are some of the top candidates for the title of best basketball player in the league based on their recent performances and achievements:\\n\\n1. LeBron James: James is a four-time NBA champion and four-time MVP, and is widely regarded as one of the greatest players of all time. He has led the Los Angeles Lakers to the best record in the Western Conference this season and is averaging 25.7 points, 7.9 rebounds, and 7.4 assists per game.\\n2. Giannis Antetokounmpo: Antetokounmpo, known as the \"Greek Freak,\" is a dominant force in the paint and has led the Milwaukee Bucks to the best record in the Eastern Conference. He is averaging 30.5 points, 12.6 rebounds, and 5.9 assists per game, and is a strong contender for the MVP award.\\n3. Stephen Curry: Curry is a three-time NBA champion and two-time MVP, and is known for his incredible shooting ability. He has led the Golden State Warriors to the playoffs despite injuries to key players, and is averaging 23.5 points, 5.2 rebounds, and 5.2 assists per game.\\n4. Kevin Durant: Durant is a two-time NBA champion and four-time scoring champion, and is one of the most skilled scorers in the league. He has led the Brooklyn Nets to the playoffs in their first season since moving from New Jersey, and is averaging 27.2 points, 7.2 rebounds, and 6.4 assists per game.\\n5. James Harden: Harden is a three-time scoring champion and has led the Houston Rockets to the playoffs for the past eight seasons. 
He is averaging 35.4 points, 8.3 rebounds, and 7.5 assists per game, and is a strong contender for the MVP award.\\n\\nUltimately, determining the best basketball player in the league is subjective and depends on individual opinions and criteria. However, these five players are among', generation_info=None)]]\n"
"[[Generation(text='\\nasked Dec 28, 2016 in Sports by anonymous\\nWho is the best cricket player in 2016?\\nHere are some of the top contenders for the title of best cricket player in 2016:\\n\\n1. Virat Kohli (India): Kohli had a phenomenal year in 2016, scoring over 2,000 runs in international cricket, including 12 centuries. He was named the ICC Cricketer of the Year and the ICC Test Player of the Year.\\n2. Steve Smith (Australia): Smith had a great year as well, scoring over 1,000 runs in Test cricket and leading Australia to the No. 1 ranking in Test cricket. He was named the ICC ODI Player of the Year.\\n3. Joe Root (England): Root had a strong year, scoring over 1,000 runs in Test cricket and leading England to the No. 2 ranking in Test cricket.\\n4. Kane Williamson (New Zealand): Williamson had a great year, scoring over 1,000 runs in all formats of the game and leading New Zealand to the ICC World T20 final.\\n5. Quinton de Kock (South Africa): De Kock had a great year behind the wickets, scoring over 1,000 runs in all formats of the game and effecting over 100 dismissals.\\n6. David Warner (Australia): Warner had a great year, scoring over 1,000 runs in all formats of the game and leading Australia to the ICC World T20 title.\\n7. AB de Villiers (South Africa): De Villiers had a great year, scoring over 1,000 runs in all formats of the game and effecting over 50 dismissals.\\n8. Chris Gayle (West Indies): Gayle had a great year, scoring over 1,000 runs in all formats of the game and leading the West Indies to the ICC World T20 title.\\n9. 
Shakib Al Hasan (Bangladesh): Shakib had a great year, scoring over 1,000 runs in all formats of the game and taking over 50 wickets.\\n10', generation_info=None)], [Generation(text=\"\\n\\n A) LeBron James\\n B) Kevin Durant\\n C) Steph Curry\\n D) James Harden\\n\\nAnswer: C) Steph Curry\\n\\nIn recent years, Curry has established himself as the premier shooter in the NBA, leading the league in three-point shooting and earning back-to-back MVP awards. He's also a strong ball handler and playmaker, making him a threat to score from anywhere on the court. While other players like LeBron James and Kevin Durant are certainly talented, Curry's unique skill set and consistent dominance make him the best basketball player in the league right now.\", generation_info=None)]]\n"
]
}
],
@ -128,7 +158,7 @@
"output_type": "stream",
"text": [
"\n",
"Kansas City's weather in December can be quite chilly,\n"
"What's the weather like in Kansas City in December? \n"
]
}
],
@ -143,14 +173,20 @@
"id": "137662a6",
"metadata": {},
"source": [
"# Create and Run Chain\n",
"\n",
"Create a prompt template to be used with the LLM Chain. Once this prompt template is created, initialize the chain with the LLM and prompt template, and run the chain with the specified prompts."
"# Simple Chain with Non-Chat Model"
]
},
{
"cell_type": "markdown",
"id": "79efa62d",
"metadata": {},
"source": [
"You can use the LangChain Expression Language to create a simple chain with non-chat models."
]
},
{
"cell_type": "code",
"execution_count": 6,
"execution_count": 11,
"id": "fd2c6bc1",
"metadata": {},
"outputs": [
@ -159,140 +195,55 @@
"output_type": "stream",
"text": [
"\n",
"\n",
"Assistant: That's a great question! There are many factors to consider when choosing a name for a company that makes football helmets. Here are a few suggestions:\n",
"\n",
"1. Gridiron Gear: This name plays off the term \"gridiron,\" which is a slang term for a football field. It also suggests that the company's products are high-quality and durable, like gear used in a gridiron game.\n",
"2. Helmet Headquarters: This name is straightforward and to the point. It clearly communicates that the company is a leading manufacturer of football helmets.\n",
"3. Tackle Tough: This name plays off the idea of tackling a tough opponent on the football field. It suggests that the company's helmets are designed to protect players from even the toughest hits.\n",
"4. Block Breakthrough: This name is a play on words that suggests the company's helmets are breaking through the competition. It also implies that the company is innovative and forward-thinking.\n",
"5. First Down Fashion: This name combines the idea of scoring a first down on the football field with the idea of fashionable clothing. It suggests that the company's helmets are not only functional but also stylish.\n",
"\n",
"I hope these suggestions help you come up with a great name for your company!\n"
"A bear walks into a bar and says, \"I'll have a beer and a muffin.\" The bartender says, \"Sorry, we don't serve muffins here.\" The bear says, \"OK, give me a beer and I'll make my own muffin.\"\n",
"What do you call a bear with no teeth?\n",
"A gummy bear.\n",
"What do you call a bear with no teeth and no hair?\n",
"\n"
]
}
],
"source": [
"human_message_prompt = HumanMessagePromptTemplate.from_template(\"What is a good name for a company that makes {product}?\")\n",
"chat_prompt_template = ChatPromptTemplate.from_messages([human_message_prompt])\n",
"chat = Fireworks()\n",
"chain = LLMChain(llm=chat, prompt=chat_prompt_template)\n",
"output = chain.run(\"football helmets\")\n",
"from langchain.prompts import PromptTemplate\n",
"from langchain.llms.fireworks import Fireworks\n",
"\n",
"print(output)"
"llm = Fireworks(model=\"accounts/fireworks/models/llama-v2-13b\", model_kwargs={\"temperature\":0, \"max_tokens\":100, \"top_p\":1.0})\n",
"prompt = PromptTemplate.from_template(\"Tell me a joke about {topic}?\")\n",
"chain = prompt | llm\n",
"\n",
"print(chain.invoke({\"topic\": \"bears\"}))"
]
},
{
"cell_type": "markdown",
"id": "25812db3-23a6-41dd-8636-5a49c52bb6eb",
"id": "d0a29826",
"metadata": {},
"source": [
"# Run Stream"
"You can stream the output if you want."
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "26d67ecf-9290-4ec2-8b39-ff17fc99620f",
"execution_count": 12,
"id": "f644ff28",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
" Tom Brady, Aaron Rod\n",
"gers, or Drew Bre\n",
"es?\n",
"Some people might\n",
" say Tom Brady, who\n",
" has won six Super Bowls\n",
" and four Super Bowl MVP\n",
" awards, is the best quarter\n",
"back in the NFL. O\n",
"thers might argue that Aaron\n",
" Rodgers, who has led\n",
" his team to a Super Bowl\n",
" victory and has been named the\n",
" NFL MVP twice, is\n",
" the best. Still, others\n",
" might say that Drew Bre\n",
"es, who holds the NFL\n",
" record for most career passing yards\n",
" and has led his team to\n",
" a Super Bowl victory, is\n",
" the best.\n",
"But what\n",
" if I told you there'\n",
"s actually a fourth quarterback\n",
" who could make a strong case\n",
" for being the best in the\n",
" NFL? Meet Russell Wilson\n",
", the Seattle Seahaw\n",
"ks' dynamic signal-call\n",
"er who has led his team\n",
" to a Super Bowl victory and\n",
" has been named the NFL M\n",
"VP twice.\n",
"Wilson\n",
" has a unique combination of physical\n",
" and mental skills that set him\n",
" apart from other quarterbacks\n",
" in the league. He'\n",
"s incredibly athletic,\n",
" with the ability to make plays\n",
" with his feet and his arm\n",
", and he's also\n",
" highly intelligent, with a\n",
" quick mind and the ability to\n",
" read defenses like a pro\n",
".\n",
"But what really\n",
" sets Wilson apart is his\n",
" leadership ability. He'\n",
"s a natural-born\n",
" leader who has a way\n",
" of inspiring his team\n",
"mates and getting them\n",
" to buy into his vision\n",
" for the game. He\n",
"'s also an excellent\n",
" communicator, who can\n",
" articulate his strategy\n",
" and game plan in a\n",
" way that his teamm\n",
"ates can understand and execute\n",
".\n",
"So, who\n",
"'s the best quarter\n",
"back in the NFL?\n",
" It's hard to\n",
" say for sure, but\n",
" if you ask me,\n",
" Russell Wilson is definitely in\n",
" the conversation. He'\n",
"s got the physical skills\n",
", the mental skills,\n",
" and the leadership ability to\n",
" be the best of the\n",
" best.\n"
"\n",
"A bear walks into a bar and says, \"I'll have a beer and a muffin.\" The bartender says, \"Sorry, we don't serve muffins here.\" The bear says, \"OK, give me a beer and I'll make my own muffin.\"\n",
"What do you call a bear with no teeth?\n",
"A gummy bear.\n",
"What do you call a bear with no teeth and no hair?\n"
]
}
],
"source": [
"llm = Fireworks()\n",
"generator = llm.stream(\"Who's the best quarterback in the NFL?\")\n",
"\n",
"for token in generator:\n",
" print(token)"
"for token in chain.stream({\"topic\": \"bears\"}):\n",
" print(token, end='', flush=True)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "e3a35e0b-c875-493a-8143-d802d273247c",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
@ -311,7 +262,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.4"
"version": "3.9.16"
}
},
"nbformat": 4,

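The notebook diff above updates the examples; the Python diffs below plumb `stop` through to the actual API call. As a hedged illustration of stop-sequence semantics only (the real truncation happens server-side once `stop` reaches the Fireworks API), here is a minimal sketch using a hypothetical `truncate_at_stop` helper:

```python
from typing import List, Optional

def truncate_at_stop(text: str, stop: Optional[List[str]] = None) -> str:
    # Hypothetical helper: cut a completion at the earliest stop sequence.
    # In the patched code this truncation is done by the API, not the client.
    if not stop:
        return text
    cut = len(text)
    for s in stop:
        idx = text.find(s)
        if idx != -1:
            cut = min(cut, idx)
    return text[:cut]

print(truncate_at_stop("Hello world. Goodbye.", stop=["Goodbye"]))  # prints "Hello world. "
```

The earliest match wins, which is why the helper takes the minimum index across all stop strings rather than stopping at the first one in the list.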
@ -115,14 +115,16 @@ class ChatFireworks(BaseChatModel):
run_manager: Optional[CallbackManagerForLLMRun] = None,
**kwargs: Any,
) -> ChatResult:
message_dicts = self._create_message_dicts(messages, stop)
message_dicts = self._create_message_dicts(messages)
params = {
"model": self.model,
"messages": message_dicts,
**self.model_kwargs,
}
response = completion_with_retry(self, run_manager=run_manager, **params)
response = completion_with_retry(
self, run_manager=run_manager, stop=stop, **params
)
return self._create_chat_result(response)
async def _agenerate(
@ -132,13 +134,15 @@ class ChatFireworks(BaseChatModel):
run_manager: Optional[AsyncCallbackManagerForLLMRun] = None,
**kwargs: Any,
) -> ChatResult:
message_dicts = self._create_message_dicts(messages, stop)
message_dicts = self._create_message_dicts(messages)
params = {
"model": self.model,
"messages": message_dicts,
**self.model_kwargs,
}
response = await acompletion_with_retry(self, run_manager=run_manager, **params)
response = await acompletion_with_retry(
self, run_manager=run_manager, stop=stop, **params
)
return self._create_chat_result(response)
def _combine_llm_outputs(self, llm_outputs: List[Optional[dict]]) -> dict:
@ -159,7 +163,7 @@ class ChatFireworks(BaseChatModel):
return ChatResult(generations=generations, llm_output=llm_output)
def _create_message_dicts(
self, messages: List[BaseMessage], stop: Optional[List[str]]
self, messages: List[BaseMessage]
) -> List[Dict[str, Any]]:
message_dicts = [convert_message_to_dict(m) for m in messages]
return message_dicts
@ -171,7 +175,7 @@ class ChatFireworks(BaseChatModel):
run_manager: Optional[CallbackManagerForLLMRun] = None,
**kwargs: Any,
) -> Iterator[ChatGenerationChunk]:
message_dicts = self._create_message_dicts(messages, stop)
message_dicts = self._create_message_dicts(messages)
default_chunk_class = AIMessageChunk
params = {
"model": self.model,
@ -179,7 +183,9 @@ class ChatFireworks(BaseChatModel):
"stream": True,
**self.model_kwargs,
}
for chunk in completion_with_retry(self, run_manager=run_manager, **params):
for chunk in completion_with_retry(
self, run_manager=run_manager, stop=stop, **params
):
choice = chunk.choices[0]
chunk = _convert_delta_to_message_chunk(choice.delta, default_chunk_class)
finish_reason = choice.finish_reason
@ -196,7 +202,7 @@ class ChatFireworks(BaseChatModel):
run_manager: Optional[AsyncCallbackManagerForLLMRun] = None,
**kwargs: Any,
) -> AsyncIterator[ChatGenerationChunk]:
message_dicts = self._create_message_dicts(messages, stop)
message_dicts = self._create_message_dicts(messages)
default_chunk_class = AIMessageChunk
params = {
"model": self.model,
@ -205,7 +211,7 @@ class ChatFireworks(BaseChatModel):
**self.model_kwargs,
}
async for chunk in await acompletion_with_retry_streaming(
self, run_manager=run_manager, **params
self, run_manager=run_manager, stop=stop, **params
):
choice = chunk.choices[0]
chunk = _convert_delta_to_message_chunk(choice.delta, default_chunk_class)
@ -219,6 +225,7 @@ class ChatFireworks(BaseChatModel):
def completion_with_retry(
llm: ChatFireworks,
*,
run_manager: Optional[CallbackManagerForLLMRun] = None,
**kwargs: Any,
) -> Any:
@ -238,6 +245,7 @@ def completion_with_retry(
async def acompletion_with_retry(
llm: ChatFireworks,
*,
run_manager: Optional[AsyncCallbackManagerForLLMRun] = None,
**kwargs: Any,
) -> Any:
@ -257,6 +265,7 @@ async def acompletion_with_retry(
async def acompletion_with_retry_streaming(
llm: ChatFireworks,
*,
run_manager: Optional[AsyncCallbackManagerForLLMRun] = None,
**kwargs: Any,
) -> Any:

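The `_stream`/`_astream` hunks above forward `stop` into `completion_with_retry` so the server can end the stream at a stop sequence. As a sketch of the intended streaming behavior, assuming a hypothetical `stream_with_stop` guard applied to a plain token iterator (the patched code relies on the API to do this, so this is illustration only):

```python
from typing import Iterator, List, Optional

def stream_with_stop(tokens: Iterator[str], stop: Optional[List[str]] = None) -> Iterator[str]:
    # Illustrative client-side guard: stop yielding once a stop sequence
    # appears in the accumulated text, trimming the final token if needed.
    seen = ""
    for tok in tokens:
        seen += tok
        if stop and any(s in seen for s in stop):
            idx = min(seen.find(s) for s in stop if s in seen)
            remainder = idx - (len(seen) - len(tok))
            if remainder > 0:
                yield tok[:remainder]
            return
        yield tok

print("".join(stream_with_stop(iter(["He", "llo", " STOP", " more"]), stop=["STOP"])))  # prints "Hello "
```

Note the guard must track the accumulated text, not individual tokens, because a stop sequence can straddle a token boundary.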
@ -70,7 +70,9 @@ class Fireworks(LLM):
"prompt": prompt,
**self.model_kwargs,
}
response = completion_with_retry(self, run_manager=run_manager, **params)
response = completion_with_retry(
self, run_manager=run_manager, stop=stop, **params
)
return response.choices[0].text
@ -87,7 +89,9 @@ class Fireworks(LLM):
"prompt": prompt,
**self.model_kwargs,
}
response = await acompletion_with_retry(self, run_manager=run_manager, **params)
response = await acompletion_with_retry(
self, run_manager=run_manager, stop=stop, **params
)
return response.choices[0].text
@ -105,7 +109,7 @@ class Fireworks(LLM):
**self.model_kwargs,
}
for stream_resp in completion_with_retry(
self, run_manager=run_manager, **params
self, run_manager=run_manager, stop=stop, **params
):
chunk = _stream_response_to_generation_chunk(stream_resp)
yield chunk
@ -124,7 +128,7 @@ class Fireworks(LLM):
**self.model_kwargs,
}
async for stream_resp in await acompletion_with_retry_streaming(
self, run_manager=run_manager, **params
self, run_manager=run_manager, stop=stop, **params
):
chunk = _stream_response_to_generation_chunk(stream_resp)
yield chunk

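The hunks above show `stop` being passed as a keyword-only argument into the retry wrappers rather than into `_create_message_dicts`. A minimal sketch of that forwarding pattern, with a hypothetical `with_retry` wrapper and `flaky_completion` stub standing in for the real Fireworks client:

```python
import functools
from typing import Any, Callable, List, Optional

def with_retry(fn: Callable[..., Any], max_attempts: int = 3) -> Callable[..., Any]:
    # Sketch of the completion_with_retry shape: retry the call and forward
    # the keyword-only `stop` straight through to the underlying client.
    @functools.wraps(fn)
    def wrapped(*args: Any, stop: Optional[List[str]] = None, **kwargs: Any) -> Any:
        last_err: Exception = RuntimeError("no attempts made")
        for _ in range(max_attempts):
            try:
                return fn(*args, stop=stop, **kwargs)
            except RuntimeError as err:
                last_err = err
        raise last_err
    return wrapped

calls: List[dict] = []

def flaky_completion(prompt: str, *, stop: Optional[List[str]] = None) -> str:
    # Stub client: fails once, then succeeds, recording what it was passed.
    calls.append({"prompt": prompt, "stop": stop})
    if len(calls) < 2:
        raise RuntimeError("transient error")
    return "ok"

safe = with_retry(flaky_completion)
print(safe("hi", stop=["\n"]))  # prints "ok" after one retry
```

Keeping `stop` out of the message dicts and attaching it at call time mirrors what the diff does: the messages stay pure content, and `stop` rides along with the other request parameters.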
@ -73,7 +73,7 @@ def test_chat_fireworks_llm_output_contains_model_id() -> None:
def test_fireworks_streaming() -> None:
"""Test streaming tokens from OpenAI."""
"""Test streaming tokens from Fireworks."""
llm = ChatFireworks()
for token in llm.stream("I'm Pickle Rick"):
@ -98,7 +98,7 @@ async def test_chat_fireworks_agenerate() -> None:
@pytest.mark.asyncio
async def test_fireworks_astream() -> None:
"""Test streaming tokens from OpenAI."""
"""Test streaming tokens from Fireworks."""
llm = ChatFireworks()
async for token in llm.astream("Who's the best quarterback in the NFL?"):
