langchain/libs/partners/fireworks

README.md

LangChain-Fireworks

This is the partner package for integrating Fireworks.ai with LangChain. Fireworks strives to provide good support for LangChain use cases, so if you run into any issues, please let us know. You can reach out to us in our Discord channel.

Installation

To use the langchain-fireworks package, follow these installation steps:

pip install langchain-fireworks

Basic usage

Setting up

  1. Sign in to Fireworks AI to obtain an API key for accessing the models, and make sure it is set as the FIREWORKS_API_KEY environment variable.

    Once you've signed in and obtained an API key, follow these steps to set the FIREWORKS_API_KEY environment variable:

    • Linux/macOS: Open your terminal and execute the following command:
    export FIREWORKS_API_KEY='your_api_key'
    

    Note: To make this environment variable persistent across terminal sessions, add the above line to your ~/.bashrc, ~/.bash_profile, or ~/.zshrc file.

    • Windows: For Command Prompt, use:
    set FIREWORKS_API_KEY=your_api_key
    
  2. Set up your model using a model ID. If no model is set, the default model is fireworks-llama-v2-7b-chat. See the full, most up-to-date model list on fireworks.ai.

import getpass
import os

from langchain_fireworks import Fireworks

# Prompt for the API key if it is not already set in the environment
if "FIREWORKS_API_KEY" not in os.environ:
    os.environ["FIREWORKS_API_KEY"] = getpass.getpass("Fireworks API Key: ")

# Initialize a Fireworks model
llm = Fireworks(
    model="accounts/fireworks/models/mixtral-8x7b-instruct",
    base_url="https://api.fireworks.ai/inference/v1/completions",
)

Calling the Model Directly

You can call the model directly with string prompts to get completions.

# Single prompt
output = llm.invoke("Who's the best quarterback in the NFL?")
print(output)
# Calling multiple prompts
output = llm.generate(
    [
        "Who's the best cricket player in 2016?",
        "Who's the best basketball player in the league?",
    ]
)
print(output.generations)
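Calls to a hosted model can fail transiently (rate limits, timeouts, network errors). A minimal retry helper, a hypothetical stdlib-only sketch rather than anything provided by langchain-fireworks, could wrap the invocations above:

```python
import time


def with_retries(fn, attempts=3, backoff=1.0):
    """Call fn(), retrying with exponential backoff on any exception.

    Hypothetical helper for illustration; LangChain also offers built-in
    retry mechanisms you may prefer in production.
    """
    for i in range(attempts):
        try:
            return fn()
        except Exception:
            if i == attempts - 1:
                raise  # out of attempts: surface the last error
            time.sleep(backoff * 2 ** i)  # 1s, 2s, 4s, ...


# Usage with the llm defined above:
# output = with_retries(lambda: llm.invoke("Who's the best quarterback in the NFL?"))
```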

Advanced usage

Tool use: LangChain Agent + Fireworks function calling model

Please check out how to teach the Fireworks function calling model to use a calculator here.

Fireworks focuses on delivering the best experience for fast model inference as well as tool use. You can check out our blog for more details on how it compares to GPT-4; the punchline is that it is on par with GPT-4 for function calling use cases, but significantly faster and cheaper.
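As a rough illustration of the function calling flow described above: the model is given JSON tool schemas, replies with a tool call (a name plus JSON-encoded arguments), and the application executes it locally. The sketch below uses only the standard library; the schema shape and the `run_tool_call` dispatcher are illustrative assumptions, not part of the langchain-fireworks API:

```python
import json

# An OpenAI-style tool schema for a calculator (illustrative shape only)
CALCULATOR_TOOL = {
    "type": "function",
    "function": {
        "name": "calculator",
        "description": "Evaluate a basic arithmetic expression.",
        "parameters": {
            "type": "object",
            "properties": {"expression": {"type": "string"}},
            "required": ["expression"],
        },
    },
}


def run_tool_call(tool_call: dict) -> str:
    """Dispatch a model-emitted tool call to a local implementation."""
    if tool_call["name"] == "calculator":
        args = json.loads(tool_call["arguments"])
        # eval is acceptable for a toy calculator; never use on untrusted input
        return str(eval(args["expression"], {"__builtins__": {}}))
    raise ValueError(f"Unknown tool: {tool_call['name']}")


# A tool call shaped like what the model might emit
result = run_tool_call({"name": "calculator", "arguments": '{"expression": "2 * (3 + 4)"}'})
```

In the LangChain agent loop, the same dispatch happens automatically: the agent passes the tool schemas to the model and routes the returned tool calls to your Python functions.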

RAG: LangChain agent + Fireworks function calling model + MongoDB + Nomic AI embeddings

Please check out the cookbook here for an end-to-end flow.