# rag-codellama-fireworks
This template performs RAG on a codebase.
It uses codellama-34b hosted by Fireworks' LLM inference API.
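For orientation, here is a minimal sketch of what a RAG chain over a codebase can look like. This is illustrative only, not the template's actual `chain.py`: the source path, chunk sizes, embedding model, and the Fireworks model ID are assumptions.

```python
from pathlib import Path

from langchain.text_splitter import Language, RecursiveCharacterTextSplitter
from langchain_community.chat_models.fireworks import ChatFireworks
from langchain_community.embeddings import HuggingFaceEmbeddings
from langchain_community.vectorstores import Chroma
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.runnables import RunnablePassthrough

# Collect Python sources from a local checkout (placeholder path).
sources = [p.read_text() for p in Path("path/to/codebase").rglob("*.py")]

# Split on Python-aware boundaries (functions, classes) rather than raw characters.
splitter = RecursiveCharacterTextSplitter.from_language(
    language=Language.PYTHON, chunk_size=2000, chunk_overlap=200
)
docs = splitter.create_documents(sources)

# Index the chunks and expose them as a retriever.
vectorstore = Chroma.from_documents(docs, HuggingFaceEmbeddings())
retriever = vectorstore.as_retriever()

prompt = ChatPromptTemplate.from_template(
    "Answer the question using only this code context:\n\n{context}\n\nQuestion: {question}"
)

# codellama-34b as served by Fireworks (the exact model ID is an assumption).
model = ChatFireworks(model="accounts/fireworks/models/llama-v2-34b-code-instruct")

chain = (
    {"context": retriever, "question": RunnablePassthrough()}
    | prompt
    | model
    | StrOutputParser()
)
```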
## Environment Setup
Set the `FIREWORKS_API_KEY` environment variable to access the Fireworks models. You can obtain an API key from the Fireworks console.
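For example:

```shell
export FIREWORKS_API_KEY=<your-fireworks-api-key>
```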
## Usage
To use this package, you should first have the LangChain CLI installed:

```shell
pip install -U langchain-cli
```
To create a new LangChain project and install this as the only package, you can do:

```shell
langchain app new my-app --package rag-codellama-fireworks
```
If you want to add this to an existing project, you can just run:

```shell
langchain app add rag-codellama-fireworks
```
And add the following code to your `server.py` file:

```python
from rag_codellama_fireworks import chain as rag_codellama_fireworks_chain

add_routes(app, rag_codellama_fireworks_chain, path="/rag-codellama-fireworks")
```
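For context, a minimal `server.py` containing this snippet might look like the sketch below; the scaffold generated by `langchain app new` provides the FastAPI app and the `add_routes` import for you:

```python
from fastapi import FastAPI
from langserve import add_routes

from rag_codellama_fireworks import chain as rag_codellama_fireworks_chain

app = FastAPI()

add_routes(app, rag_codellama_fireworks_chain, path="/rag-codellama-fireworks")
```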
(Optional) Let's now configure LangSmith. LangSmith will help us trace, monitor and debug LangChain applications. LangSmith is currently in private beta; you can sign up [here](https://smith.langchain.com/). If you don't have access, you can skip this section.
```shell
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project>  # if not specified, defaults to "default"
```
If you are inside this directory, then you can spin up a LangServe instance directly by running:

```shell
langchain serve
```
This will start the FastAPI app with a server running locally at http://localhost:8000

We can see all templates at http://127.0.0.1:8000/docs
We can access the playground at http://127.0.0.1:8000/rag-codellama-fireworks/playground
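LangServe also exposes an `/invoke` endpoint for each route, so you can query the chain over plain HTTP; the question below is just an example:

```shell
curl -X POST http://127.0.0.1:8000/rag-codellama-fireworks/invoke \
  -H "Content-Type: application/json" \
  -d '{"input": "Where is the retriever constructed in this codebase?"}'
```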
We can access the template from code with:

```python
from langserve.client import RemoteRunnable

runnable = RemoteRunnable("http://localhost:8000/rag-codellama-fireworks")
```
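You can then invoke it like any other runnable; the question is illustrative, and the input should match whatever schema the chain expects (here, a plain question string):

```python
answer = runnable.invoke("Which class implements the RAG chain?")
print(answer)
```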