langchain/libs/partners/groq
ccurme 4b6b0a87b6
groq[patch]: Make stream robust to ToolMessage (#20417)
```python
from langchain.agents import AgentExecutor, create_tool_calling_agent, tool
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain_groq import ChatGroq


prompt = ChatPromptTemplate.from_messages(
    [
        ("system", "You are a helpful assistant"),
        ("human", "{input}"),
        MessagesPlaceholder("agent_scratchpad"),
    ]
)

model = ChatGroq(model_name="mixtral-8x7b-32768", temperature=0)

@tool
def magic_function(input: int) -> int:
    """Applies a magic function to an input."""
    return input + 2

tools = [magic_function]


agent = create_tool_calling_agent(model, tools, prompt)
agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True)

agent_executor.invoke({"input": "what is the value of magic_function(3)?"})
```
```
> Entering new AgentExecutor chain...

Invoking: `magic_function` with `{'input': 3}`


5The value of magic\_function(3) is 5.

> Finished chain.
{'input': 'what is the value of magic_function(3)?',
 'output': 'The value of magic\\_function(3) is 5.'}
```
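Since the patch is about streaming when the conversation already contains tool results, a minimal sketch that exercises this path directly is shown below. It reuses the `model` defined in the example above; the message contents, tool-call id, and tool output are illustrative, not taken from the patch itself.

```python
from langchain_core.messages import AIMessage, HumanMessage, ToolMessage

# Illustrative history: the model has already requested magic_function(3)
# and the tool returned "5"; we now stream the final answer.
messages = [
    HumanMessage("what is the value of magic_function(3)?"),
    AIMessage(
        "",
        tool_calls=[{"name": "magic_function", "args": {"input": 3}, "id": "call_1"}],
    ),
    ToolMessage("5", tool_call_id="call_1"),
]

for chunk in model.stream(messages):
    print(chunk.content, end="", flush=True)
```

Per the commit title, streaming with a `ToolMessage` in the history was not robust before this change; with the patch it yields the completion as usual.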
README.md

langchain-groq

Welcome to Groq! 🚀

At Groq, we've developed the world's first Language Processing Unit™, or LPU. The Groq LPU has a deterministic, single-core streaming architecture that sets the standard for GenAI inference speed with predictable and repeatable performance for any given workload.

Beyond the architecture, our software is designed to empower developers like you with the tools you need to create innovative, powerful AI applications. With Groq as your engine, you can:

  • Achieve uncompromised low latency and performance for real-time AI and HPC inference 🔥
  • Know the exact performance and compute time for any given workload 🔮
  • Take advantage of our cutting-edge technology to stay ahead of the competition 💪

Want more Groq? Check out our website for more resources and join our Discord community to connect with our developers!

Installation and Setup

Install the integration package:

pip install langchain-groq

Request an API key and set it as an environment variable:

export GROQ_API_KEY=gsk_...
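Alternatively, the key can be set from Python. A small sketch, assuming you want to be prompted interactively whenever `GROQ_API_KEY` is unset (ChatGroq reads this variable by default):

```python
import getpass
import os

# Prompt for the key only if it is not already in the environment.
if "GROQ_API_KEY" not in os.environ:
    os.environ["GROQ_API_KEY"] = getpass.getpass("Enter your Groq API key: ")
```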

Chat Model

See a usage example.
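As a quick illustration, a minimal sketch (the model name and prompt are just examples) looks like:

```python
from langchain_groq import ChatGroq

# Instantiate the chat model and send a single prompt.
llm = ChatGroq(model_name="mixtral-8x7b-32768", temperature=0)
response = llm.invoke("Explain what an LPU is in one sentence.")
print(response.content)
```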

Development

To develop the langchain-groq package, you'll need to follow these instructions:

Install dev dependencies

poetry install --with test,test_integration,lint,codespell

Build the package

poetry build

Run unit tests

Unit tests live in tests/unit_tests and SHOULD NOT require an internet connection or a valid API key. Run the unit tests with:

make tests
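For illustration only, a unit test in this style might look like the following; the test name and fake key are hypothetical, not taken from the repository:

```python
from langchain_groq import ChatGroq


def test_initialization() -> None:
    """Construct the client offline with a fake key; no network call is made."""
    llm = ChatGroq(model_name="mixtral-8x7b-32768", groq_api_key="fake-key")
    assert llm.model_name == "mixtral-8x7b-32768"
```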

Run integration tests

Integration tests live in tests/integration_tests and require a connection to the Groq API and a valid API key. Run them with:

make integration_tests
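A hypothetical integration test in the same style (the function name and prompt are illustrative) exercises the live API:

```python
from langchain_groq import ChatGroq


def test_invoke() -> None:
    """Requires GROQ_API_KEY and network access to the Groq API."""
    llm = ChatGroq(model_name="mixtral-8x7b-32768", temperature=0)
    result = llm.invoke("Hello!")
    assert isinstance(result.content, str)
```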

Lint & Format

Run additional tests and linters to ensure your code is up to standard.

make lint spell_check check_imports