2024-06-06 22:45:22 +00:00
aiosqlite>=0.19.0,<0.20
aleph-alpha-client>=2.15.0,<3
anthropic>=0.3.11,<0.4
arxiv>=1.4,<2
assemblyai>=0.17.0,<0.18
atlassian-python-api>=3.36.0,<4
azure-ai-documentintelligence>=1.0.0b1,<2
azure-identity>=1.15.0,<2
azure-search-documents==11.4.0
beautifulsoup4>=4,<5
bibtexparser>=1.4.0,<2
cassio>=0.1.6,<0.2
chardet>=5.1.0,<6
cloudpathlib>=0.18,<0.19
cloudpickle>=2.0.0
cohere>=4,<6
databricks-vectorsearch>=0.21,<0.22
datasets>=2.15.0,<3
dgml-utils>=0.3.0,<0.4
elasticsearch>=8.12.0,<9
esprima>=4.0.1,<5
faiss-cpu>=1,<2
feedparser>=6.0.10,<7
fireworks-ai>=0.9.0,<0.10
friendli-client>=1.2.4,<2
2024-06-28 20:35:38 +00:00
geopandas>=0.13.1
gitpython>=3.1.32,<4
2024-07-19 15:34:54 +00:00
gliner>=0.2.7
google-cloud-documentai>=2.20.1,<3
gql>=3.4.1,<4
gradientai>=1.4.0,<2
hdbcli>=2.19.21,<3
hologres-vector==0.0.6
html2text>=2020.1.16
httpx>=0.24.1,<0.25
httpx-sse>=0.4.0,<0.5
javelin-sdk>=0.1.8,<0.2
jinja2>=3,<4
jq>=1.4.1,<2
jsonschema>1
2024-07-19 16:25:07 +00:00
keybert>=0.8.5
feat(community): add tools support for litellm (#23906)
I used the following example to validate the behavior
```python
from langchain_core.prompts import ChatPromptTemplate
from langchain_anthropic import ChatAnthropic
from langchain_community.chat_models import ChatLiteLLM
from langchain_core.tools import tool
from langchain.agents import create_tool_calling_agent, AgentExecutor


@tool
def multiply(x: float, y: float) -> float:
    """Multiply 'x' times 'y'."""
    return x * y


@tool
def exponentiate(x: float, y: float) -> float:
    """Raise 'x' to the 'y'."""
    return x**y


@tool
def add(x: float, y: float) -> float:
    """Add 'x' and 'y'."""
    return x + y


prompt = ChatPromptTemplate.from_messages([
    ("system", "you're a helpful assistant"),
    ("human", "{input}"),
    ("placeholder", "{agent_scratchpad}"),
])

tools = [multiply, exponentiate, add]
llm = ChatAnthropic(model="claude-3-sonnet-20240229", temperature=0)
# llm = ChatLiteLLM(model="claude-3-sonnet-20240229", temperature=0)

agent = create_tool_calling_agent(llm, tools, prompt)
agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True)

agent_executor.invoke(
    {"input": "what's 3 plus 5 raised to the 2.743. also what's 17.24 - 918.1241"}
)
```
The `ChatAnthropic` version works:
```
> Entering new AgentExecutor chain...
Invoking: `exponentiate` with `{'x': 5, 'y': 2.743}`
responded: [{'text': 'To calculate 3 + 5^2.743, we can use the "exponentiate" and "add" tools:', 'type': 'text', 'index': 0}, {'id': 'toolu_01Gf54DFTkfLMJQX3TXffmxe', 'input': {}, 'name': 'exponentiate', 'type': 'tool_use', 'index': 1, 'partial_json': '{"x": 5, "y": 2.743}'}]
82.65606421491815
Invoking: `add` with `{'x': 3, 'y': 82.65606421491815}`
responded: [{'id': 'toolu_01XUq9S56GT3Yv2N1KmNmmWp', 'input': {}, 'name': 'add', 'type': 'tool_use', 'index': 0, 'partial_json': '{"x": 3, "y": 82.65606421491815}'}]
85.65606421491815
Invoking: `add` with `{'x': 17.24, 'y': -918.1241}`
responded: [{'text': '\n\nSo 3 + 5^2.743 = 85.66\n\nTo calculate 17.24 - 918.1241, we can use:', 'type': 'text', 'index': 0}, {'id': 'toolu_01BkXTwP7ec9JKYtZPy5JKjm', 'input': {}, 'name': 'add', 'type': 'tool_use', 'index': 1, 'partial_json': '{"x": 17.24, "y": -918.1241}'}]
-900.8841[{'text': '\n\nTherefore, 17.24 - 918.1241 = -900.88', 'type': 'text', 'index': 0}]
> Finished chain.
```
The `ChatLiteLLM` version doesn't. But with the changes in this PR, along with:
- https://github.com/langchain-ai/langchain/pull/23823
- https://github.com/BerriAI/litellm/pull/4554

the result is _almost_ the same:
```
> Entering new AgentExecutor chain...
Invoking: `exponentiate` with `{'x': 5, 'y': 2.743}`
responded: To calculate 3 + 5^2.743, we can use the "exponentiate" and "add" tools:
82.65606421491815
Invoking: `add` with `{'x': 3, 'y': 82.65606421491815}`
85.65606421491815
Invoking: `add` with `{'x': 17.24, 'y': -918.1241}`
responded:
So 3 + 5^2.743 = 85.66
To calculate 17.24 - 918.1241, we can use:
-900.8841
Therefore, 17.24 - 918.1241 = -900.88
> Finished chain.
```
If no one reviews your PR within a few days, please @-mention one of
baskaryan, efriis, eyurtsev, ccurme, vbarda, hwchase17.
Co-authored-by: ccurme <chester.curme@gmail.com>
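For intuition about what "tools support" requires of a chat-model integration, here is a rough, stdlib-only sketch of the central conversion: turning a plain Python function into the JSON-schema tool description that providers expect. The helper name and the exact schema shape are assumptions for illustration, not LangChain's or LiteLLM's actual internals.

```python
import inspect

# Map Python annotations to JSON-schema type names (simplified).
_JSON_TYPES = {float: "number", int: "integer", str: "string", bool: "boolean"}


def function_to_tool_schema(fn):
    """Build an OpenAI-style tool spec from a function's signature."""
    sig = inspect.signature(fn)
    properties = {
        name: {"type": _JSON_TYPES.get(param.annotation, "string")}
        for name, param in sig.parameters.items()
    }
    return {
        "type": "function",
        "function": {
            "name": fn.__name__,
            "description": (fn.__doc__ or "").strip(),
            "parameters": {
                "type": "object",
                "properties": properties,
                "required": list(properties),
            },
        },
    }


def multiply(x: float, y: float) -> float:
    """Multiply 'x' times 'y'."""
    return x * y


schema = function_to_tool_schema(multiply)
print(schema["function"]["name"])  # multiply
print(schema["function"]["parameters"]["properties"]["x"])  # {'type': 'number'}
```

Both LiteLLM and Anthropic accept tool specs of roughly this shape; the per-provider translation of the model's tool-call responses back into `AgentExecutor` steps is the part this PR fixes.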
2024-07-30 15:39:34 +00:00
litellm>=1.30,<=1.39.5
lxml>=4.9.3,<6.0
markdownify>=0.11.6,<0.12
motor>=3.3.1,<4
msal>=1.25.0,<2
mwparserfromhell>=0.6.4,<0.7
mwxml>=0.3.3,<0.4
newspaper3k>=0.2.8,<0.3
numexpr>=2.8.6,<3
nvidia-riva-client>=2.14.0,<3
2024-06-24 18:48:23 +00:00
oci>=2.128.0,<3
openai<2
openapi-pydantic>=0.3.2,<0.4
oracle-ads>=2.9.1,<3
oracledb>=2.2.0,<3
pandas>=2.0.1,<3
2024-07-08 20:55:19 +00:00
pdfminer-six>=20221105,<20240706
pgvector>=0.1.6,<0.2
praw>=7.7.1,<8
premai>=0.3.25,<0.4
psychicapi>=0.8.0,<0.9
py-trello>=0.19.0,<0.20
pyjwt>=2.8.0,<3
pymupdf>=1.22.3,<2
2024-07-17 20:47:09 +00:00
pypdf>=3.4.0,<5
pypdfium2>=4.10.0,<5
pyspark>=3.4.0,<4
rank-bm25>=0.2.2,<0.3
rapidfuzz>=3.1.1,<4
rapidocr-onnxruntime>=1.3.2,<2
rdflib==7.0.0
requests-toolbelt>=1.0.0,<2
rspace_client>=2.5.0,<3
scikit-learn>=1.2.2,<2
2024-08-23 14:41:39 +00:00
simsimd>=5.0.0,<6
sqlite-vss>=0.1.2,<0.2
2024-07-15 21:46:58 +00:00
sseclient-py>=1.8.0,<2
streamlit>=1.18.0,<2
sympy>=1.12,<2
telethon>=1.28.5,<2
tidb-vector>=0.0.3,<1.0.0
timescale-vector==0.0.1
tqdm>=4.48.0
tree-sitter>=0.20.2,<0.21
tree-sitter-languages>=1.8.0,<2
2024-06-07 21:02:06 +00:00
upstash-redis>=1.1.0,<2
upstash-ratelimit>=1.1.0,<2
2024-07-26 02:13:04 +00:00
vdms>=0.0.20
xata>=1.0.0a7,<2
xmltodict>=0.13.0,<0.14
2024-07-24 13:52:15 +00:00
nanopq==0.2.1
community[patch]: support bind_tools for ChatMlflow (#24547)
Thank you for contributing to LangChain!
- **Description:** support the `ChatMlflow.bind_tools` method.
- Tested in Databricks:
  <img width="836" alt="image"
  src="https://github.com/user-attachments/assets/fa28ef50-0110-4698-8eda-4faf6f0b9ef8">
- [x] **Add tests and docs**
- [x] **Lint and test**: `make format`, `make lint`, and `make test` run from the root of the modified package(s).
---------
Signed-off-by: Serena Ruan <serena.rxy@gmail.com>
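The `bind_tools` pattern itself is small: binding returns a wrapper that forwards the tool specs on every invocation, so callers never pass them explicitly. The classes below are hypothetical stand-ins to show the shape of the pattern, not the real `ChatMlflow` implementation.

```python
class FakeChatModel:
    """Stand-in for a chat model such as ChatMlflow (hypothetical)."""

    def invoke(self, prompt, **kwargs):
        # A real model would call the MLflow/Databricks endpoint here;
        # we just echo back what would be sent.
        return {"prompt": prompt, "tools": kwargs.get("tools", [])}

    def bind_tools(self, tools):
        # Return a new runnable with the tool specs baked in.
        return BoundModel(self, tools)


class BoundModel:
    """Wrapper that injects bound tools into every call."""

    def __init__(self, model, tools):
        self.model = model
        self.tools = tools

    def invoke(self, prompt, **kwargs):
        # Explicitly passed tools win; otherwise use the bound ones.
        kwargs.setdefault("tools", self.tools)
        return self.model.invoke(prompt, **kwargs)


llm = FakeChatModel().bind_tools([{"name": "multiply"}])
result = llm.invoke("what is 3 * 4?")
print(result["tools"])  # [{'name': 'multiply'}]
```

In LangChain proper this is what `Runnable.bind`-style composition provides; the PR's work is translating the bound specs into the request format the MLflow deployments endpoint expects.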
2024-08-01 15:43:07 +00:00
mlflow[genai]>=2.14.0
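The pins above combine comma-separated comparison clauses that a resolver ANDs together (e.g. `litellm>=1.30,<=1.39.5`). A deliberately naive sketch of that check, assuming plain dotted-numeric versions rather than full PEP 440 (no pre-releases, epochs, or wildcards):

```python
import operator
import re

# Comparison operators a specifier clause may use (simplified subset).
_OPS = {">=": operator.ge, "<=": operator.le, ">": operator.gt,
        "<": operator.lt, "==": operator.eq}


def _key(version):
    # Compare dotted numeric versions as integer tuples: "1.30" -> (1, 30).
    return tuple(int(p) for p in version.split("."))


def satisfies(version, specifier):
    """True if `version` meets every comma-separated clause."""
    for clause in specifier.split(","):
        op, bound = re.match(r"(>=|<=|==|>|<)(.+)", clause.strip()).groups()
        if not _OPS[op](_key(version), _key(bound)):
            return False
    return True


print(satisfies("1.39.5", ">=1.30,<=1.39.5"))  # True
print(satisfies("1.40.0", ">=1.30,<=1.39.5"))  # False
```

Real installers use the full PEP 440 rules (which is why pre-release pins like `xata>=1.0.0a7,<2` work); this sketch only shows why an upper bound such as `<=1.39.5` was chosen to fence off untested litellm releases.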