langchain/libs/community/tests/integration_tests/chat_models/test_konko.py


add konko chat_model files (#10267)

_Thank you to the LangChain team for the great project and in advance for your review. Let me know if I can provide any other additional information or do things differently in the future to make your lives easier 🙏_ @hwchase17 please let me know if you're not the right person to review 😄

This PR enables LangChain to access the Konko API via the chat_models API wrapper. Konko API is a fully managed API designed to help application developers:

1. Select the right LLM(s) for their application
2. Prototype with various open-source and proprietary LLMs
3. Move to production in line with their security, privacy, throughput, and latency SLAs, without infrastructure set-up or administration, using Konko AI's SOC 2 compliant infrastructure

_Note on integration tests:_ We added 14 integration tests. They will all fail unless you export the right API keys. 13 will pass with a KONKO_API_KEY provided, and the other one will pass with an OPENAI_API_KEY provided. When both are provided, all 14 integration tests pass. If you would like to test this yourself, please let me know and I can provide some temporary keys.

### Installation and Setup

1. **First you'll need an API key**
2. **Install Konko AI's Python SDK** in a Python 3.8+ environment: `pip install konko`
3. **Set API Keys**

**Option 1: Set Environment Variables**

You can set environment variables for:

1. KONKO_API_KEY (Required)
2. OPENAI_API_KEY (Optional)

In your current shell session, use the export command:

`export KONKO_API_KEY={your_KONKO_API_KEY_here}`
`export OPENAI_API_KEY={your_OPENAI_API_KEY_here} #Optional`

Alternatively, you can add the above lines directly to your shell startup script (such as .bashrc or .bash_profile for Bash and .zshrc for Zsh) so they are set automatically every time a new shell session starts.

**Option 2: Set API Keys Programmatically**

If you prefer to set your API keys directly within your Python script or Jupyter notebook, you can use the following commands:

```python
konko.set_api_key('your_KONKO_API_KEY_here')
konko.set_openai_api_key('your_OPENAI_API_KEY_here')  # Optional
```

### Calling a model

Find a model on the [Konko Introduction page](https://docs.konko.ai/docs#available-models). For example, for this [Llama 2 model](https://docs.konko.ai/docs/meta-llama-2-13b-chat) the model id would be `"meta-llama/Llama-2-13b-chat-hf"`. Another way to find the list of models running on the Konko instance is through this [endpoint](https://docs.konko.ai/reference/listmodels).

From here, we can initialize our model:

```python
chat_instance = ChatKonko(max_tokens=10, model='meta-llama/Llama-2-13b-chat-hf')
```

And run it:

```python
msg = HumanMessage(content="Hi")
chat_response = chat_instance([msg])
```
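
Putting the setup and the calling pattern together, the snippet below is a minimal end-to-end sketch. It assumes a valid `KONKO_API_KEY` (the placeholder value is hypothetical) and that `meta-llama/Llama-2-13b-chat-hf` is served on your Konko instance; `get_available_models` and `stream` are the same calls exercised by the integration tests below.

```python
import os

from langchain_community.chat_models.konko import ChatKonko
from langchain_core.messages import HumanMessage

# Hypothetical placeholder; substitute a real key.
os.environ["KONKO_API_KEY"] = "your_KONKO_API_KEY_here"

# Discover which model ids the Konko instance currently serves.
available = ChatKonko(max_tokens=10).get_available_models()
print(sorted(available))

# Initialize a chat model and send a single message.
chat = ChatKonko(max_tokens=10, model="meta-llama/Llama-2-13b-chat-hf")
response = chat([HumanMessage(content="Hi")])
print(response.content)

# Streaming yields the reply chunk by chunk, as in the tests below.
for chunk in chat.stream("Just a test"):
    print(chunk.content, end="")
```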
"""Evaluate ChatKonko Interface."""
from typing import Any, cast

import pytest
from langchain_core.callbacks import CallbackManager
from langchain_core.messages import BaseMessage, HumanMessage, SystemMessage
from langchain_core.outputs import ChatGeneration, ChatResult, LLMResult
from langchain_core.pydantic_v1 import SecretStr
from pytest import CaptureFixture, MonkeyPatch

from langchain_community.chat_models.konko import ChatKonko
from tests.unit_tests.callbacks.fake_callback_handler import FakeCallbackHandler


def test_konko_key_masked_when_passed_from_env(
monkeypatch: MonkeyPatch, capsys: CaptureFixture
) -> None:
"""Test initialization with an API key provided via an env variable"""
monkeypatch.setenv("OPENAI_API_KEY", "test-openai-key")
monkeypatch.setenv("KONKO_API_KEY", "test-konko-key")
chat = ChatKonko()
print(chat.openai_api_key, end="") # noqa: T201
captured = capsys.readouterr()
assert captured.out == "**********"
print(chat.konko_api_key, end="") # noqa: T201
captured = capsys.readouterr()
assert captured.out == "**********"


def test_konko_key_masked_when_passed_via_constructor(
capsys: CaptureFixture,
) -> None:
"""Test initialization with an API key provided via the initializer"""
chat = ChatKonko(openai_api_key="test-openai-key", konko_api_key="test-konko-key")
print(chat.konko_api_key, end="") # noqa: T201
captured = capsys.readouterr()
assert captured.out == "**********"
print(chat.konko_secret_key, end="") # type: ignore[attr-defined] # noqa: T201
captured = capsys.readouterr()
assert captured.out == "**********"


def test_uses_actual_secret_value_from_secret_str() -> None:
"""Test that actual secret is retrieved using `.get_secret_value()`."""
chat = ChatKonko(openai_api_key="test-openai-key", konko_api_key="test-konko-key")
assert cast(SecretStr, chat.konko_api_key).get_secret_value() == "test-openai-key"
assert cast(SecretStr, chat.konko_secret_key).get_secret_value() == "test-konko-key" # type: ignore[attr-defined]


def test_konko_chat_test() -> None:
"""Evaluate basic ChatKonko functionality."""
chat_instance = ChatKonko(max_tokens=10)
msg = HumanMessage(content="Hi")
chat_response = chat_instance([msg])
assert isinstance(chat_response, BaseMessage)
assert isinstance(chat_response.content, str)


def test_konko_chat_test_openai() -> None:
"""Evaluate basic ChatKonko functionality."""
chat_instance = ChatKonko(max_tokens=10, model="meta-llama/llama-2-70b-chat")
msg = HumanMessage(content="Hi")
chat_response = chat_instance([msg])
assert isinstance(chat_response, BaseMessage)
assert isinstance(chat_response.content, str)


def test_konko_model_test() -> None:
"""Check how ChatKonko manages model_name."""
chat_instance = ChatKonko(model="alpha")
assert chat_instance.model == "alpha"
chat_instance = ChatKonko(model="beta")
assert chat_instance.model == "beta"


def test_konko_available_model_test() -> None:
    """Check that ChatKonko can list its available models."""
chat_instance = ChatKonko(max_tokens=10, n=2)
res = chat_instance.get_available_models()
assert isinstance(res, set)


def test_konko_system_msg_test() -> None:
"""Evaluate ChatKonko's handling of system messages."""
chat_instance = ChatKonko(max_tokens=10)
sys_msg = SystemMessage(content="Initiate user chat.")
user_msg = HumanMessage(content="Hi there")
chat_response = chat_instance([sys_msg, user_msg])
assert isinstance(chat_response, BaseMessage)
assert isinstance(chat_response.content, str)


def test_konko_generation_test() -> None:
"""Check ChatKonko's generation ability."""
chat_instance = ChatKonko(max_tokens=10, n=2)
msg = HumanMessage(content="Hi")
gen_response = chat_instance.generate([[msg], [msg]])
assert isinstance(gen_response, LLMResult)
assert len(gen_response.generations) == 2
for gen_list in gen_response.generations:
assert len(gen_list) == 2
for gen in gen_list:
assert isinstance(gen, ChatGeneration)
assert isinstance(gen.text, str)
assert gen.text == gen.message.content


def test_konko_multiple_outputs_test() -> None:
"""Test multiple completions with ChatKonko."""
chat_instance = ChatKonko(max_tokens=10, n=5)
msg = HumanMessage(content="Hi")
gen_response = chat_instance._generate([msg])
assert isinstance(gen_response, ChatResult)
assert len(gen_response.generations) == 5
for gen in gen_response.generations:
assert isinstance(gen.message, BaseMessage)
assert isinstance(gen.message.content, str)


def test_konko_streaming_callback_test() -> None:
"""Evaluate streaming's token callback functionality."""
callback_instance = FakeCallbackHandler()
callback_mgr = CallbackManager([callback_instance])
chat_instance = ChatKonko(
max_tokens=10,
streaming=True,
temperature=0,
callback_manager=callback_mgr,
verbose=True,
)
msg = HumanMessage(content="Hi")
chat_response = chat_instance([msg])
assert callback_instance.llm_streams > 0
assert isinstance(chat_response, BaseMessage)


def test_konko_streaming_info_test() -> None:
"""Ensure generation details are retained during streaming."""
class TestCallback(FakeCallbackHandler):
data_store: dict = {}

        def on_llm_end(self, *args: Any, **kwargs: Any) -> Any:
self.data_store["generation"] = args[0]

    callback_instance = TestCallback()
callback_mgr = CallbackManager([callback_instance])
chat_instance = ChatKonko(
max_tokens=2,
temperature=0,
callback_manager=callback_mgr,
)
list(chat_instance.stream("hey"))
gen_data = callback_instance.data_store["generation"]
assert gen_data.generations[0][0].text == " Hey"


def test_konko_llm_model_name_test() -> None:
"""Check if llm_output has model info."""
chat_instance = ChatKonko(max_tokens=10)
msg = HumanMessage(content="Hi")
llm_data = chat_instance.generate([[msg]])
assert llm_data.llm_output is not None
assert llm_data.llm_output["model_name"] == chat_instance.model


def test_konko_streaming_model_name_test() -> None:
"""Check model info during streaming."""
chat_instance = ChatKonko(max_tokens=10, streaming=True)
msg = HumanMessage(content="Hi")
llm_data = chat_instance.generate([[msg]])
assert llm_data.llm_output is not None
assert llm_data.llm_output["model_name"] == chat_instance.model


def test_konko_streaming_param_validation_test() -> None:
    """Ensure an invalid streaming parameter combination raises a ValueError."""
with pytest.raises(ValueError):
ChatKonko(
max_tokens=10,
streaming=True,
temperature=0,
n=5,
)


def test_konko_additional_args_test() -> None:
"""Evaluate extra arguments for ChatKonko."""
chat_instance = ChatKonko(extra=3, max_tokens=10)
assert chat_instance.max_tokens == 10
assert chat_instance.model_kwargs == {"extra": 3}
chat_instance = ChatKonko(extra=3, model_kwargs={"addition": 2})
assert chat_instance.model_kwargs == {"extra": 3, "addition": 2}
with pytest.raises(ValueError):
ChatKonko(extra=3, model_kwargs={"extra": 2})
with pytest.raises(ValueError):
ChatKonko(model_kwargs={"temperature": 0.2})
with pytest.raises(ValueError):
ChatKonko(model_kwargs={"model": "gpt-3.5-turbo-instruct"})


def test_konko_token_streaming_test() -> None:
"""Check token streaming for ChatKonko."""
chat_instance = ChatKonko(max_tokens=10)
for token in chat_instance.stream("Just a test"):
assert isinstance(token.content, str)
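
To reproduce the integration run described above, one option (a sketch; the path follows the repository layout shown at the top of this page) is to drive pytest from Python with both keys set:

```python
import os

import pytest

# Hypothetical placeholder values; 13 tests need KONKO_API_KEY and one
# needs OPENAI_API_KEY, so both must be real for all 14 to pass.
os.environ["KONKO_API_KEY"] = "your_KONKO_API_KEY_here"
os.environ["OPENAI_API_KEY"] = "your_OPENAI_API_KEY_here"

pytest.main(["-v", "libs/community/tests/integration_tests/chat_models/test_konko.py"])
```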