mirror of
https://github.com/hwchase17/langchain
synced 2024-10-29 17:07:25 +00:00
c6b27b3692
_Thank you to the LangChain team for the great project and in advance for your review. Let me know if I can provide any other additional information or do things differently in the future to make your lives easier 🙏_

@hwchase17 please let me know if you're not the right person to review 😄

This PR enables LangChain to access the Konko API via the chat_models API wrapper. Konko API is a fully managed API designed to help application developers:

1. Select the right LLM(s) for their application
2. Prototype with various open-source and proprietary LLMs
3. Move to production in line with their security, privacy, throughput, and latency SLAs without infrastructure set-up or administration, using Konko AI's SOC 2 compliant infrastructure

_Note on integration tests:_ We added 14 integration tests. They will all fail unless you export the right API keys. 13 will pass with a KONKO_API_KEY provided, and the other one will pass with an OPENAI_API_KEY provided. When both are provided, all 14 integration tests pass. If you would like to test this yourself, please let me know and I can provide some temporary keys.

### Installation and Setup

1. **First you'll need an API key**
2. **Install Konko AI's Python SDK**
   1. Enable a Python3.8+ environment
   2. `pip install konko`
3. **Set API Keys**

**Option 1:** Set Environment Variables

You can set environment variables for
1. KONKO_API_KEY (Required)
2. OPENAI_API_KEY (Optional)

In your current shell session, use the export command:

`export KONKO_API_KEY={your_KONKO_API_KEY_here}`
`export OPENAI_API_KEY={your_OPENAI_API_KEY_here} # Optional`

Alternatively, you can add the above lines directly to your shell startup script (such as .bashrc or .bash_profile for Bash shell and .zshrc for Zsh shell) to have them set automatically every time a new shell session starts.

**Option 2:** Set API Keys Programmatically

If you prefer to set your API keys directly within your Python script or Jupyter notebook, you can use the following commands:

```python
konko.set_api_key('your_KONKO_API_KEY_here')
konko.set_openai_api_key('your_OPENAI_API_KEY_here')  # Optional
```

### Calling a model

Find a model on the [Konko Introduction page](https://docs.konko.ai/docs#available-models).

For example, for this [Llama 2 model](https://docs.konko.ai/docs/meta-llama-2-13b-chat), the model id would be: `"meta-llama/Llama-2-13b-chat-hf"`

Another way to find the list of models running on the Konko instance is through this [endpoint](https://docs.konko.ai/reference/listmodels).

From here, we can initialize our model:

```python
chat_instance = ChatKonko(max_tokens=10, model='meta-llama/Llama-2-13b-chat-hf')
```

And run it:

```python
msg = HumanMessage(content="Hi")
chat_response = chat_instance([msg])
```
81 lines
2.4 KiB
Plaintext
# Konko

This page covers how to run models on Konko within LangChain.

Konko API is a fully managed API designed to help application developers:

1. Select the right LLM(s) for their application
2. Prototype with various open-source and proprietary LLMs
3. Move to production in line with their security, privacy, throughput, and latency SLAs without infrastructure set-up or administration, using Konko AI's SOC 2 compliant infrastructure

## Installation and Setup

### First you'll need an API key

You can request it by messaging [support@konko.ai](mailto:support@konko.ai).

### Install Konko AI's Python SDK

#### 1. Enable a Python3.8+ environment

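As a quick sanity check for this step (assuming `python3` is on your PATH), you can confirm your interpreter meets the 3.8+ requirement; creating a virtual environment with your preferred tool is optional:

```shell
# Verify the active interpreter is Python 3.8 or newer
python3 -c 'import sys; assert sys.version_info >= (3, 8), sys.version'

# Optionally isolate the install in a virtual environment, e.g.:
#   python3 -m venv konko-env && . konko-env/bin/activate
```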
#### 2. Set API Keys

##### Option 1: Set Environment Variables

1. You can set environment variables for
   1. KONKO_API_KEY (Required)
   2. OPENAI_API_KEY (Optional)
2. In your current shell session, use the export command:

```shell
export KONKO_API_KEY={your_KONKO_API_KEY_here}
export OPENAI_API_KEY={your_OPENAI_API_KEY_here} # Optional
```

Alternatively, you can add the above lines directly to your shell startup script (such as .bashrc or .bash_profile for Bash shell and .zshrc for Zsh shell) to have them set automatically every time a new shell session starts.
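To make the startup-script approach concrete, a sketch using a temporary file as a stand-in for your real startup script (so nothing here touches your actual shell config; the placeholders are not real keys):

```shell
# Stand-in for ~/.bashrc, ~/.bash_profile, or ~/.zshrc
STARTUP_FILE="$(mktemp)"

# Append the same export lines shown above
echo 'export KONKO_API_KEY={your_KONKO_API_KEY_here}' >> "$STARTUP_FILE"
echo 'export OPENAI_API_KEY={your_OPENAI_API_KEY_here} # Optional' >> "$STARTUP_FILE"

# New shells that source this file will then have both variables set
cat "$STARTUP_FILE"
```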

##### Option 2: Set API Keys Programmatically

If you prefer to set your API keys directly within your Python script or Jupyter notebook, you can use the following commands:

```python
konko.set_api_key('your_KONKO_API_KEY_here')
konko.set_openai_api_key('your_OPENAI_API_KEY_here')  # Optional
```

#### 3. Install the SDK

```shell
pip install konko
```

#### 4. Verify Installation & Authentication

```python
# Confirm konko has installed successfully
import konko

# Confirm API keys from Konko and OpenAI are set properly
konko.Model.list()
```

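Before running the verification call above, it can help to confirm the expected environment variables are actually visible to Python. A minimal stdlib-only pre-flight check (no Konko calls involved; variable names as used in this guide):

```python
import os

# KONKO_API_KEY is required; OPENAI_API_KEY is optional
missing = [key for key in ["KONKO_API_KEY"] if not os.getenv(key)]
if missing:
    print(f"Missing required keys: {missing}")
else:
    print("All required keys are set")
```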
## Calling a model

Find a model on the [Konko Introduction page](https://docs.konko.ai/docs#available-models).

For example, for this [Llama 2 model](https://docs.konko.ai/docs/meta-llama-2-13b-chat), the model id would be: `"meta-llama/Llama-2-13b-chat-hf"`

Another way to find the list of models running on the Konko instance is through this [endpoint](https://docs.konko.ai/reference/listmodels).

From here, we can initialize our model:

```python
from langchain.chat_models import ChatKonko

chat_instance = ChatKonko(max_tokens=10, model='meta-llama/Llama-2-13b-chat-hf')
```

And run it:

```python
from langchain.schema import HumanMessage

msg = HumanMessage(content="Hi")
chat_response = chat_instance([msg])
```