# Konko
This page covers how to run models on Konko within LangChain.
Konko API is a fully managed API designed to help application developers:
- Select the right LLM(s) for their application
- Prototype with various open-source and proprietary LLMs
- Move to production in line with their security, privacy, throughput, and latency SLAs, without infrastructure set-up or administration, using Konko AI's SOC 2 compliant infrastructure

## Installation and Setup
### First you'll need an API key

You can request one by messaging [support@konko.ai](mailto:support@konko.ai).

### Install Konko AI's Python SDK
#### 1. Enable a Python3.8+ environment
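If you want to verify the interpreter version programmatically before installing anything, a minimal sketch using only the standard library (the helper name `python_version_ok` is ours, not part of any SDK):

```python
import sys

def python_version_ok(version_info=sys.version_info) -> bool:
    """The Konko SDK targets Python 3.8+."""
    return version_info >= (3, 8)

# Fail fast on older interpreters before attempting installation.
if not python_version_ok():
    raise RuntimeError("Python 3.8+ is required for the Konko SDK")
```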
#### 2. Set API Keys
##### Option 1: Set Environment Variables
1. You can set environment variables for
   1. KONKO_API_KEY (Required)
   2. OPENAI_API_KEY (Optional)
2. In your current shell session, use the export command:

```shell
export KONKO_API_KEY={your_KONKO_API_KEY_here}
export OPENAI_API_KEY={your_OPENAI_API_KEY_here} # Optional
```
Alternatively, you can add the above lines directly to your shell startup script (such as .bashrc or .bash_profile for Bash shell and .zshrc for Zsh shell) to have them set automatically every time a new shell session starts.
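However you set them, it can help to confirm the variables are actually visible to Python before making any API calls. A minimal sketch (the helper name `konko_keys_present` is ours, not part of the SDK):

```python
import os

def konko_keys_present() -> dict:
    # KONKO_API_KEY is required; OPENAI_API_KEY is optional.
    return {
        "KONKO_API_KEY": bool(os.environ.get("KONKO_API_KEY")),
        "OPENAI_API_KEY": bool(os.environ.get("OPENAI_API_KEY")),
    }

print(konko_keys_present())
```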
##### Option 2: Set API Keys Programmatically
If you prefer to set your API keys directly within your Python script or Jupyter notebook, you can use the following commands:
```python
import konko

konko.set_api_key('your_KONKO_API_KEY_here')
konko.set_openai_api_key('your_OPENAI_API_KEY_here')  # Optional
```

#### 3. Install the SDK

```shell
pip install konko
```
#### 4. Verify Installation & Authentication
```python
# Confirm konko has installed successfully
import konko

# Confirm API keys from Konko and OpenAI are set properly
konko.Model.list()
```
## Calling a model
Find a model on the [Konko Introduction page](https://docs.konko.ai/docs#available-models).
For example, the model id for this [Llama 2 model](https://docs.konko.ai/docs/meta-llama-2-13b-chat) would be `"meta-llama/Llama-2-13b-chat-hf"`.
Another way to find the list of models running on the Konko instance is through this [endpoint](https://docs.konko.ai/reference/listmodels).
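If you have already fetched the model list (for example via `konko.Model.list()` or the endpoint above), extracting the ids is straightforward. A sketch, assuming the response follows the OpenAI-style list format with a `data` field — the sample below is illustrative, not captured API output:

```python
# Illustrative response in the OpenAI-style list format (an assumption about
# the endpoint's shape, not a real API response).
sample_response = {
    "data": [
        {"id": "meta-llama/Llama-2-13b-chat-hf", "object": "model"},
    ]
}

def model_ids(response: dict) -> list:
    # Pull just the id field out of each model entry.
    return [m["id"] for m in response.get("data", [])]

print(model_ids(sample_response))  # ['meta-llama/Llama-2-13b-chat-hf']
```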
From here, we can initialize our model:
```python
from langchain.chat_models import ChatKonko

chat_instance = ChatKonko(max_tokens=10, model='meta-llama/Llama-2-13b-chat-hf')
```
And run it:
```python
from langchain.schema import HumanMessage

msg = HumanMessage(content="Hi")
chat_response = chat_instance([msg])
```