## Description:
I've integrated CTranslate2 with LangChain. CTranlate2 is a recently
popular library for efficient inference with Transformer models that
compares favorably to alternatives such as HF Text Generation Inference
and vLLM in
[benchmarks](https://hamel.dev/notes/llm/inference/03_inference.html).
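As a quick illustration, here's a minimal sketch of how the integration can be used, assuming a model already converted to the CTranslate2 format (the paths and model names below are placeholders):
```python
from langchain.llms import CTranslate2

# Assumes a model already converted with, e.g.:
#   ct2-transformers-converter --model meta-llama/Llama-2-7b-hf \
#       --quantization bfloat16 --output_dir ./llama-2-7b-ct2
llm = CTranslate2(
    model_path="./llama-2-7b-ct2",              # converted model directory
    tokenizer_name="meta-llama/Llama-2-7b-hf",  # matching HF tokenizer
)
print(llm("He presented me with plausible evidence for the existence of unicorns: "))
```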
Hi @baskaryan,
I've made updates to LLMonitorCallbackHandler to address a few bugs reported by users.
These changes don't alter the fundamental behavior of the callback
handler.
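For context, attaching the handler is unchanged; a minimal sketch (the `app_id` value is a placeholder):
```python
from langchain.callbacks import LLMonitorCallbackHandler
from langchain.llms import OpenAI

handler = LLMonitorCallbackHandler(app_id="your-app-id")  # placeholder app id
llm = OpenAI(callbacks=[handler])  # events are reported to LLMonitor as before
llm("Tell me a joke")
```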
Thank you!
---------
Co-authored-by: vincelwt <vince@lyser.io>
_Thank you to the LangChain team for the great project and in advance
for your review. Let me know if I can provide any additional
information or do things differently in the future to make your lives
easier 🙏 _
@hwchase17 please let me know if you're not the right person to review 😄
This PR enables LangChain to access the Konko API via the chat_models
API wrapper.
Konko API is a fully managed API designed to help application
developers:
1. Select the right LLM(s) for their application
2. Prototype with various open-source and proprietary LLMs
3. Move to production in line with their security, privacy, throughput,
and latency SLAs without infrastructure set-up or administration, using
Konko AI's SOC 2 compliant infrastructure
_Note on integration tests:_
We added 14 integration tests. They will all fail unless you export the
right API keys. 13 will pass with a KONKO_API_KEY provided, and the other
one will pass with an OPENAI_API_KEY provided. When both are provided,
all 14 integration tests pass. If you would like to test this yourself,
please let me know and I can provide some temporary keys.
### Installation and Setup
1. **First you'll need an API key**
2. **Install Konko AI's Python SDK**
1. Enable a Python 3.8+ environment
`pip install konko`
3. **Set API Keys**
**Option 1:** Set Environment Variables
You can set environment variables for
1. KONKO_API_KEY (Required)
2. OPENAI_API_KEY (Optional)
In your current shell session, use the export command:
`export KONKO_API_KEY={your_KONKO_API_KEY_here}`
`export OPENAI_API_KEY={your_OPENAI_API_KEY_here} #Optional`
Alternatively, you can add the above lines directly to your shell
startup script (such as .bashrc or .bash_profile for Bash shell and
.zshrc for Zsh shell) to have them set automatically every time a new
shell session starts.
**Option 2:** Set API Keys Programmatically
If you prefer to set your API keys directly within your Python script or
Jupyter notebook, you can use the following commands:
```python
import konko

konko.set_api_key('your_KONKO_API_KEY_here')
konko.set_openai_api_key('your_OPENAI_API_KEY_here') # Optional
```
### Calling a model
Find a model on the [Konko Introduction page](https://docs.konko.ai/docs#available-models).
For example, for this [Llama 2 model](https://docs.konko.ai/docs/meta-llama-2-13b-chat),
the model id would be: `"meta-llama/Llama-2-13b-chat-hf"`
Another way to find the list of models running on the Konko instance is
through this [endpoint](https://docs.konko.ai/reference/listmodels).
From here, we can initialize our model:
```python
from langchain.chat_models import ChatKonko

chat_instance = ChatKonko(max_tokens=10, model="meta-llama/Llama-2-13b-chat-hf")
```
And run it:
```python
from langchain.schema import HumanMessage

msg = HumanMessage(content="Hi")
chat_response = chat_instance([msg])
```
Changes in:
- `create_sql_agent` function, so that users can easily add custom tools
to complement the toolkit (a minimal sketch follows below)
- the **SQL use case** notebook, updated to showcase two examples of extra
tools
The motivation for these changes is the possibility of including
domain-expert knowledge in the agent, which improves accuracy and
reduces time/tokens.
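A minimal sketch of the new usage, assuming the added parameter is named `extra_tools` and using a hypothetical domain-expert lookup tool:
```python
from langchain.agents import create_sql_agent
from langchain.agents.agent_toolkits import SQLDatabaseToolkit
from langchain.chat_models import ChatOpenAI
from langchain.sql_database import SQLDatabase
from langchain.tools import Tool

db = SQLDatabase.from_uri("sqlite:///example.db")
llm = ChatOpenAI(temperature=0)

# Hypothetical domain-expert tool that explains business jargon to the agent
glossary_tool = Tool(
    name="business_glossary",
    func=lambda term: f"Definition of {term}: ...",  # stub lookup
    description="Look up domain-specific business terms before writing SQL.",
)

agent = create_sql_agent(
    llm=llm,
    toolkit=SQLDatabaseToolkit(db=db, llm=llm),
    extra_tools=[glossary_tool],  # assumed parameter name introduced by this PR
    verbose=True,
)
```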
---------
Co-authored-by: Manuel Soria <manuel.soria@greyscaleai.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
- Implemented the MilvusTranslator for self-querying using the Milvus vector
store (a usage sketch follows below)
- Added unit tests for its functionality
- Documented Milvus self-querying
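A sketch of self-querying over Milvus, assuming a local Milvus server and example metadata fields:
```python
from langchain.chains.query_constructor.base import AttributeInfo
from langchain.embeddings import OpenAIEmbeddings
from langchain.llms import OpenAI
from langchain.retrievers.self_query.base import SelfQueryRetriever
from langchain.schema import Document
from langchain.vectorstores import Milvus

docs = [
    Document(
        page_content="A band of explorers travel through a wormhole in space",
        metadata={"year": 2014, "genre": "science fiction"},
    ),
]
vectorstore = Milvus.from_documents(
    docs,
    OpenAIEmbeddings(),
    connection_args={"host": "127.0.0.1", "port": "19530"},  # local Milvus server
)

metadata_field_info = [
    AttributeInfo(name="year", description="The year the movie was released", type="integer"),
    AttributeInfo(name="genre", description="The genre of the movie", type="string"),
]
# the Milvus translator is selected automatically for a Milvus vector store
retriever = SelfQueryRetriever.from_llm(
    OpenAI(temperature=0), vectorstore, "Brief summary of a movie", metadata_field_info
)
retriever.get_relevant_documents("What are some sci-fi movies from after 2010?")
```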
- Description: this PR adds the possibility to configure boto3 in the S3
loaders. Any named argument you add will be used to create the Boto3
session. This is useful when the AWS credentials can't be passed as env
variables or can't be read from the credentials file (see the sketch
after this list).
- Issue: N/A
- Dependencies: N/A
- Tag maintainer: ?
- Twitter handle: cbornet_
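A sketch under the assumption that the named arguments are forwarded to the boto3 session; all credential values are placeholders:
```python
from langchain.document_loaders import S3FileLoader

# Credentials passed explicitly instead of via env vars or the credentials file
loader = S3FileLoader(
    "my-bucket",
    "docs/report.pdf",
    aws_access_key_id="AKIA...",   # placeholder
    aws_secret_access_key="...",   # placeholder
    region_name="eu-west-1",
)
docs = loader.load()
```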
---------
Co-authored-by: Bagatur <baskaryan@gmail.com>
Various improvements to the Model I/O section of the documentation
- Changed "Chat Model" to "chat model" in a few spots for internal
consistency
- Minor spelling & grammar fixes to improve readability & comprehension
Hi,
I noticed a typo in the local_llms.ipynb file and fixed it: in the
original file, the word "challenge" was missing its article "a".
@baskaryan , @eyurtsev
Thanks.
Co-authored-by: Fliprise <fliprise@Fliprises-MacBook-Pro.local>
Various miscellaneous fixes to most pages in the 'Retrievers' section of
the documentation:
- "VectorStore" and "vectorstore" changed to "vector store" for
consistency
- Various spelling, grammar, and formatting improvements for readability
Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
Enhance the SerpApi response parsing, which has the potential to produce more relevant output. A usage sketch follows the examples below.
<img width="345" alt="Screenshot 2023-09-01 at 8 26 13 AM"
src="https://github.com/langchain-ai/langchain/assets/10222402/80ff684d-e02e-4143-b218-5c1b102cbf75">
Query: What is the weather in Pomfret?
**Before:**
> I should look up the current weather conditions.
...
Final Answer: The current weather in Pomfret is 73°F with 1% chance of
precipitation and winds at 10 mph.
**After:**
> I should look up the current weather conditions.
...
Final Answer: The current weather in Pomfret is 62°F, 1% precipitation,
61% humidity, and 4 mph wind.
---
Query: Top team in english premier league?
**Before:**
> I need to find out which team is currently at the top of the English
Premier League
...
Final Answer: Liverpool FC is currently at the top of the English
Premier League.
**After:**
> I need to find out which team is currently at the top of the English
Premier League
...
Final Answer: Man City is currently at the top of the English Premier
League.
---
Query: Any upcoming events in Paris?
**Before:**
> I should look for events in Paris
Action: Search
...
Final Answer: Upcoming events in Paris this month include Whit Sunday &
Whit Monday (French National Holiday), Makeup in Paris, Paris Jazz
Festival, Fete de la Musique, and Salon International de la Maison de.
**After:**
> I should look for events in Paris
Action: Search
...
Final Answer: Upcoming events in Paris include Elektric Park 2023, The
Aces, and BEING AS AN OCEAN.
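For reference, a minimal sketch of invoking the wrapper that produces these answers (requires a `SERPAPI_API_KEY` in the environment):
```python
from langchain.utilities import SerpAPIWrapper

search = SerpAPIWrapper()
print(search.run("What is the weather in Pomfret?"))
```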
### Description
There is a really nice class for saving chat messages into a database -
SQLChatMessageHistory.
It leverages SQLAlchemy to be compatible with any supported database (in
contrast with PostgresChatMessageHistory, which is essentially the same
but limited to Postgres).
However, the class is not really customizable in terms of what you can
store. I can imagine a lot of use cases where one will need to save a
message date along with some additional metadata.
To solve this, I propose extracting the conversion logic from
BaseMessage to the SQLAlchemy model (and vice versa) into a separate
class, a message converter. Instead of rewriting the whole
SQLChatMessageHistory class, a user will then only need to write a custom
model and a simple mapping class, and pass an instance of it as a parameter.
I also noticed that there is no documentation on this class, so I added
that too, with an example of a custom message converter (sketched below).
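A sketch of the proposed converter, assuming the interface described above (a custom SQLAlchemy model plus a mapping class passed in as `custom_message_converter`):
```python
from datetime import datetime
from typing import Any

from sqlalchemy import Column, DateTime, Integer, Text
from sqlalchemy.orm import declarative_base

from langchain.memory.chat_message_histories.sql import (
    BaseMessageConverter,
    SQLChatMessageHistory,
)
from langchain.schema import AIMessage, BaseMessage, HumanMessage, SystemMessage

Base = declarative_base()


class CustomMessage(Base):
    __tablename__ = "custom_message_store"

    id = Column(Integer, primary_key=True)
    session_id = Column(Text)
    type = Column(Text)
    content = Column(Text)
    created_at = Column(DateTime)  # the extra metadata this change makes possible


class CustomMessageConverter(BaseMessageConverter):
    def from_sql_model(self, sql_message: Any) -> BaseMessage:
        if sql_message.type == "human":
            return HumanMessage(content=sql_message.content)
        if sql_message.type == "ai":
            return AIMessage(content=sql_message.content)
        return SystemMessage(content=sql_message.content)

    def to_sql_model(self, message: BaseMessage, session_id: str) -> Any:
        return CustomMessage(
            session_id=session_id,
            type=message.type,
            content=message.content,
            created_at=datetime.now(),
        )

    def get_sql_model_class(self) -> Any:
        return CustomMessage


history = SQLChatMessageHistory(
    session_id="session-1",
    connection_string="sqlite:///chat_history.db",
    custom_message_converter=CustomMessageConverter(),
)
history.add_user_message("Hello")
```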
### Issue
N/A
### Dependencies
N/A
### Tag maintainer
Not yet
### Twitter handle
N/A
- Description: Adds two optional parameters to the
DynamoDBChatMessageHistory class, enabling users to pass in a name for
their primary key, or a Key object itself, to enable the use of composite
keys, a common DynamoDB paradigm (see the sketch after this list).
[AWS DynamoDB Key
docs](https://aws.amazon.com/blogs/database/choosing-the-right-dynamodb-partition-key/)
- Issue: N/A
- Dependencies: N/A
- Twitter handle: N/A
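A sketch assuming the two new optional parameters are `primary_key_name` and `key`; the table name and key values are placeholders:
```python
from langchain.memory.chat_message_histories import DynamoDBChatMessageHistory

history = DynamoDBChatMessageHistory(
    table_name="SessionTable",
    session_id="session-1",
    primary_key_name="SessionId",                         # custom partition key name
    key={"SessionId": "session-1", "UserId": "user-42"},  # composite key
)
history.add_user_message("Hello")
```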
---------
Co-authored-by: Josh White <josh@ctrlstack.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
## Description
This PR introduces a minor change to the TitanTakeoff integration.
Instead of specifying a port on localhost, users can now specify a
baseURL. This allows the integration to be used when Titan Takeoff is
deployed externally (not on localhost), and removes the hardcoded
reference to "http://localhost:{port}". A minimal sketch follows.
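A sketch of the new usage, assuming the parameter is exposed as `base_url` (the URL is a placeholder):
```python
from langchain.llms import TitanTakeoff

# Point at an externally deployed Takeoff server instead of localhost
llm = TitanTakeoff(base_url="http://my-takeoff-host:8000")
print(llm("What is the capital of France?"))
```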
### Info about Titan Takeoff
Titan Takeoff is an inference server created by
[TitanML](https://www.titanml.co/) that allows you to deploy large
language models locally on your hardware in a single command. Most
generative model architectures are included, such as Falcon, Llama 2,
GPT2, T5 and many more.
Read more about Titan Takeoff here:
-
[Blog](https://medium.com/@TitanML/introducing-titan-takeoff-6c30e55a8e1e)
- [Docs](https://docs.titanml.co/docs/titan-takeoff/getting-started)
### Dependencies
No new dependencies are introduced. However, users will need to install
the titan-iris package in their local environment and start the Titan
Takeoff inferencing server in order to use the Titan Takeoff
integration.
Thanks for your help and please let me know if you have any questions.
cc: @hwchase17 @baskaryan
---------
Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
The current document does not mention that splits larger than the chunk
size can happen. I updated the related document to explain why this
happens and how to solve it (a short illustration follows).
Related issues: #1349, #3838, #2140
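For illustration, `CharacterTextSplitter` only splits on the separator, so a single segment longer than `chunk_size` cannot be broken up further and is emitted as-is (with a warning):
```python
from langchain.text_splitter import CharacterTextSplitter

splitter = CharacterTextSplitter(separator="\n\n", chunk_size=20, chunk_overlap=0)
text = "This single paragraph contains no separator and is far longer than twenty characters."
chunks = splitter.split_text(text)
print([len(c) for c in chunks])  # one chunk, larger than chunk_size
```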
- Description: Added an example of running Q&A over structured data using
the `Airbyte` loaders and `pandas` (a hypothetical sketch follows after
this list)
- Dependencies: `pandas`
- Tag maintainer: @hwchase17
- Twitter handle: @pelaseyed
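A hypothetical sketch of this flow (the file path and question are placeholders; assumes records synced by Airbyte to a local JSON destination):
```python
import pandas as pd

from langchain.agents import create_pandas_dataframe_agent
from langchain.chat_models import ChatOpenAI
from langchain.document_loaders import AirbyteJSONLoader

# Load records synced by Airbyte, flatten them into a DataFrame,
# then ask questions with a pandas agent
loader = AirbyteJSONLoader("/tmp/airbyte_local/_airbyte_raw_users.jsonl")
docs = loader.load()
df = pd.DataFrame([{"text": d.page_content, **d.metadata} for d in docs])

agent = create_pandas_dataframe_agent(ChatOpenAI(temperature=0), df, verbose=True)
agent.run("How many rows are in the dataset?")
```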
Hi,
this PR contains a loader/parser for Azure Document Intelligence, an
ML-based service for ingesting arbitrary PDFs and images, even scanned
ones. The loader generates one Document per page of the original
document. This is my first contribution to LangChain. A usage sketch follows.
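A sketch of the intended usage, assuming the loader wraps an Azure `DocumentAnalysisClient` (endpoint and key are placeholders):
```python
from azure.ai.formrecognizer import DocumentAnalysisClient
from azure.core.credentials import AzureKeyCredential

from langchain.document_loaders.pdf import DocumentIntelligenceLoader

client = DocumentAnalysisClient(
    endpoint="https://<your-resource>.cognitiveservices.azure.com/",
    credential=AzureKeyCredential("<your-key>"),
)
loader = DocumentIntelligenceLoader("invoice.pdf", client=client, model="prebuilt-document")
docs = loader.load()  # one Document per page of the source file
```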
Unfortunately I could not find the correct place for test cases. Happy
to add one if you can point me to the location, but as this is a
cloud-based service, a test would require network access and credentials,
so it might be of limited help.
Dependencies: The needed dependency was already part of pyproject.toml,
no change.
Twitter: feel free to mention @LarsAC on the announcement
Fixed navbar:
- renamed several files so the ToC is sorted correctly
- made ToC items consistent: reformatted several titles
- added several links
- reformatted several docs to a consistent format
- renamed several files (removed `_example` suffix)
- added renamed files to the `docs/docs_skeleton/vercel.json`
This notebook was mistakenly placed in the `toolkits` folder and appeared
within the `Agents & Toolkits` menu, but it should be in `Tools`.
Moved the example into `tools/`; updated the title to the consistent format.