As of the recent PR at #9043, after some testing we've realised that the
default values were not being used for `api_key` and `api_url`. Besides
that, the default for `api_key` was set to `argilla.apikey`, but since
the default values are intended for people using the Argilla Quickstart
(easy to run and set up), the defaults should instead be `owner.apikey`
when using Argilla 1.11.0 or higher, or `admin.apikey` when using a
lower version of Argilla.
Additionally, we've removed the f-string replacements from the
docstrings.
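A minimal sketch of the version-dependent defaults described above (the env-var names and localhost URL are assumptions, not the PR's exact code):

```python
import os

import argilla as rg
from packaging.version import parse

# Quickstart-oriented defaults: Argilla >= 1.11.0 ships an `owner` user,
# while older versions ship `admin`.
DEFAULT_API_URL = "http://localhost:6900"
DEFAULT_API_KEY = (
    "owner.apikey" if parse(rg.__version__) >= parse("1.11.0") else "admin.apikey"
)

api_url = os.getenv("ARGILLA_API_URL") or DEFAULT_API_URL
api_key = os.getenv("ARGILLA_API_KEY") or DEFAULT_API_KEY
```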
---------
Co-authored-by: Gabriel Martin <gabriel@argilla.io>
This MR corrects the IndexError raised in the `prep_prompts` method when
no documents are returned from a similarity search.
Fixes #1733
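A hypothetical sketch of the guard (function and variable names illustrative only):

```python
def prep_prompts_safe(vectorstore, query: str) -> list:
    """Illustrative guard: avoid indexing into an empty search result."""
    docs = vectorstore.similarity_search(query)
    if not docs:
        return []  # previously, docs[0] here raised IndexError
    return docs
```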
Co-authored-by: Sam Groenjes <sam.groenjes@darkwolfsolutions.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
The second section looked like a copy/paste of the first and didn't
include the specific embedding model mentioned in the example, so I
added it for clarity.
---------
Co-authored-by: Bagatur <baskaryan@gmail.com>
### Description:
`ConversationBufferTokenMemory` should have a simple way of returning
the conversation messages as a string.
Previously, to accomplish this you only had two options: return the
memory as an array through the `buffer` method and call
`get_buffer_string` (imported from `langchain.schema`), or use the
`load_memory_variables` method and key into `self.memory_key`.
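For illustration, the old workaround sketched with `ConversationBufferMemory` (the `get_buffer_string` helper lives in `langchain.schema`):

```python
from langchain.memory import ConversationBufferMemory
from langchain.schema import get_buffer_string

memory = ConversationBufferMemory(return_messages=True)
memory.save_context({"input": "hi"}, {"output": "hello"})

# Workaround 1: take the message list and stringify it manually.
history = get_buffer_string(memory.buffer)

# Workaround 2: with return_messages=False, load_memory_variables
# already yields a string under memory_key.
string_memory = ConversationBufferMemory()
string_memory.save_context({"input": "hi"}, {"output": "hello"})
history = string_memory.load_memory_variables({})[string_memory.memory_key]
```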
### Maintainer
@hwchase17
---------
Co-authored-by: Bagatur <baskaryan@gmail.com>
Now that we accept any runnable or arbitrary function to evaluate, we
don't always look up the input keys. If an evaluator requires
references, we should try to infer whether there's exactly one key
present. We only have delayed validation here, but it's better than
nothing.
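A hypothetical sketch of that inference (names illustrative):

```python
def infer_reference_key(example_outputs: dict) -> str:
    """Illustrative delayed validation: accept only an unambiguous key."""
    keys = list(example_outputs)
    if len(keys) == 1:
        return keys[0]  # exactly one key present: treat it as the reference
    raise ValueError(
        f"Could not infer a reference key from {keys}; please specify one."
    )
```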
The table creation commands in these examples do not match what the
recently updated functions expect. This change updates the type in the
table creation command.
Issue number for my report of the doc problem: #7446
@rlancemartin and @eyurtsev I believe this is your area
Twitter: @j1philli
Co-authored-by: Bagatur <baskaryan@gmail.com>
- **Description**: [BagelDB](https://bageldb.ai) is a collaborative vector
database. Integrated the bageldb PyPI package with LangChain, along with
related tests and code.
- **Issue**: Not applicable.
- **Dependencies**: `betabageldb` PyPi package.
- **Tag maintainer**: @rlancemartin, @eyurtsev, @baskaryan
- **Twitter handle**: bageldb_ai (https://twitter.com/BagelDB_ai)
We ran `make format`, `make lint`, and `make test` locally, and followed
the contribution guideline thoroughly:
https://github.com/hwchase17/langchain/blob/master/.github/CONTRIBUTING.md
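A minimal usage sketch (class and parameter names assumed from the integration described above):

```python
from langchain.vectorstores import Bagel

# Assumed API: build a cluster from raw texts, then query it.
vectorstore = Bagel.from_texts(
    cluster_name="demo_cluster",
    texts=["hello bagel", "hello langchain"],
)
docs = vectorstore.similarity_search("bagel", k=1)
```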
---------
Co-authored-by: Towhid1 <nurulaktertowhid@gmail.com>
Description: updated BabyAGI examples and experimental to append the
iteration to the result id, fixing an error when storing data to the
vectorstore.
Issue: #7445
Dependencies: no
Tag maintainer: @eyurtsev
This fix worked for me locally. Happy to take feedback and iterate on a
better solution. I considered appending a UUID instead but didn't want
to overcomplicate the example.
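A hypothetical sketch of the change (names illustrative):

```python
def store_result(vectorstore, task: dict, result: str, num_iters: int) -> None:
    """Illustrative fix: append the iteration so result ids stay unique."""
    result_id = f"result_{task['task_id']}_{num_iters}"
    vectorstore.add_texts(
        texts=[result],
        metadatas=[{"task": task["task_name"]}],
        ids=[result_id],
    )
```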
---------
Co-authored-by: Bagatur <baskaryan@gmail.com>
Add convenience methods to `ConversationBufferMemory` and
`ConversationBufferWindowMemory` to get the buffer either as messages or
as a string.
This helps when `return_messages` is set to `True` but you want access
to the messages as a string, and vice versa.
@hwchase17
One use case: using a `MultiPromptRouter` where the `default_chain` is a
`ConversationChain` but the destination chains are `LLMChain`s. Injecting
chat memory into prompts for the destination chains prints a stringified
`List[Messages]` in the prompt, which creates a lot of noise. These
convenience methods let the caller choose either representation as
needed, as sketched below.
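A usage sketch, assuming the new accessors are exposed as `buffer_as_messages` and `buffer_as_str`:

```python
from langchain.memory import ConversationBufferMemory

memory = ConversationBufferMemory(return_messages=True)
memory.save_context({"input": "hi"}, {"output": "hello"})

# Pick the representation you need, regardless of return_messages.
messages = memory.buffer_as_messages  # list of Human/AI message objects
text = memory.buffer_as_str           # "Human: hi\nAI: hello"
```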
---------
Co-authored-by: Bagatur <baskaryan@gmail.com>
Description: Due to an issue with the test, this is a separate PR
containing the test for #8502.
Tag maintainer: @rlancemartin
---------
Co-authored-by: Lance Martin <lance@langchain.dev>
Co-authored-by: Bagatur <baskaryan@gmail.com>
The current regex only extracts the agent's action between plain ` ``` `
fences; this commit extracts the action between both ` ```json ` and
plain ` ``` ` fences.
This is very similar to #7511
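A sketch of the broadened pattern (the optional `json` language tag makes both fence styles match):

```python
import re

# Matches the agent action inside both ```json ... ``` and ``` ... ``` fences.
pattern = re.compile(r"```(?:json)?\n(.*?)```", re.DOTALL)

text = 'Action:\n```json\n{"action": "Final Answer"}\n```'
match = pattern.search(text)
if match:
    payload = match.group(1).strip()  # the JSON blob between the fences
```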
Co-authored-by: zjl <junlinzhou@yzbigdata.com>
## Description
This PR adds the `aembed_query` and `aembed_documents` async methods to
improve embeddings generation for large documents. The implementation
uses asyncio tasks and `gather` to achieve concurrency, since boto3
provides no async Bedrock API.
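One plausible shape for this approach, sketched (not the PR's exact code):

```python
import asyncio
from functools import partial
from typing import Callable, List

async def aembed_documents(
    embed_one: Callable[[str], List[float]], texts: List[str]
) -> List[List[float]]:
    """Run the synchronous per-text embed call concurrently via the executor."""
    loop = asyncio.get_running_loop()
    tasks = [loop.run_in_executor(None, partial(embed_one, text)) for text in texts]
    return await asyncio.gather(*tasks)
```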
### Maintainers
@agola11
@aarora79
### Open questions
To avoid throttling from the Bedrock API, should there be an option to
limit the concurrency of the calls?
I was initially confused whether to use `create_vectorstore_agent` or
`create_vectorstore_router_agent` due to the lack of documentation, so I
added simple documentation for each function about their different use
cases.
- Description: Added docstrings in `create_vectorstore_agent` and
`create_vectorstore_router_agent` to point out the difference in their
use cases
- Tag maintainer: @rlancemartin, @eyurtsev
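A brief sketch of the distinction (toolkit names as in `langchain.agents.agent_toolkits`; wiring shown in comments):

```python
from langchain.agents.agent_toolkits import (
    VectorStoreRouterToolkit,
    VectorStoreToolkit,
    create_vectorstore_agent,
    create_vectorstore_router_agent,
)

# Single vectorstore: use create_vectorstore_agent.
# agent = create_vectorstore_agent(
#     llm=llm, toolkit=VectorStoreToolkit(vectorstore_info=info)
# )

# Multiple vectorstores to route between: use create_vectorstore_router_agent.
# agent = create_vectorstore_router_agent(
#     llm=llm,
#     toolkit=VectorStoreRouterToolkit(vectorstores=[info_a, info_b], llm=llm),
# )
```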
---------
Co-authored-by: Bagatur <baskaryan@gmail.com>
Hi @agola11, or whoever is reviewing this PR 😄
## What's in this PR?
As of the latest Argilla release, we'll change and refactor some things
to make certain workflows easier. One of those is how everything is
pushed to Argilla: there's no longer a need to call `push_to_argilla` on
a `FeedbackDataset` once either `push_to_argilla` has been called for
the first time or `from_argilla` has been called; among other changes.
We also add some class variables to make sure they're easy to update if
we change them internally in the future, and to make the
`warnings.warn` message lighter in the code view.
P.S. Regarding the Twitter/X mention, feel free to tag either
https://twitter.com/argilla_io or https://twitter.com/alvarobartt, or
both if applicable; otherwise, just the first handle.
## Description:
This PR adds the Titan Takeoff Server to the available LLMs in
LangChain.
Titan Takeoff is an inference server created by
[TitanML](https://www.titanml.co/) that allows you to deploy large
language models locally on your hardware in a single command. Most
generative model architectures are included, such as Falcon, Llama 2,
GPT2, T5 and many more.
Read more about Titan Takeoff here:
-
[Blog](https://medium.com/@TitanML/introducing-titan-takeoff-6c30e55a8e1e)
- [Docs](https://docs.titanml.co/docs/titan-takeoff/getting-started)
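A minimal usage sketch, assuming the LLM is exposed as `TitanTakeoff` and the server is already running on the default port:

```python
from langchain.llms import TitanTakeoff

# Assumes the Takeoff server is already running locally on port 8000.
llm = TitanTakeoff(base_url="http://localhost:8000")
print(llm("What is the capital of France?"))
```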
#### Testing
As Titan Takeoff runs locally on port 8000 by default, no network access
is needed. Responses are mocked for testing.
- [x] Make Lint
- [x] Make Format
- [x] Make Test
#### Dependencies
No new dependencies are introduced. However, users will need to install
the titan-iris package in their local environment and start the Titan
Takeoff inferencing server in order to use the Titan Takeoff
integration.
Thanks for your help and please let me know if you have any questions.
cc: @hwchase17 @baskaryan
Expressing gratitude to the creator for crafting this remarkable
application 🙌. I'd like to enhance the grammar and spelling in the
documentation for a polished reader experience.
Your feedback is valuable, as always.
@baskaryan , @hwchase17 , @eyurtsev
- Description: Fixes an issue with the Metaphor Search Tool throwing
when keys are missing in the API response; see the sketch below.
- Issue: #9048
- Tag maintainer: @hinthornw @hwchase17
- Twitter handle: @pelaseyed
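A hypothetical sketch of the defensive parsing (field names illustrative):

```python
def parse_result(raw: dict) -> dict:
    """Illustrative fix: .get() tolerates keys missing from the API response."""
    return {
        "title": raw.get("title"),
        "url": raw.get("url"),
        "author": raw.get("author"),
        "published_date": raw.get("publishedDate"),
    }
```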
This PR adds the ability to temporarily cache or persistently store
embeddings.
A notebook has been included showing how to set up the cache and how to
use it with a vectorstore.
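A setup sketch, assuming the `CacheBackedEmbeddings` wrapper and a local file store:

```python
from langchain.embeddings import CacheBackedEmbeddings, OpenAIEmbeddings
from langchain.storage import LocalFileStore

# Wrap an embedder with a persistent byte store so repeated texts
# are served from the cache instead of being re-embedded.
store = LocalFileStore("./embedding_cache/")
underlying = OpenAIEmbeddings()
embedder = CacheBackedEmbeddings.from_bytes_store(
    underlying, store, namespace=underlying.model
)
```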