This allows adjusting the number of results to retrieve and filtering
documents based on metadata.
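For illustration, a runnable sketch of the two knobs using Chroma with fake embeddings (the texts and metadata are examples; the filter shape is store-specific):

```py
# Sketch: k controls how many results come back; filter keeps only docs
# whose metadata matches (Chroma treats this as an equality match).
from langchain.embeddings import FakeEmbeddings
from langchain.vectorstores import Chroma

store = Chroma.from_texts(
    ["reset your password in settings", "invoices are emailed monthly"],
    FakeEmbeddings(size=32),
    metadatas=[{"source": "faq"}, {"source": "billing"}],
)
docs = store.similarity_search("password reset", k=1, filter={"source": "faq"})
```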
---------
Co-authored-by: Altay Sansal <altay.sansal@tgs.com>
Add a method that exposes a similarity search with corresponding
normalized similarity scores. Implemented only for FAISS for now.
### Motivation:
Some memory definitions combine `relevance` with other scores, like
recency, importance, etc.
While many (but not all) of the `VectorStore`s expose a
`similarity_search_with_score` method, the units of that score are not
consistent across implementations (they depend on the distance metric
and on whether or not the embeddings are normalized).
This PR proposes a `similarity_search_with_normalized_similarities`
method so that consumers of the vector store don't have to worry about
the metric or the embedding scale.
*Most providers default to euclidean distance, with Pinecone being one
exception (defaults to cosine _similarity_).*
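For illustration, a minimal sketch of one way the normalization could work, assuming unit-norm embeddings and a (non-squared) L2 metric; the exact formula in the PR may differ:

```py
def l2_distance_to_normalized_similarity(distance: float) -> float:
    # For unit-norm vectors, cos(theta) = 1 - d**2 / 2, which lies in [-1, 1].
    cosine = 1.0 - distance**2 / 2.0
    # Rescale to [0, 1] so callers can mix it with recency/importance scores.
    return (cosine + 1.0) / 2.0

# identical vectors: d = 0 -> 1.0; opposite unit vectors: d = 2 -> 0.0
```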
---------
Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
The encoding fetch was out of date. Luckily, OpenAI has a nice
[`encoding_for_model`](46287bfa49/tiktoken/model.py)
function in `tiktoken` that we can use now.
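For reference, the lookup is a one-liner (the model name here is just an example):

```py
import tiktoken

# Resolve the correct encoding for a given model, then count tokens with it.
enc = tiktoken.encoding_for_model("gpt-3.5-turbo")
num_tokens = len(enc.encode("How many tokens is this sentence?"))
```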
Title, lang, and description are on almost every web page, and they are
incredibly useful pieces of information that aren't currently captured
by the web base loader.
I thought about adding the title and description to the content of the
document, as that content could be useful in search, but I left it out
for right now. If you think it'd be worth adding, happy to add it.
I've found it's nice to have the title/description in the metadata, to
have some structured data when retrieving rows from vectordbs for use
with summaries and source citation, so if we do want to add it to the
`page_content`, I'd advocate for it to also be included in the metadata.
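A rough sketch of how that metadata extraction might look with BeautifulSoup (the helper name and exact fields are illustrative, not the loader's actual code):

```py
from bs4 import BeautifulSoup

def build_metadata(html: str, url: str) -> dict:
    soup = BeautifulSoup(html, "html.parser")
    metadata = {"source": url}
    if title := soup.find("title"):
        metadata["title"] = title.get_text()
    if description := soup.find("meta", attrs={"name": "description"}):
        metadata["description"] = description.get("content", "")
    if html_tag := soup.find("html"):
        metadata["language"] = html_tag.get("lang", "")
    return metadata
```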
Add SVM retriever class, based on
https://github.com/karpathy/randomfun/blob/master/knn_vs_svm.ipynb.
Testing is still WIP, but the logic is correct (I have a local
implementation working outside of LangChain).
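The core idea from the notebook, sketched (the function name and defaults are mine, not the PR's): treat the query as the lone positive example, fit a linear SVM against all docs, and rank docs by their signed distance to the learned hyperplane.

```py
import numpy as np
from sklearn import svm

def svm_retrieve(query_emb: np.ndarray, doc_embs: np.ndarray, k: int = 4):
    x = np.concatenate([query_emb[None, :], doc_embs])  # query goes first
    y = np.zeros(len(x))
    y[0] = 1  # the query is the only positive example
    clf = svm.LinearSVC(class_weight="balanced", C=0.1, max_iter=10000, tol=1e-6)
    clf.fit(x, y)
    scores = clf.decision_function(x)[1:]  # drop the query's own score
    top = np.argsort(-scores)[:k]
    return top, scores[top]
```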
---------
Co-authored-by: Lance Martin <122662504+PineappleExpress808@users.noreply.github.com>
Co-authored-by: rlm <31treehaus@31s-MacBook-Pro.local>
Same as `similarity_search`, this allows child classes to add
vector-store-specific args (this was technically already happening in a
couple of places, but now the typing is correct).
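A toy sketch of the pattern (the class and method names are illustrative, not langchain's actual hierarchy):

```py
from typing import Any, List, Optional


class BaseStore:
    # The **kwargs catch-all is what lets subclasses accept
    # store-specific parameters with correct typing.
    def max_marginal_relevance_search(
        self, query: str, k: int = 4, **kwargs: Any
    ) -> List[str]:
        raise NotImplementedError


class PineconeLikeStore(BaseStore):
    def max_marginal_relevance_search(
        self, query: str, k: int = 4, namespace: Optional[str] = None, **kwargs: Any
    ) -> List[str]:
        # `namespace` is a store-specific arg the base interface doesn't know.
        return []
```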
Minor cosmetic changes
- Activeloop environment cred authentication in notebooks with
`getpass.getpass` (instead of the CLI, which doesn't always work); see
the sketch below
- much faster tests with Deep Lake pytest mode on
- Deep Lake kwargs pass-through
Notes
- I put the pytest environment creds inside `vectorstores/conftest.py`,
but feel free to suggest a better location. For context, if I put them
in `test_deeplake.py`, `ruff` doesn't let me set them before importing
deeplake
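The notebook pattern looks roughly like this (the env var name follows Activeloop's convention, as far as I know):

```py
import getpass
import os

# Prompt for the token interactively instead of shelling out to the CLI.
os.environ["ACTIVELOOP_TOKEN"] = getpass.getpass("Enter your Activeloop token: ")
```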
---------
Co-authored-by: Davit Buniatyan <d@activeloop.ai>
Note to self: Always run integration tests, even on "that last minute
change you thought would be safe" :)
---------
Co-authored-by: Mike Lambert <mike.lambert@anthropic.com>
**About**
Specify the encoding to avoid a UnicodeDecodeError when reading .txt
files, for users who are following the tutorial.
**Reference**
```
return codecs.charmap_decode(input,self.errors,decoding_table)[0]
UnicodeDecodeError: 'charmap' codec can't decode byte 0x9d in position 1205: character maps to <undefined>
```
**Environment**
OS: Win 11
Python: 3.8
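The fix, sketched (the file name is the tutorial's example; the key part is the explicit `encoding`):

```py
from langchain.document_loaders import TextLoader

# An explicit encoding avoids falling back to the platform default codec
# (cp1252 on Windows), which is what raised the error above.
loader = TextLoader("state_of_the_union.txt", encoding="utf-8")
documents = loader.load()
```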
* Adds an Anthropic ChatModel (usage sketch below)
* Factors out common code in our LLMModel and ChatModel
* Supports streaming LLM tokens to the callbacks on a delta basis (until
a future V2 API does that for us)
* Some fixes
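A minimal usage sketch, assuming `ANTHROPIC_API_KEY` is set in the environment:

```py
from langchain.chat_models import ChatAnthropic
from langchain.schema import HumanMessage

chat = ChatAnthropic()
response = chat([HumanMessage(content="Say hello in one short sentence.")])
print(response.content)
```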
Allows users to specify which files should be loaded, instead of
indiscriminately loading the entire repo.
Extends #2851.
NOTE for reviewers: the `hide whitespace` option is recommended, since I
changed the indentation of an if-block to use `continue` instead, so it
looks less like a Christmas tree :)
Mentioned the idea here initially:
https://github.com/hwchase17/langchain/pull/2106#issuecomment-1487509106
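Usage looks roughly like this (the paths here are examples):

```py
from langchain.document_loaders import GitLoader

loader = GitLoader(
    repo_path="./example_repo",
    branch="main",
    file_filter=lambda path: path.endswith(".md"),  # load only markdown files
)
docs = loader.load()
```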
Since there have been dialect-specific issues, we should use
dialect-specific prompts. This way, each prompt can be separately
modified to best suit each dialect as needed. This adds a prompt for
each dialect supported in sqlalchemy (mssql, mysql, mariadb, postgres,
oracle, sqlite). For this initial implementation, the only differences
between the prompts are the instruction for the clause to use to limit
the number of rows queried for, and the instruction for wrapping column
names using each dialect's identifier quote character.
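To make the idea concrete, a hypothetical sketch of the dispatch (the real prompts in the PR are full templates; the instruction strings here are invented):

```py
DEFAULT_INSTRUCTIONS = 'Use LIMIT to cap results; quote identifiers with "double quotes".'
DIALECT_INSTRUCTIONS = {
    "mssql": "Use SELECT TOP to cap results; quote identifiers with [brackets].",
    "mysql": "Use LIMIT to cap results; quote identifiers with `backticks`.",
    "oracle": 'Use FETCH FIRST n ROWS ONLY; quote identifiers with "double quotes".',
}

def instructions_for(dialect: str) -> str:
    # Fall back to a generic prompt for dialects without a custom one.
    return DIALECT_INSTRUCTIONS.get(dialect, DEFAULT_INSTRUCTIONS)
```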
Optimization: limit search results when k < 10.
Fixes an issue when k > 10: by default, Elasticsearch returns only the
top 10 matching hits
([default-search-result](https://www.elastic.co/guide/en/elasticsearch/reference/current/paginate-search-results.html)).
Adds a `size` parameter to the search request to limit the number of
results returned from Elasticsearch, and removes the slicing of the hits
list, since the response will already contain the desired number of
results.
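With the elasticsearch-py client, the change is essentially this (the index name and query are placeholders):

```py
from elasticsearch import Elasticsearch

client = Elasticsearch("http://localhost:9200")
k = 25  # desired number of hits; the server-side default is 10

response = client.search(
    index="my-index",
    body={"query": {"match": {"text": "hello"}}},
    size=k,  # server returns at most k hits, so no client-side slicing needed
)
hits = response["hits"]["hits"]
```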
Mendable Search Integration is finally here!
Hey yall,
After various requests for Mendable in Python docs, we decided to get
our hands dirty and try to implement it.
Here is a version where we implement our **floating button**, which sits
in the bottom right of the screen and which, once triggered (via press
or CMD+K), works the same as the js langchain docs.
Super excited about this and hopefully the community will be too.
@hwchase17, we'll send you the admin details via DM, etc. The anon_key is
fine to be public.
Let me know if you need any further customization. I added the langchain
logo to it.
Fixes linting issue from #2835
Adds a loader for Slack exports, which can be a very valuable source of
knowledge to use for internal QA bots and other use cases.
```py
# Export data from your Slack Workspace first.
from langchain.document_loaders import SlackDirectoryLoader

SLACK_WORKSPACE_URL = "https://awesome.slack.com"
loader = SlackDirectoryLoader("Slack_Exports", SLACK_WORKSPACE_URL)
docs = loader.load()
```
My recent pull request (#2729) neglected to update `reduce_openapi_spec`
in spec.py to also accommodate the PATCH and DELETE support added to
planner.py and prompt_planner.py.
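A hypothetical sketch of the idea (not the actual `reduce_openapi_spec` code):

```py
SUPPORTED_METHODS = {"get", "post", "patch", "delete"}

def reduce_endpoints(paths: dict) -> list:
    # Keep one (METHOD route, description) entry per supported operation.
    return [
        (f"{method.upper()} {route}", op.get("description", ""))
        for route, ops in paths.items()
        for method, op in ops.items()
        if method in SUPPORTED_METHODS
    ]
```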
I have seen questions about whether `SQLDatabaseChain` supports more
than just sqlite, which was unclear in the docs, so I tried to clarify
that, and how to connect to other dialects.
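For example, connecting to Postgres instead of sqlite (the URI is a placeholder; the matching driver, e.g. psycopg2, must be installed):

```py
from langchain import SQLDatabase

# Any SQLAlchemy-supported connection string works here.
db = SQLDatabase.from_uri("postgresql+psycopg2://user:pass@localhost:5432/mydb")
```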