Azure Cognitive Search: Custom index and scoring profile support (#6843)

Description: Add support for custom indexes and scoring profiles in Azure Cognitive Search
@hwchase17

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
Fabrizio Ruocco 1 year ago committed by GitHub
parent ed24de8467
commit ddc353a768

@@ -6,14 +6,14 @@
"source": [
"# Azure Cognitive Search\n",
"\n",
">[Azure Cognitive Search](https://learn.microsoft.com/en-us/azure/search/search-what-is-azure-search) (formerly known as `Azure Search`) is a cloud search service that gives developers infrastructure, APIs, and tools for building a rich search experience over private, heterogeneous content in web, mobile, and enterprise applications.\n"
"[Azure Cognitive Search](https://learn.microsoft.com/azure/search/search-what-is-azure-search) (formerly known as `Azure Search`) is a cloud search service that gives developers infrastructure, APIs, and tools for building a rich search experience over private, heterogeneous content in web, mobile, and enterprise applications.\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Install Azure Cognitive Search SDK"
"# Install Azure Cognitive Search SDK"
]
},
{
@@ -22,11 +22,12 @@
"metadata": {},
"outputs": [],
"source": [
"!pip install --index-url=https://pkgs.dev.azure.com/azure-sdk/public/_packaging/azure-sdk-for-python/pypi/simple/ azure-search-documents==11.4.0a20230509004\n",
"!pip install azure-search-documents==11.4.0b6\n",
"!pip install azure-identity"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
@@ -39,14 +40,14 @@
"metadata": {},
"outputs": [],
"source": [
"import os, json\n",
"import openai\n",
"from dotenv import load_dotenv\n",
"import os\n",
"from langchain.embeddings.openai import OpenAIEmbeddings\n",
"from langchain.vectorstores.azuresearch import AzureSearch"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
@@ -60,13 +61,10 @@
"metadata": {},
"outputs": [],
"source": [
"# Load environment variables from a .env file using load_dotenv():\n",
"load_dotenv()\n",
"\n",
"openai.api_type = \"azure\"\n",
"openai.api_base = \"YOUR_OPENAI_ENDPOINT\"\n",
"openai.api_version = \"2023-05-15\"\n",
"openai.api_key = \"YOUR_OPENAI_API_KEY\"\n",
"os.environ[\"OPENAI_API_TYPE\"] = \"azure\"\n",
"os.environ[\"OPENAI_API_BASE\"] = \"YOUR_OPENAI_ENDPOINT\"\n",
"os.environ[\"OPENAI_API_KEY\"] = \"YOUR_OPENAI_API_KEY\"\n",
"os.environ[\"OPENAI_API_VERSION\"] = \"2023-05-15\"\n",
"model: str = \"text-embedding-ada-002\""
]
},
@@ -81,13 +79,12 @@
},
{
"cell_type": "code",
"execution_count": 10,
"execution_count": 9,
"metadata": {},
"outputs": [],
"source": [
"vector_store_address: str = \"YOUR_AZURE_SEARCH_ENDPOINT\"\n",
"vector_store_password: str = \"YOUR_AZURE_SEARCH_ADMIN_KEY\"\n",
"index_name: str = \"langchain-vector-demo\""
"vector_store_password: str = \"YOUR_AZURE_SEARCH_ADMIN_KEY\""
]
},
{
@@ -101,11 +98,12 @@
},
{
"cell_type": "code",
"execution_count": 11,
"execution_count": 10,
"metadata": {},
"outputs": [],
"source": [
"embeddings: OpenAIEmbeddings = OpenAIEmbeddings(model=model, chunk_size=1)\n",
"embeddings: OpenAIEmbeddings = OpenAIEmbeddings(deployment=model, chunk_size=1)\n",
"index_name: str = \"langchain-vector-demo\"\n",
"vector_store: AzureSearch = AzureSearch(\n",
" azure_search_endpoint=vector_store_address,\n",
" azure_search_key=vector_store_password,\n",
@@ -125,7 +123,7 @@
},
{
"cell_type": "code",
"execution_count": 12,
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
@@ -142,6 +140,7 @@
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
@@ -152,7 +151,7 @@
},
{
"cell_type": "code",
"execution_count": 13,
"execution_count": 15,
"metadata": {},
"outputs": [
{
@@ -180,17 +179,18 @@
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"## Perform a Hybrid Search\n",
"\n",
"Execute hybrid search using the hybrid_search() method:"
"Execute a hybrid search using the search_type parameter or the hybrid_search() method:"
]
},
{
"cell_type": "code",
"execution_count": 14,
"execution_count": null,
"metadata": {},
"outputs": [
{
@@ -210,15 +210,358 @@
"source": [
"# Perform a hybrid search\n",
"docs = vector_store.similarity_search(\n",
" query=\"What did the president say about Ketanji Brown Jackson\", k=3\n",
" query=\"What did the president say about Ketanji Brown Jackson\",\n",
" k=3, \n",
" search_type=\"hybrid\"\n",
")\n",
"print(docs[0].page_content)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you're at it, pass the Disclose Act so Americans can know who is funding our elections. \n",
"\n",
"Tonight, I'd like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. \n",
"\n",
"One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. \n",
"\n",
"And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation's top legal minds, who will continue Justice Breyer's legacy of excellence.\n"
]
}
],
"source": [
"# Perform a hybrid search\n",
"docs = vector_store.hybrid_search(\n",
" query=\"What did the president say about Ketanji Brown Jackson\", \n",
" k=3\n",
")\n",
"print(docs[0].page_content)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Create a new index with custom filterable fields "
]
},
{
"cell_type": "code",
"execution_count": 16,
"metadata": {},
"outputs": [],
"source": [
"from azure.search.documents.indexes.models import (\n",
" SearchableField,\n",
" SearchField,\n",
" SearchFieldDataType,\n",
" SimpleField,\n",
" ScoringProfile,\n",
" TextWeights,\n",
")\n",
"\n",
"embeddings: OpenAIEmbeddings = OpenAIEmbeddings(deployment=model, chunk_size=1)\n",
"embedding_function = embeddings.embed_query\n",
"\n",
"fields = [\n",
" SimpleField(\n",
" name=\"id\",\n",
" type=SearchFieldDataType.String,\n",
" key=True,\n",
" filterable=True,\n",
" ),\n",
" SearchableField(\n",
" name=\"content\",\n",
" type=SearchFieldDataType.String,\n",
" searchable=True,\n",
" ),\n",
" SearchField(\n",
" name=\"content_vector\",\n",
" type=SearchFieldDataType.Collection(SearchFieldDataType.Single),\n",
" searchable=True,\n",
" vector_search_dimensions=len(embedding_function(\"Text\")),\n",
" vector_search_configuration=\"default\",\n",
" ),\n",
" SearchableField(\n",
" name=\"metadata\",\n",
" type=SearchFieldDataType.String,\n",
" searchable=True,\n",
" ),\n",
" # Additional field to store the title\n",
" SearchableField(\n",
" name=\"title\",\n",
" type=SearchFieldDataType.String,\n",
" searchable=True,\n",
" ),\n",
" # Additional field for filtering on document source\n",
" SimpleField(\n",
" name=\"source\",\n",
" type=SearchFieldDataType.String,\n",
" filterable=True,\n",
" ),\n",
"]\n",
"\n",
"index_name: str = \"langchain-vector-demo-custom\"\n",
"\n",
"vector_store: AzureSearch = AzureSearch(\n",
" azure_search_endpoint=vector_store_address,\n",
" azure_search_key=vector_store_password,\n",
" index_name=index_name,\n",
" embedding_function=embedding_function,\n",
" fields=fields,\n",
")\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Perform a query with a custom filter"
]
},
{
"cell_type": "code",
"execution_count": 17,
"metadata": {},
"outputs": [],
"source": [
"# Metadata keys that have a corresponding field in the index will be added to the index\n",
"# In this example, the metadata dictionary contains a title, a source, and a random field\n",
"# The title and source will be added to the index as separate fields, but random won't (it is not defined in the fields list)\n",
"# The random value will only be stored in the metadata field\n",
"vector_store.add_texts(\n",
" [\"Test 1\", \"Test 2\", \"Test 3\"],\n",
" [\n",
" {\"title\": \"Title 1\", \"source\": \"A\", \"random\": \"10290\"},\n",
" {\"title\": \"Title 2\", \"source\": \"A\", \"random\": \"48392\"},\n",
" {\"title\": \"Title 3\", \"source\": \"B\", \"random\": \"32893\"},\n",
" ],\n",
")\n"
]
},
{
"cell_type": "code",
"execution_count": 18,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[Document(page_content='Test 3', metadata={'title': 'Title 3', 'source': 'B', 'random': '32893'}),\n",
" Document(page_content='Test 1', metadata={'title': 'Title 1', 'source': 'A', 'random': '10290'}),\n",
" Document(page_content='Test 2', metadata={'title': 'Title 2', 'source': 'A', 'random': '48392'})]"
]
},
"execution_count": 12,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"res = vector_store.similarity_search(query=\"Test 3 source1\", k=3, search_type=\"hybrid\")\n",
"res"
]
},
{
"cell_type": "code",
"execution_count": 19,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[Document(page_content='Test 1', metadata={'title': 'Title 1', 'source': 'A', 'random': '10290'}),\n",
" Document(page_content='Test 2', metadata={'title': 'Title 2', 'source': 'A', 'random': '48392'})]"
]
},
"execution_count": 13,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"res = vector_store.similarity_search(query=\"Test 3 source1\", k=3, search_type=\"hybrid\", filters=\"source eq 'A'\")\n",
"res"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Create a new index with a Scoring Profile"
]
},
{
"cell_type": "code",
"execution_count": 20,
"metadata": {},
"outputs": [],
"source": [
"from azure.search.documents.indexes.models import (\n",
" SearchableField,\n",
" SearchField,\n",
" SearchFieldDataType,\n",
" SimpleField,\n",
" ScoringProfile,\n",
" TextWeights,\n",
" ScoringFunction,\n",
" FreshnessScoringFunction,\n",
" FreshnessScoringParameters\n",
")\n",
"\n",
"embeddings: OpenAIEmbeddings = OpenAIEmbeddings(deployment=model, chunk_size=1)\n",
"embedding_function = embeddings.embed_query\n",
"\n",
"fields = [\n",
" SimpleField(\n",
" name=\"id\",\n",
" type=SearchFieldDataType.String,\n",
" key=True,\n",
" filterable=True,\n",
" ),\n",
" SearchableField(\n",
" name=\"content\",\n",
" type=SearchFieldDataType.String,\n",
" searchable=True,\n",
" ),\n",
" SearchField(\n",
" name=\"content_vector\",\n",
" type=SearchFieldDataType.Collection(SearchFieldDataType.Single),\n",
" searchable=True,\n",
" vector_search_dimensions=len(embedding_function(\"Text\")),\n",
" vector_search_configuration=\"default\",\n",
" ),\n",
" SearchableField(\n",
" name=\"metadata\",\n",
" type=SearchFieldDataType.String,\n",
" searchable=True,\n",
" ),\n",
" # Additional field to store the title\n",
" SearchableField(\n",
" name=\"title\",\n",
" type=SearchFieldDataType.String,\n",
" searchable=True,\n",
" ),\n",
" # Additional field for filtering on document source\n",
" SimpleField(\n",
" name=\"source\",\n",
" type=SearchFieldDataType.String,\n",
" filterable=True,\n",
" ),\n",
" # Additional data field for last doc update\n",
" SimpleField(\n",
" name=\"last_update\",\n",
" type=SearchFieldDataType.DateTimeOffset,\n",
" searchable=True,\n",
" filterable=True\n",
" )\n",
"]\n",
"# Adding a custom scoring profile with a freshness function\n",
"sc_name = \"scoring_profile\"\n",
"sc = ScoringProfile(\n",
" name=sc_name,\n",
" text_weights=TextWeights(weights={\"title\": 5}),\n",
" function_aggregation=\"sum\",\n",
" functions=[\n",
" FreshnessScoringFunction(\n",
" field_name=\"last_update\",\n",
" boost=100,\n",
" parameters=FreshnessScoringParameters(boosting_duration=\"P2D\"),\n",
" interpolation=\"linear\"\n",
" )\n",
" ]\n",
")\n",
"\n",
"index_name = \"langchain-vector-demo-custom-scoring-profile\"\n",
"\n",
"vector_store: AzureSearch = AzureSearch(\n",
" azure_search_endpoint=vector_store_address,\n",
" azure_search_key=vector_store_password,\n",
" index_name=index_name,\n",
" embedding_function=embeddings.embed_query,\n",
" fields=fields,\n",
" scoring_profiles = [sc],\n",
" default_scoring_profile = sc_name\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 21,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"['NjQyNTI5ZmMtNmVkYS00Njg5LTk2ZDgtMjM3OTY4NTJkYzFj',\n",
" 'M2M0MGExZjAtMjhiZC00ZDkwLThmMTgtODNlN2Y2ZDVkMTMw',\n",
" 'ZmFhMDE1NzMtMjZjNS00MTFiLTk0MTEtNGRkYjgwYWQwOTI0']"
]
},
"execution_count": 16,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# Add the same data with different last_update values to show the scoring profile's effect\n",
"from datetime import datetime, timedelta\n",
"\n",
"today = datetime.utcnow().strftime('%Y-%m-%dT%H:%M:%S-00:00')\n",
"yesterday = (datetime.utcnow() - timedelta(days=1)).strftime('%Y-%m-%dT%H:%M:%S-00:00')\n",
"one_month_ago = (datetime.utcnow() - timedelta(days=30)).strftime('%Y-%m-%dT%H:%M:%S-00:00')\n",
"\n",
"vector_store.add_texts(\n",
" [\"Test 1\", \"Test 1\", \"Test 1\"],\n",
" [\n",
" {\"title\": \"Title 1\", \"source\": \"source1\", \"random\": \"10290\", \"last_update\": today},\n",
" {\"title\": \"Title 1\", \"source\": \"source1\", \"random\": \"48392\", \"last_update\": yesterday},\n",
" {\"title\": \"Title 1\", \"source\": \"source1\", \"random\": \"32893\", \"last_update\": one_month_ago},\n",
" ],\n",
")\n"
]
},
{
"cell_type": "code",
"execution_count": 23,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[Document(page_content='Test 1', metadata={'title': 'Title 1', 'source': 'source1', 'random': '10290', 'last_update': '2023-07-13T10:47:39-00:00'}),\n",
" Document(page_content='Test 1', metadata={'title': 'Title 1', 'source': 'source1', 'random': '48392', 'last_update': '2023-07-12T10:47:39-00:00'}),\n",
" Document(page_content='Test 1', metadata={'title': 'Title 1', 'source': 'source1', 'random': '32893', 'last_update': '2023-06-13T10:47:39-00:00'})]"
]
},
"execution_count": 17,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"res = vector_store.similarity_search(query=\"Test 1\", k=3, search_type=\"hybrid\")\n",
"res"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"display_name": "Python 3.9.13 ('.venv': venv)",
"language": "python",
"name": "python3"
},
@@ -232,8 +575,9 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.6"
"version": "3.9.13"
},
"orig_nbformat": 4,
"vscode": {
"interpreter": {
"hash": "645053d6307d413a1a75681b5ebb6449bb2babba4bcb0bf65a1ddc3dbefb108a"
@@ -241,5 +585,5 @@
}
},
"nbformat": 4,
"nbformat_minor": 4
"nbformat_minor": 2
}
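The scoring profile configured in the notebook above pairs title text weights with a freshness function (boost=100, boosting_duration="P2D", linear interpolation). As a rough mental model, linear freshness interpolation scales the boost by how far last_update sits inside the boosting window; the multiplier below is a simplified sketch of that behaviour, not the Azure service's actual server-side formula:

```python
from datetime import timedelta

def freshness_multiplier(age: timedelta, boost: float = 100.0,
                         duration: timedelta = timedelta(days=2)) -> float:
    """Simplified linear freshness boost: full boost at age 0, decaying to
    no boost once the document is older than the boosting duration (P2D)."""
    fraction = max(0.0, 1.0 - age / duration)
    return 1.0 + (boost - 1.0) * fraction

print(freshness_multiplier(timedelta(hours=0)))  # 100.0 -> full boost today
print(freshness_multiplier(timedelta(days=1)))   # 50.5  -> halfway through P2D
print(freshness_multiplier(timedelta(days=30)))  # 1.0   -> outside the window
```

Under this model a document updated today gets the full boost, yesterday's gets roughly half, and anything older than two days gets none, which matches the ordering of the three "Test 1" documents in the notebook's search results.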

@@ -34,6 +34,12 @@ logger = logging.getLogger()
if TYPE_CHECKING:
from azure.search.documents import SearchClient
from azure.search.documents.indexes.models import (
ScoringProfile,
SearchField,
SemanticSettings,
VectorSearch,
)
# Allow overriding field names for Azure Search
@@ -61,8 +67,13 @@ def _get_search_client(
endpoint: str,
key: str,
index_name: str,
embedding_function: Callable,
semantic_configuration_name: Optional[str] = None,
fields: Optional[List[SearchField]] = None,
vector_search: Optional[VectorSearch] = None,
semantic_settings: Optional[SemanticSettings] = None,
scoring_profiles: Optional[List[ScoringProfile]] = None,
default_scoring_profile: Optional[str] = None,
default_fields: Optional[List[SearchField]] = None,
) -> SearchClient:
from azure.core.credentials import AzureKeyCredential
from azure.core.exceptions import ResourceNotFoundError
@@ -71,76 +82,70 @@ def _get_search_client(
from azure.search.documents.indexes import SearchIndexClient
from azure.search.documents.indexes.models import (
PrioritizedFields,
SearchableField,
SearchField,
SearchFieldDataType,
SearchIndex,
SemanticConfiguration,
SemanticField,
SemanticSettings,
SimpleField,
VectorSearch,
VectorSearchAlgorithmConfiguration,
)
default_fields = default_fields or []
if key is None:
credential = DefaultAzureCredential()
else:
credential = AzureKeyCredential(key)
index_client: SearchIndexClient = SearchIndexClient(
endpoint=endpoint, credential=credential
endpoint=endpoint, credential=credential, user_agent="langchain"
)
try:
index_client.get_index(name=index_name)
except ResourceNotFoundError:
# Fields configuration
fields = [
SimpleField(
name=FIELDS_ID,
type=SearchFieldDataType.String,
key=True,
filterable=True,
),
SearchableField(
name=FIELDS_CONTENT,
type=SearchFieldDataType.String,
searchable=True,
retrievable=True,
),
SearchField(
name=FIELDS_CONTENT_VECTOR,
type=SearchFieldDataType.Collection(SearchFieldDataType.Single),
searchable=True,
dimensions=len(embedding_function("Text")),
vector_search_configuration="default",
),
SearchableField(
name=FIELDS_METADATA,
type=SearchFieldDataType.String,
searchable=True,
retrievable=True,
),
]
# Vector search configuration
vector_search = VectorSearch(
algorithm_configurations=[
VectorSearchAlgorithmConfiguration(
name="default",
kind="hnsw",
hnsw_parameters={
"m": 4,
"efConstruction": 400,
"efSearch": 500,
"metric": "cosine",
},
if fields is not None:
# Check mandatory fields
fields_types = {f.name: f.type for f in fields}
mandatory_fields = {df.name: df.type for df in default_fields}
# Check for missing keys
missing_fields = {
key: mandatory_fields[key]
for key, value in set(mandatory_fields.items())
- set(fields_types.items())
}
if len(missing_fields) > 0:
fmt_err = lambda x: ( # noqa: E731
f"{x} current type: '{fields_types.get(x, 'MISSING')}'. It has to "
f"be '{mandatory_fields.get(x)}' or you can point to a different "
f"'{mandatory_fields.get(x)}' field name by using the env variable "
f"'AZURESEARCH_FIELDS_{x.upper()}'"
)
]
)
error = "\n".join([fmt_err(x) for x in missing_fields])
raise ValueError(
f"You need to specify at least the following fields "
f"{missing_fields} or provide alternative field names in the env "
f"variables.\n\n{error}"
)
else:
fields = default_fields
# Vector search configuration
if vector_search is None:
vector_search = VectorSearch(
algorithm_configurations=[
VectorSearchAlgorithmConfiguration(
name="default",
kind="hnsw",
hnsw_parameters={ # type: ignore
"m": 4,
"efConstruction": 400,
"efSearch": 500,
"metric": "cosine",
},
)
]
)
# Create the semantic settings with the configuration
semantic_settings = (
None
if semantic_configuration_name is None
else SemanticSettings(
if semantic_settings is None and semantic_configuration_name is not None:
semantic_settings = SemanticSettings(
configurations=[
SemanticConfiguration(
name=semantic_configuration_name,
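The mandatory-field check added to `_get_search_client` reduces to a set difference over (name, type) pairs, so a field that is renamed or given a different type is also reported as missing. A standalone sketch of that check, with hypothetical schemas and no Azure SDK dependency:

```python
# Hypothetical schemas: field name -> type, mirroring fields / default_fields.
fields_types = {"id": "Edm.String", "content": "Edm.String"}
mandatory_fields = {
    "id": "Edm.String",
    "content": "Edm.String",
    "content_vector": "Collection(Edm.Single)",
    "metadata": "Edm.String",
}

# A field counts as missing when its (name, type) pair is absent from the
# user-supplied schema, so a retyped field is flagged as well.
missing_fields = {
    name: mandatory_fields[name]
    for name, _ in set(mandatory_fields.items()) - set(fields_types.items())
}

print(sorted(missing_fields))  # ['content_vector', 'metadata']
```

In the real code this dictionary feeds the ValueError message, which also points users at the `AZURESEARCH_FIELDS_*` environment variables for renaming.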
@@ -152,17 +157,23 @@ def _get_search_client(
)
]
)
)
# Create the search index with the semantic settings and vector search
index = SearchIndex(
name=index_name,
fields=fields,
vector_search=vector_search,
semantic_settings=semantic_settings,
scoring_profiles=scoring_profiles,
default_scoring_profile=default_scoring_profile,
)
index_client.create_index(index)
# Create the search client
return SearchClient(endpoint=endpoint, index_name=index_name, credential=credential)
return SearchClient(
endpoint=endpoint,
index_name=index_name,
credential=credential,
user_agent="langchain",
)
class AzureSearch(VectorStore):
@@ -177,21 +188,62 @@ class AzureSearch(VectorStore):
search_type: str = "hybrid",
semantic_configuration_name: Optional[str] = None,
semantic_query_language: str = "en-us",
fields: Optional[List[SearchField]] = None,
vector_search: Optional[VectorSearch] = None,
semantic_settings: Optional[SemanticSettings] = None,
scoring_profiles: Optional[List[ScoringProfile]] = None,
default_scoring_profile: Optional[str] = None,
**kwargs: Any,
):
from azure.search.documents.indexes.models import (
SearchableField,
SearchField,
SearchFieldDataType,
SimpleField,
)
"""Initialize with necessary components."""
# Initialize base class
self.embedding_function = embedding_function
default_fields = [
SimpleField(
name=FIELDS_ID,
type=SearchFieldDataType.String,
key=True,
filterable=True,
),
SearchableField(
name=FIELDS_CONTENT,
type=SearchFieldDataType.String,
),
SearchField(
name=FIELDS_CONTENT_VECTOR,
type=SearchFieldDataType.Collection(SearchFieldDataType.Single),
searchable=True,
vector_search_dimensions=len(embedding_function("Text")),
vector_search_configuration="default",
),
SearchableField(
name=FIELDS_METADATA,
type=SearchFieldDataType.String,
),
]
self.client = _get_search_client(
azure_search_endpoint,
azure_search_key,
index_name,
embedding_function,
semantic_configuration_name,
semantic_configuration_name=semantic_configuration_name,
fields=fields,
vector_search=vector_search,
semantic_settings=semantic_settings,
scoring_profiles=scoring_profiles,
default_scoring_profile=default_scoring_profile,
default_fields=default_fields,
)
self.search_type = search_type
self.semantic_configuration_name = semantic_configuration_name
self.semantic_query_language = semantic_query_language
self.fields = fields if fields else default_fields
@property
def embeddings(self) -> Optional[Embeddings]:
@@ -216,17 +268,24 @@ class AzureSearch(VectorStore):
key = base64.urlsafe_b64encode(bytes(key, "utf-8")).decode("ascii")
metadata = metadatas[i] if metadatas else {}
# Add data to index
data.append(
{
"@search.action": "upload",
FIELDS_ID: key,
FIELDS_CONTENT: text,
FIELDS_CONTENT_VECTOR: np.array(
self.embedding_function(text), dtype=np.float32
).tolist(),
FIELDS_METADATA: json.dumps(metadata),
# Additional metadata to fields mapping
if metadata:
additional_fields = {
k: v
for k, v in metadata.items()
if k in [x.name for x in self.fields]
}
)
doc = {
"@search.action": "upload",
FIELDS_ID: key,
FIELDS_CONTENT: text,
FIELDS_CONTENT_VECTOR: np.array(
self.embedding_function(text), dtype=np.float32
).tolist(),
FIELDS_METADATA: json.dumps(metadata),
}
doc.update(additional_fields)
data.append(doc)
ids.append(key)
# Upload data in batches
if len(data) == MAX_UPLOAD_BATCH_SIZE:
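The new metadata handling in `add_texts` promotes only metadata keys that match a declared index field to top-level document fields, while the full dictionary is still serialized into the metadata blob; document keys remain URL-safe base64. A minimal sketch with a hypothetical field list:

```python
import base64
import json

# Hypothetical index schema: field names declared when the index was created.
index_field_names = ["id", "content", "content_vector", "metadata", "title", "source"]
metadata = {"title": "Title 1", "source": "A", "random": "10290"}

# Document keys are URL-safe base64, as in add_texts.
key = base64.urlsafe_b64encode(b"doc-1").decode("ascii")

# Only metadata keys with a matching index field become top-level fields;
# the whole dictionary still lands in the serialized metadata blob.
additional_fields = {k: v for k, v in metadata.items() if k in index_field_names}

doc = {"id": key, "metadata": json.dumps(metadata), **additional_fields}
print(doc["title"], "random" in doc)  # Title 1 False
```

This is why, in the notebook above, the "random" value survives round-trips only inside the metadata field while "title" and "source" are filterable on their own.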
@@ -291,18 +350,13 @@ class AzureSearch(VectorStore):
Returns:
List of Documents most similar to the query and score for each
"""
from azure.search.documents.models import Vector
results = self.client.search(
search_text="",
vector=Vector(
value=np.array(
self.embedding_function(query), dtype=np.float32
).tolist(),
k=k,
fields=FIELDS_CONTENT_VECTOR,
),
select=[f"{FIELDS_ID},{FIELDS_CONTENT},{FIELDS_METADATA}"],
vector=np.array(self.embedding_function(query), dtype=np.float32).tolist(),
top_k=k,
vector_fields=FIELDS_CONTENT_VECTOR,
select=[FIELDS_ID, FIELDS_CONTENT, FIELDS_METADATA],
filter=filters,
)
# Convert results to Document objects
@@ -346,18 +400,13 @@ class AzureSearch(VectorStore):
Returns:
List of Documents most similar to the query and score for each
"""
from azure.search.documents.models import Vector
results = self.client.search(
search_text=query,
vector=Vector(
value=np.array(
self.embedding_function(query), dtype=np.float32
).tolist(),
k=k,
fields=FIELDS_CONTENT_VECTOR,
),
select=[f"{FIELDS_ID},{FIELDS_CONTENT},{FIELDS_METADATA}"],
vector=np.array(self.embedding_function(query), dtype=np.float32).tolist(),
top_k=k,
vector_fields=FIELDS_CONTENT_VECTOR,
select=[FIELDS_ID, FIELDS_CONTENT, FIELDS_METADATA],
filter=filters,
top=k,
)
@@ -404,18 +453,12 @@ class AzureSearch(VectorStore):
Returns:
List of Documents most similar to the query and score for each
"""
from azure.search.documents.models import Vector
results = self.client.search(
search_text=query,
vector=Vector(
value=np.array(
self.embedding_function(query), dtype=np.float32
).tolist(),
k=50, # Hardcoded value to maximize L2 retrieval
fields=FIELDS_CONTENT_VECTOR,
),
select=[f"{FIELDS_ID},{FIELDS_CONTENT},{FIELDS_METADATA}"],
vector=np.array(self.embedding_function(query), dtype=np.float32).tolist(),
top_k=50, # Hardcoded value to maximize L2 retrieval
vector_fields=FIELDS_CONTENT_VECTOR,
select=[FIELDS_ID, FIELDS_CONTENT, FIELDS_METADATA],
filter=filters,
query_type="semantic",
query_language=self.semantic_query_language,
@@ -425,8 +468,8 @@ class AzureSearch(VectorStore):
top=k,
)
# Get Semantic Answers
semantic_answers = results.get_answers()
semantic_answers_dict = {}
semantic_answers = results.get_answers() or []
semantic_answers_dict: Dict = {}
for semantic_answer in semantic_answers:
semantic_answers_dict[semantic_answer.key] = {
"text": semantic_answer.text,

@@ -748,14 +748,14 @@ six = ">=1.12.0"
[[package]]
name = "azure-search-documents"
version = "11.4.0a20230509004"
version = "11.4.0b6"
description = "Microsoft Azure Cognitive Search Client Library for Python"
category = "main"
optional = true
python-versions = ">=3.7"
files = [
{file = "azure-search-documents-11.4.0a20230509004.zip", hash = "sha256:6cca144573161a10aa0fcd13927264453e79c63be6a53cf2ec241c9c8c22f6b5"},
{file = "azure_search_documents-11.4.0a20230509004-py3-none-any.whl", hash = "sha256:6215e9a4f9e935ff3eac1b7d5519c6c0789b4497eb11242d376911aaefbb0359"},
{file = "azure-search-documents-11.4.0b6.zip", hash = "sha256:c9ebd7d99d3c7b879f48acad66141e1f50eae4468cfb8389a4b25d4c620e8df1"},
{file = "azure_search_documents-11.4.0b6-py3-none-any.whl", hash = "sha256:24ff85bf2680c36b38d8092bcbbe2d90699aac7c4a228b0839c0ce595a41628c"},
]
[package.dependencies]
@@ -763,11 +763,6 @@ azure-common = ">=1.1,<2.0"
azure-core = ">=1.24.0,<2.0.0"
isodate = ">=0.6.0"
[package.source]
type = "legacy"
url = "https://pkgs.dev.azure.com/azure-sdk/public/_packaging/azure-sdk-for-python/pypi/simple"
reference = "azure-sdk-dev"
[[package]]
name = "backcall"
version = "0.2.0"
@@ -12552,4 +12547,4 @@ text-helpers = ["chardet"]
[metadata]
lock-version = "2.0"
python-versions = ">=3.8.1,<4.0"
content-hash = "4f5d91f450555bb3a039c3aef4a7996d1322f25608ec17a7b0c1ad92813d6a63"
content-hash = "dfd8a8fc0b896d75c92b268160bdd5bc87de1f997014c0f092fbc442b5c3f900"

@@ -111,7 +111,7 @@ nebula3-python = {version = "^3.4.0", optional = true}
mwparserfromhell = {version = "^0.6.4", optional = true}
mwxml = {version = "^0.3.3", optional = true}
awadb = {version = "^0.3.3", optional = true}
azure-search-documents = {version = "11.4.0a20230509004", source = "azure-sdk-dev", optional = true}
azure-search-documents = {version = "11.4.0b6", optional = true}
esprima = {version = "^4.0.1", optional = true}
openllm = {version = ">=0.1.19", optional = true}
streamlit = {version = "^1.18.0", optional = true, python = ">=3.8.1,<3.9.7 || >3.9.7,<4.0"}
@@ -358,11 +358,6 @@ extended_testing = [
"jinja2",
]
[[tool.poetry.source]]
name = "azure-sdk-dev"
url = "https://pkgs.dev.azure.com/azure-sdk/public/_packaging/azure-sdk-for-python/pypi/simple/"
secondary = true
[tool.ruff]
select = [
"E", # pycodestyle
