Compare commits

...

13 Commits

Author SHA1 Message Date
blob42 c73a145ba2 [searx-search] helper method to get cached results 1 year ago
blob42 734755e019 [searx-search] fix and update reference doc 1 year ago
blob42 94aafa3f55 [searx-search] linting 1 year ago
blob42 8edf20b570 [searx-search] update notebook examples 1 year ago
blob42 3a9fd229d9 [searx-search] better handling of results and API errors 1 year ago
blob42 d7eedc75d1 [searx-search] helper parameter for selecting engines 1 year ago
blob42 9d8f4fde67 [searx-search] fix setting language parameter 1 year ago
blob42 73ec695f9a [searx-search] move module under utilities
- Make the module loadable the same way as other utilities
1 year ago
blob42 c19fe2b678 [searx-search] fix docs, format, clean tests 1 year ago
blob42 a62b134e99 [searx-search] add docs, improved wrapper api, registered as tool
- Improved the search wrapper API to mirror the usage of the google
  search one.
- Register searx-search as loadable tool
- Added documentation and example notebook
1 year ago
blob42 a21e9becd4 [searx-search] better module and class names 1 year ago
blob42 6865fba689 [searx-search] Implement base results parser and helpers
- handle `answer` field when available
- mirror the google search tool usage
- limit the number of results
- implement a separate results() to return results with metadata
1 year ago
blob42 769ffc9149 [searx-search] query using base class and host address
- allow unverified https connections for private searx instances
1 year ago

@ -0,0 +1,35 @@
# SearxNG Search API
This page covers how to use the SearxNG search API within LangChain.
It is broken into two parts: installation and setup, and then references to the specific SearxNG API wrapper.
## Installation and Setup
- You can find a list of public SearxNG instances [here](https://searx.space/).
- It is recommended to use a self-hosted instance to avoid abuse of the public instances. Also note that public instances often have a limit on the number of requests.
- To run a self-hosted instance see [this page](https://searxng.github.io/searxng/admin/installation.html) for more information.
- To use the tool you need to provide the SearxNG host URL by either:
1. passing the named parameter `searx_host` when creating the instance, or
2. exporting the environment variable `SEARXNG_HOST`.
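As a rough sketch, the lookup order above amounts to the following (the function name and error behavior are illustrative, not the actual LangChain implementation):

```python
import os

def resolve_searx_host(searx_host=None):
    # Illustrative only: an explicit parameter wins, otherwise fall back
    # to the SEARXNG_HOST environment variable; anything else is an error.
    host = searx_host or os.environ.get("SEARXNG_HOST")
    if not host:
        raise ValueError("No SearxNG host provided")
    return host
```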
## Wrappers
### Utility
You can use the wrapper to get results from a SearxNG instance.
```python
from langchain.utilities import SearxSearchWrapper
```
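Under the hood, a wrapper like this queries the SearxNG HTTP search endpoint with `format=json` (per the SearxNG search API documentation). A minimal stdlib-only sketch of the request URL it might build (the host URL is a placeholder):

```python
import urllib.parse

def build_search_url(searx_host: str, query: str, **params) -> str:
    # SearxNG returns JSON results when format=json is passed
    query_params = {"q": query, "format": "json", **params}
    return searx_host.rstrip("/") + "/search?" + urllib.parse.urlencode(query_params)

build_search_url("http://127.0.0.1:8888", "langchain", language="en")
# → 'http://127.0.0.1:8888/search?q=langchain&format=json&language=en'
```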
### Tool
You can also easily load this wrapper as a Tool (to use with an Agent).
You can do this with:
```python
from langchain.agents import load_tools
tools = load_tools(["searx-search"], searx_host="https://searx.example.com")
```
For more information on this, see [this page](../modules/agents/tools.md)

@ -119,3 +119,11 @@ Below is a list of all supported tools and relevant information:
- Requires LLM: No
- Extra Parameters: `google_api_key`, `google_cse_id`
- For more information on this, see [this page](../../ecosystem/google_search.md)
**searx-search**
- Tool Name: Search
- Tool Description: A wrapper around SearxNG meta search engine. Input should be a search query.
- Notes: SearxNG is easy to deploy and self-host. It is a good privacy-friendly alternative to Google Search. Uses the SearxNG API.
- Requires LLM: No
- Extra Parameters: `searx_host`
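The `Extra Parameters` listed above are the optional keyword arguments that `load_tools` forwards to each tool's factory. A simplified sketch of that dispatch (registry and factory names here are illustrative, not LangChain's internals):

```python
def _make_searx_tool(searx_host=None, **kwargs):
    # stand-in for the real tool factory; just records its config
    return {"name": "Search", "searx_host": searx_host}

# registry mapping tool name -> (factory, accepted extra parameter names)
OPTIONAL_TOOLS = {
    "searx-search": (_make_searx_tool, ["searx_host"]),
}

def load_tool(name, **kwargs):
    factory, allowed = OPTIONAL_TOOLS[name]
    # only the declared extra parameters are passed through
    extras = {k: v for k, v in kwargs.items() if k in allowed}
    return factory(**extras)

load_tool("searx-search", searx_host="https://searx.example.com")
# → {'name': 'Search', 'searx_host': 'https://searx.example.com'}
```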

@ -0,0 +1,617 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {
"jukit_cell_id": "DUXgyWySl5"
},
"source": [
"# SearxNG Search API\n",
"\n",
"This notebook goes over how to use a self-hosted SearxNG search API to search the web.\n",
"\n",
"You can [check this link](https://docs.searxng.org/dev/search_api.html) for more information about Searx API parameters."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"jukit_cell_id": "OIHXztO2UT"
},
"outputs": [],
"source": [
"import pprint\n",
"from langchain.utilities import SearxSearchWrapper"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"jukit_cell_id": "4SzT9eDMjt"
},
"outputs": [],
"source": [
"search = SearxSearchWrapper(searx_host=\"http://127.0.0.1:8888\")"
]
},
{
"cell_type": "markdown",
"metadata": {
"jukit_cell_id": "jCSkIlQDUK"
},
"source": [
"For some engines, if a direct `answer` is available the wrapper will return the answer instead of the full list of search results. You can use the `results` method of the wrapper if you want to obtain all the results."
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {
"jukit_cell_id": "gGM9PVQX6m"
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n"
]
},
{
"data": {
"text/plain": [
"'Paris is the capital of France, the largest country of Europe with 550 000 km2 (65 millions inhabitants). Paris has 2.234 million inhabitants end 2011. She is the core of Ile de France region (12 million people).'"
]
},
"execution_count": 1,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"search.run(\"What is the capital of France\")"
]
},
{
"cell_type": "markdown",
"metadata": {
"jukit_cell_id": "OHyurqUPbS"
},
"source": [
"# Custom Parameters\n",
"\n",
"SearxNG supports up to [139 search engines](https://docs.searxng.org/admin/engines/configured_engines.html#configured-engines). You can also customize the Searx wrapper with arbitrary named parameters that will be passed to the Searx search API. The examples below make more interesting use of the custom search parameters offered by the Searx search API."
]
},
{
"cell_type": "markdown",
"metadata": {
"jukit_cell_id": "n1B2AyLKi4"
},
"source": [
"In this example we will be using the `engines` parameter to query Wikipedia."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"jukit_cell_id": "UTEdJ03LqA"
},
"outputs": [],
"source": [
"search = SearxSearchWrapper(searx_host=\"http://127.0.0.1:8888\", k=5) # k is for max number of items"
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {
"jukit_cell_id": "3FyQ6yHI8K"
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n"
]
},
{
"data": {
"text/plain": [
"'Large language models (LLMs) represent a major advancement in AI, with the promise of transforming domains through learned knowledge. LLM sizes have been increasing 10X every year for the last few years, and as these models grow in complexity and size, so do their capabilities.\\n\\nGPT-3 can translate language, write essays, generate computer code, and more — all with limited to no supervision. In July 2020, OpenAI unveiled GPT-3, a language model that was easily the largest known at the time. Put simply, GPT-3 is trained to predict the next word in a sentence, much like how a text message autocomplete feature works.\\n\\nA large language model, or LLM, is a deep learning algorithm that can recognize, summarize, translate, predict and generate text and other content based on knowledge gained from massive datasets. Large language models are among the most successful applications of transformer models.\\n\\nAll of todays well-known language models—e.g., GPT-3 from OpenAI, PaLM or LaMDA from Google, Galactica or OPT from Meta, Megatron-Turing from Nvidia/Microsoft, Jurassic-1 from AI21 Labs—are...\\n\\nLarge language models (LLMs) such as GPT-3are increasingly being used to generate text. These tools should be used with care, since they can generate content that is biased, non-verifiable, constitutes original research, or violates copyrights.'"
]
},
"execution_count": 2,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"search.run(\"large language model \", engines=['wiki'])"
]
},
{
"cell_type": "markdown",
"metadata": {
"jukit_cell_id": "SYz8nFkt81"
},
"source": [
"Passing other Searx parameters such as `language`:"
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {
"jukit_cell_id": "32rDh0Mvbx"
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n"
]
},
{
"data": {
"text/plain": [
"'Aprendizaje profundo (en inglés, deep learning) es un conjunto de algoritmos de aprendizaje automático (en inglés, machine learning) que intenta modelar abstracciones de alto nivel en datos usando arquitecturas computacionales que admiten transformaciones no lineales múltiples e iterativas de datos expresados en forma matricial o tensorial. 1'"
]
},
"execution_count": 3,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"search = SearxSearchWrapper(searx_host=\"http://127.0.0.1:8888\", k=1)\n",
"search.run(\"deep learning\", language='es', engines=['wiki'])"
]
},
{
"cell_type": "markdown",
"metadata": {
"jukit_cell_id": "d0x164ssV1"
},
"source": [
"# Obtaining results with metadata"
]
},
{
"cell_type": "markdown",
"metadata": {
"jukit_cell_id": "pF6rs8XcDH"
},
"source": [
"In this example we will be looking for scientific papers using the `categories` parameter and limiting the results to a `time_range` (not all engines support the time range option).\n",
"\n",
"We also would like to obtain the results in a structured way including metadata. For this we will be using the `results` method of the wrapper."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"jukit_cell_id": "BFgpPH0sxF"
},
"outputs": [],
"source": [
"search = SearxSearchWrapper(searx_host=\"http://127.0.0.1:8888\")"
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {
"jukit_cell_id": "r7qUtvKNOh"
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"[{'snippet': '… on natural language instructions, large language models (… the '\n",
" 'prompt used to steer the model, and most effective prompts … to '\n",
" 'prompt engineering, we propose Automatic Prompt …',\n",
" 'title': 'Large language models are human-level prompt engineers',\n",
" 'link': 'https://arxiv.org/abs/2211.01910',\n",
" 'engines': ['google scholar'],\n",
" 'category': 'science'},\n",
" {'snippet': '… Large language models (LLMs) have introduced new possibilities '\n",
" 'for prototyping with AI [18]. Pre-trained on a large amount of '\n",
" 'text data, models … language instructions called prompts. …',\n",
" 'title': 'Promptchainer: Chaining large language model prompts through '\n",
" 'visual programming',\n",
" 'link': 'https://dl.acm.org/doi/abs/10.1145/3491101.3519729',\n",
" 'engines': ['google scholar'],\n",
" 'category': 'science'},\n",
" {'snippet': '… can introspect the large prompt model. We derive the view '\n",
" 'ϕ0(X) and the model h0 from T01. However, instead of fully '\n",
" 'fine-tuning T0 during co-training, we focus on soft prompt '\n",
" 'tuning, …',\n",
" 'title': 'Co-training improves prompt-based learning for large language '\n",
" 'models',\n",
" 'link': 'https://proceedings.mlr.press/v162/lang22a.html',\n",
" 'engines': ['google scholar'],\n",
" 'category': 'science'},\n",
" {'snippet': '… With the success of large language models (LLMs) of code and '\n",
" 'their use as … prompt design process become important. In this '\n",
" 'work, we propose a framework called Repo-Level Prompt …',\n",
" 'title': 'Repository-level prompt generation for large language models of '\n",
" 'code',\n",
" 'link': 'https://arxiv.org/abs/2206.12839',\n",
" 'engines': ['google scholar'],\n",
" 'category': 'science'},\n",
" {'snippet': '… Figure 2 | The benefits of different components of a prompt '\n",
" 'for the largest language model (Gopher), as estimated from '\n",
" 'hierarchical logistic regression. Each point estimates the '\n",
" 'unique …',\n",
" 'title': 'Can language models learn from explanations in context?',\n",
" 'link': 'https://arxiv.org/abs/2204.02329',\n",
" 'engines': ['google scholar'],\n",
" 'category': 'science'}]\n"
]
}
],
"source": [
"results = search.results(\"Large Language Model prompt\", num_results=5, categories='science', time_range='year')\n",
"pprint.pp(results)"
]
},
{
"cell_type": "markdown",
"metadata": {
"jukit_cell_id": "2seI78pR8T"
},
"source": [
"Get papers from arXiv:"
]
},
{
"cell_type": "code",
"execution_count": 5,
"metadata": {
"jukit_cell_id": "JyNgoFm0vo"
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"[{'snippet': 'Thanks to the advanced improvement of large pre-trained language '\n",
" 'models, prompt-based fine-tuning is shown to be effective on a '\n",
" 'variety of downstream tasks. Though many prompting methods have '\n",
" 'been investigated, it remains unknown which type of prompts are '\n",
" 'the most effective among three types of prompts (i.e., '\n",
" 'human-designed prompts, schema prompts and null prompts). In '\n",
" 'this work, we empirically compare the three types of prompts '\n",
" 'under both few-shot and fully-supervised settings. Our '\n",
" 'experimental results show that schema prompts are the most '\n",
" 'effective in general. Besides, the performance gaps tend to '\n",
" 'diminish when the scale of training data grows large.',\n",
" 'title': 'Do Prompts Solve NLP Tasks Using Natural Language?',\n",
" 'link': 'http://arxiv.org/abs/2203.00902v1',\n",
" 'engines': ['arxiv'],\n",
" 'category': 'science'},\n",
" {'snippet': 'Cross-prompt automated essay scoring (AES) requires the system '\n",
" 'to use non target-prompt essays to award scores to a '\n",
" 'target-prompt essay. Since obtaining a large quantity of '\n",
" 'pre-graded essays to a particular prompt is often difficult and '\n",
" 'unrealistic, the task of cross-prompt AES is vital for the '\n",
" 'development of real-world AES systems, yet it remains an '\n",
" 'under-explored area of research. Models designed for '\n",
" 'prompt-specific AES rely heavily on prompt-specific knowledge '\n",
" 'and perform poorly in the cross-prompt setting, whereas current '\n",
" 'approaches to cross-prompt AES either require a certain quantity '\n",
" 'of labelled target-prompt essays or require a large quantity of '\n",
" 'unlabelled target-prompt essays to perform transfer learning in '\n",
" 'a multi-step manner. To address these issues, we introduce '\n",
" 'Prompt Agnostic Essay Scorer (PAES) for cross-prompt AES. Our '\n",
" 'method requires no access to labelled or unlabelled '\n",
" 'target-prompt data during training and is a single-stage '\n",
" 'approach. PAES is easy to apply in practice and achieves '\n",
" 'state-of-the-art performance on the Automated Student Assessment '\n",
" 'Prize (ASAP) dataset.',\n",
" 'title': 'Prompt Agnostic Essay Scorer: A Domain Generalization Approach to '\n",
" 'Cross-prompt Automated Essay Scoring',\n",
" 'link': 'http://arxiv.org/abs/2008.01441v1',\n",
" 'engines': ['arxiv'],\n",
" 'category': 'science'},\n",
" {'snippet': 'Research on prompting has shown excellent performance with '\n",
" 'little or even no supervised training across many tasks. '\n",
" 'However, prompting for machine translation is still '\n",
" 'under-explored in the literature. We fill this gap by offering a '\n",
" 'systematic study on prompting strategies for translation, '\n",
" 'examining various factors for prompt template and demonstration '\n",
" 'example selection. We further explore the use of monolingual '\n",
" 'data and the feasibility of cross-lingual, cross-domain, and '\n",
" 'sentence-to-document transfer learning in prompting. Extensive '\n",
" 'experiments with GLM-130B (Zeng et al., 2022) as the testbed '\n",
" 'show that 1) the number and the quality of prompt examples '\n",
" 'matter, where using suboptimal examples degenerates translation; '\n",
" '2) several features of prompt examples, such as semantic '\n",
" 'similarity, show significant Spearman correlation with their '\n",
" 'prompting performance; yet, none of the correlations are strong '\n",
" 'enough; 3) using pseudo parallel prompt examples constructed '\n",
" 'from monolingual data via zero-shot prompting could improve '\n",
" 'translation; and 4) improved performance is achievable by '\n",
" 'transferring knowledge from prompt examples selected in other '\n",
" 'settings. We finally provide an analysis on the model outputs '\n",
" 'and discuss several problems that prompting still suffers from.',\n",
" 'title': 'Prompting Large Language Model for Machine Translation: A Case '\n",
" 'Study',\n",
" 'link': 'http://arxiv.org/abs/2301.07069v2',\n",
" 'engines': ['arxiv'],\n",
" 'category': 'science'},\n",
" {'snippet': 'Large language models can perform new tasks in a zero-shot '\n",
" 'fashion, given natural language prompts that specify the desired '\n",
" 'behavior. Such prompts are typically hand engineered, but can '\n",
" 'also be learned with gradient-based methods from labeled data. '\n",
" 'However, it is underexplored what factors make the prompts '\n",
" 'effective, especially when the prompts are natural language. In '\n",
" 'this paper, we investigate common attributes shared by effective '\n",
" 'prompts. We first propose a human readable prompt tuning method '\n",
" '(F LUENT P ROMPT) based on Langevin dynamics that incorporates a '\n",
" 'fluency constraint to find a diverse distribution of effective '\n",
" 'and fluent prompts. Our analysis reveals that effective prompts '\n",
" 'are topically related to the task domain and calibrate the prior '\n",
" 'probability of label words. Based on these findings, we also '\n",
" 'propose a method for generating prompts using only unlabeled '\n",
" 'data, outperforming strong baselines by an average of 7.0% '\n",
" 'accuracy across three tasks.',\n",
" 'title': \"Toward Human Readable Prompt Tuning: Kubrick's The Shining is a \"\n",
" 'good movie, and a good prompt too?',\n",
" 'link': 'http://arxiv.org/abs/2212.10539v1',\n",
" 'engines': ['arxiv'],\n",
" 'category': 'science'},\n",
" {'snippet': 'Prevailing methods for mapping large generative language models '\n",
" \"to supervised tasks may fail to sufficiently probe models' novel \"\n",
" 'capabilities. Using GPT-3 as a case study, we show that 0-shot '\n",
" 'prompts can significantly outperform few-shot prompts. We '\n",
" 'suggest that the function of few-shot examples in these cases is '\n",
" 'better described as locating an already learned task rather than '\n",
" 'meta-learning. This analysis motivates rethinking the role of '\n",
" 'prompts in controlling and evaluating powerful language models. '\n",
" 'In this work, we discuss methods of prompt programming, '\n",
" 'emphasizing the usefulness of considering prompts through the '\n",
" 'lens of natural language. We explore techniques for exploiting '\n",
" 'the capacity of narratives and cultural anchors to encode '\n",
" 'nuanced intentions and techniques for encouraging deconstruction '\n",
" 'of a problem into components before producing a verdict. '\n",
" 'Informed by this more encompassing theory of prompt programming, '\n",
" 'we also introduce the idea of a metaprompt that seeds the model '\n",
" 'to generate its own natural language prompts for a range of '\n",
" 'tasks. Finally, we discuss how these more general methods of '\n",
" 'interacting with language models can be incorporated into '\n",
" 'existing and future benchmarks and practical applications.',\n",
" 'title': 'Prompt Programming for Large Language Models: Beyond the Few-Shot '\n",
" 'Paradigm',\n",
" 'link': 'http://arxiv.org/abs/2102.07350v1',\n",
" 'engines': ['arxiv'],\n",
" 'category': 'science'}]\n"
]
}
],
"source": [
"results = search.results(\"Large Language Model prompt\", num_results=5, engines=['arxiv'])\n",
"pprint.pp(results)"
]
},
{
"cell_type": "markdown",
"metadata": {
"jukit_cell_id": "LhEisLFcZM"
},
"source": [
"In this example we query for `large language models` under the `it` category. We then filter out the results that come from GitHub."
]
},
{
"cell_type": "code",
"execution_count": 6,
"metadata": {
"jukit_cell_id": "aATPfXzGzx"
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"[{'snippet': 'Guide to using pre-trained large language models of source code',\n",
" 'title': 'Code-LMs',\n",
" 'link': 'https://github.com/VHellendoorn/Code-LMs',\n",
" 'engines': ['github'],\n",
" 'category': 'it'},\n",
" {'snippet': 'Dramatron uses large language models to generate coherent '\n",
" 'scripts and screenplays.',\n",
" 'title': 'dramatron',\n",
" 'link': 'https://github.com/deepmind/dramatron',\n",
" 'engines': ['github'],\n",
" 'category': 'it'}]\n"
]
}
],
"source": [
"results = search.results(\"large language model\", num_results = 20, categories='it')\n",
"pprint.pp(list(filter(lambda r: r['engines'][0] == 'github', results)))"
]
},
{
"cell_type": "markdown",
"metadata": {
"jukit_cell_id": "zDo2YjafuU"
},
"source": [
"We could also directly query for results from `github` and other source forges."
]
},
{
"cell_type": "code",
"execution_count": 7,
"metadata": {
"jukit_cell_id": "5NrlredKxM"
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"[{'snippet': \"Implementation of 'A Watermark for Large Language Models' paper \"\n",
" 'by Kirchenbauer & Geiping et. al.',\n",
" 'title': 'Peutlefaire / LMWatermark',\n",
" 'link': 'https://gitlab.com/BrianPulfer/LMWatermark',\n",
" 'engines': ['gitlab'],\n",
" 'category': 'it'},\n",
" {'snippet': 'Guide to using pre-trained large language models of source code',\n",
" 'title': 'Code-LMs',\n",
" 'link': 'https://github.com/VHellendoorn/Code-LMs',\n",
" 'engines': ['github'],\n",
" 'category': 'it'},\n",
" {'snippet': '',\n",
" 'title': 'Simen Burud / Large-scale Language Models for Conversational '\n",
" 'Speech Recognition',\n",
" 'link': 'https://gitlab.com/BrianPulfer',\n",
" 'engines': ['gitlab'],\n",
" 'category': 'it'},\n",
" {'snippet': 'Dramatron uses large language models to generate coherent '\n",
" 'scripts and screenplays.',\n",
" 'title': 'dramatron',\n",
" 'link': 'https://github.com/deepmind/dramatron',\n",
" 'engines': ['github'],\n",
" 'category': 'it'},\n",
" {'snippet': 'Code for loralib, an implementation of \"LoRA: Low-Rank '\n",
" 'Adaptation of Large Language Models\"',\n",
" 'title': 'LoRA',\n",
" 'link': 'https://github.com/microsoft/LoRA',\n",
" 'engines': ['github'],\n",
" 'category': 'it'},\n",
" {'snippet': 'Code for the paper \"Evaluating Large Language Models Trained on '\n",
" 'Code\"',\n",
" 'title': 'human-eval',\n",
" 'link': 'https://github.com/openai/human-eval',\n",
" 'engines': ['github'],\n",
" 'category': 'it'},\n",
" {'snippet': 'A trend starts from \"Chain of Thought Prompting Elicits '\n",
" 'Reasoning in Large Language Models\".',\n",
" 'title': 'Chain-of-ThoughtsPapers',\n",
" 'link': 'https://github.com/Timothyxxx/Chain-of-ThoughtsPapers',\n",
" 'engines': ['github'],\n",
" 'category': 'it'},\n",
" {'snippet': 'Mistral: A strong, northwesterly wind: Framework for transparent '\n",
" 'and accessible large-scale language model training, built with '\n",
" 'Hugging Face 🤗 Transformers.',\n",
" 'title': 'mistral',\n",
" 'link': 'https://github.com/stanford-crfm/mistral',\n",
" 'engines': ['github'],\n",
" 'category': 'it'},\n",
" {'snippet': 'A prize for finding tasks that cause large language models to '\n",
" 'show inverse scaling',\n",
" 'title': 'prize',\n",
" 'link': 'https://github.com/inverse-scaling/prize',\n",
" 'engines': ['github'],\n",
" 'category': 'it'},\n",
" {'snippet': 'Optimus: the first large-scale pre-trained VAE language model',\n",
" 'title': 'Optimus',\n",
" 'link': 'https://github.com/ChunyuanLI/Optimus',\n",
" 'engines': ['github'],\n",
" 'category': 'it'},\n",
" {'snippet': 'Seminar on Large Language Models (COMP790-101 at UNC Chapel '\n",
" 'Hill, Fall 2022)',\n",
" 'title': 'llm-seminar',\n",
" 'link': 'https://github.com/craffel/llm-seminar',\n",
" 'engines': ['github'],\n",
" 'category': 'it'},\n",
" {'snippet': 'A central, open resource for data and tools related to '\n",
" 'chain-of-thought reasoning in large language models. Developed @ '\n",
" 'Samwald research group: https://samwald.info/',\n",
" 'title': 'ThoughtSource',\n",
" 'link': 'https://github.com/OpenBioLink/ThoughtSource',\n",
" 'engines': ['github'],\n",
" 'category': 'it'},\n",
" {'snippet': 'A comprehensive list of papers using large language/multi-modal '\n",
" 'models for Robotics/RL, including papers, codes, and related '\n",
" 'websites',\n",
" 'title': 'Awesome-LLM-Robotics',\n",
" 'link': 'https://github.com/GT-RIPL/Awesome-LLM-Robotics',\n",
" 'engines': ['github'],\n",
" 'category': 'it'},\n",
" {'snippet': 'Tools for curating biomedical training data for large-scale '\n",
" 'language modeling',\n",
" 'title': 'biomedical',\n",
" 'link': 'https://github.com/bigscience-workshop/biomedical',\n",
" 'engines': ['github'],\n",
" 'category': 'it'},\n",
" {'snippet': 'ChatGPT @ Home: Large Language Model (LLM) chatbot application, '\n",
" 'written by ChatGPT',\n",
" 'title': 'ChatGPT-at-Home',\n",
" 'link': 'https://github.com/Sentdex/ChatGPT-at-Home',\n",
" 'engines': ['github'],\n",
" 'category': 'it'},\n",
" {'snippet': 'Design and Deploy Large Language Model Apps',\n",
" 'title': 'dust',\n",
" 'link': 'https://github.com/dust-tt/dust',\n",
" 'engines': ['github'],\n",
" 'category': 'it'},\n",
" {'snippet': 'Polyglot: Large Language Models of Well-balanced Competence in '\n",
" 'Multi-languages',\n",
" 'title': 'polyglot',\n",
" 'link': 'https://github.com/EleutherAI/polyglot',\n",
" 'engines': ['github'],\n",
" 'category': 'it'},\n",
" {'snippet': 'Code release for \"Learning Video Representations from Large '\n",
" 'Language Models\"',\n",
" 'title': 'LaViLa',\n",
" 'link': 'https://github.com/facebookresearch/LaViLa',\n",
" 'engines': ['github'],\n",
" 'category': 'it'},\n",
" {'snippet': 'SmoothQuant: Accurate and Efficient Post-Training Quantization '\n",
" 'for Large Language Models',\n",
" 'title': 'smoothquant',\n",
" 'link': 'https://github.com/mit-han-lab/smoothquant',\n",
" 'engines': ['github'],\n",
" 'category': 'it'},\n",
" {'snippet': 'This repository contains the code, data, and models of the paper '\n",
" 'titled \"XL-Sum: Large-Scale Multilingual Abstractive '\n",
" 'Summarization for 44 Languages\" published in Findings of the '\n",
" 'Association for Computational Linguistics: ACL-IJCNLP 2021.',\n",
" 'title': 'xl-sum',\n",
" 'link': 'https://github.com/csebuetnlp/xl-sum',\n",
" 'engines': ['github'],\n",
" 'category': 'it'}]\n"
]
}
],
"source": [
"results = search.results(\"large language model\", num_results = 20, engines=['github', 'gitlab'])\n",
"pprint.pp(results)"
]
}
],
"metadata": {
"anaconda-cloud": {},
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.11"
}
},
"nbformat": 4,
"nbformat_minor": 4
}

@ -15,6 +15,8 @@ The utilities listed here are all generic utilities.
`SerpAPI <./examples/serpapi.html>`_: How to use the SerpAPI wrapper to search the web.
`SearxNG Search API <./examples/searx_search.html>`_: How to use the SearxNG meta search wrapper to search the web.
`Bing Search <./examples/bing_search.html>`_: How to use the Bing search wrapper to search the web.
`Wolfram Alpha <./examples/wolfram_alpha.html>`_: How to use the Wolfram Alpha wrapper to interact with Wolfram Alpha.

@ -36,3 +36,8 @@ This uses the official Google Search API to look up information on the web.
## SerpAPI
This uses SerpAPI, a third party search API engine, to interact with Google Search.
## Searx Search
This uses the SearxNG (a fork of Searx) meta search engine API to look up
information on the web. It supports 139 search engines and is easy to self-host,
which makes it a good choice for privacy-conscious users.

@ -0,0 +1,6 @@
SearxNG Search
=============================
.. automodule:: langchain.utilities.searx_search
:members:
:undoc-members:

@ -13,6 +13,7 @@ These can largely be grouped into two categories: generic utilities, and then ut
modules/python
modules/serpapi
modules/searx_search
.. toctree::

@ -33,6 +33,7 @@ from langchain.prompts import (
from langchain.serpapi import SerpAPIChain, SerpAPIWrapper
from langchain.sql_database import SQLDatabase
from langchain.utilities.google_search import GoogleSearchAPIWrapper
from langchain.utilities.searx_search import SearxSearchWrapper
from langchain.utilities.wolfram_alpha import WolframAlphaAPIWrapper
from langchain.vectorstores import FAISS, ElasticVectorSearch
@ -48,6 +49,7 @@ __all__ = [
"SelfAskWithSearchChain",
"SerpAPIWrapper",
"SerpAPIChain",
"SearxSearchWrapper",
"GoogleSearchAPIWrapper",
"WolframAlphaAPIWrapper",
"Anthropic",

@ -13,6 +13,7 @@ from langchain.requests import RequestsWrapper
from langchain.serpapi import SerpAPIWrapper
from langchain.utilities.bash import BashProcess
from langchain.utilities.google_search import GoogleSearchAPIWrapper
from langchain.utilities.searx_search import SearxSearchWrapper
from langchain.utilities.wolfram_alpha import WolframAlphaAPIWrapper
@ -140,14 +141,24 @@ def _get_serpapi(**kwargs: Any) -> Tool:
)
def _get_searx_search(**kwargs: Any) -> Tool:
return Tool(
name="Search",
description="A meta search engine. Useful for when you need to answer questions about current events. Input should be a search query.",
func=SearxSearchWrapper(**kwargs).run,
)
_EXTRA_LLM_TOOLS = {
"news-api": (_get_news_api, ["news_api_key"]),
"tmdb-api": (_get_tmdb_api, ["tmdb_bearer_token"]),
}
_EXTRA_OPTIONAL_TOOLS = {
"wolfram-alpha": (_get_wolfram_alpha, ["wolfram_alpha_appid"]),
"google-search": (_get_google_search, ["google_api_key", "google_cse_id"]),
"serpapi": (_get_serpapi, ["serpapi_api_key", "aiosession"]),
"searx-search": (_get_searx_search, ["searx_host"]),
}

@ -5,6 +5,7 @@ from langchain.serpapi import SerpAPIWrapper
from langchain.utilities.bash import BashProcess
from langchain.utilities.bing_search import BingSearchAPIWrapper
from langchain.utilities.google_search import GoogleSearchAPIWrapper
from langchain.utilities.searx_search import SearxSearchWrapper
from langchain.utilities.wolfram_alpha import WolframAlphaAPIWrapper
__all__ = [
@ -14,5 +15,6 @@ __all__ = [
"GoogleSearchAPIWrapper",
"WolframAlphaAPIWrapper",
"SerpAPIWrapper",
"SearxSearchWrapper",
"BingSearchAPIWrapper",
]

@ -0,0 +1,326 @@
"""Chain that calls SearxNG meta search API.
SearxNG is a privacy-friendly free metasearch engine that aggregates results from
multiple search engines and databases.
For the search API refer to https://docs.searxng.org/dev/search_api.html
Quick Start
-----------
In order to use this chain you need to provide the searx host. This can be done
by passing the named parameter :attr:`searx_host <SearxSearchWrapper.searx_host>`
or exporting the environment variable SEARX_HOST.
Note: this is the only required parameter.
Then create a searx search instance like this:
.. code-block:: python
from langchain.utilities import SearxSearchWrapper
# when the host starts with `http` SSL is disabled and the connection
# is assumed to be on a private network
searx_host='http://self.hosted'
search = SearxSearchWrapper(searx_host=searx_host)
You can now use the ``search`` instance to query the searx API.
Searching
---------
Use the :meth:`run() <SearxSearchWrapper.run>` and
:meth:`results() <SearxSearchWrapper.results>` methods to query the searx API.
Other methods are available for convenience.
:class:`SearxResults` is a convenience wrapper around the raw json result.
Example usage of the ``run`` method to make a search:
.. code-block:: python
# `s` is the SearxSearchWrapper instance created above
s.run(query="what is the best search engine?")
Engine Parameters
-----------------
You can pass any `accepted searx search API
<https://docs.searxng.org/dev/search_api.html>`_ parameters to the
:py:class:`SearxSearchWrapper` instance.
In the following example we are using the
:attr:`engines <SearxSearchWrapper.engines>` and the ``language`` parameters:
.. code-block:: python
# assuming the searx host is set as above or exported as an env variable
s = SearxSearchWrapper(engines=['google', 'bing'],
language='es')
Search Tips
-----------
Searx offers a special
`search syntax <https://docs.searxng.org/user/index.html#search-syntax>`_
that can also be used instead of passing engine parameters.
For example the following query:
.. code-block:: python
s = SearxSearchWrapper("langchain library", engines=['github'])
# can also be written as:
s = SearxSearchWrapper("langchain library !github")
# or even:
s = SearxSearchWrapper("langchain library !gh")
See `SearxNG Configured Engines
<https://docs.searxng.org/admin/engines/configured_engines.html>`_ and
`SearxNG Search Syntax <https://docs.searxng.org/user/index.html#id1>`_
for more details.
Notes
-----
This wrapper is based on the SearxNG fork https://github.com/searxng/searxng which is
better maintained than the original Searx project and offers more features.
Public SearxNG instances often use a rate limiter for API usage, so you might want to
use a self-hosted instance and disable the rate limiter.
If you are self-hosting an instance you can customize the rate limiter for your
own network as described `here <https://github.com/searxng/searxng/pull/2129>`_.
For a list of public SearxNG instances see https://searx.space/
"""
import json
from typing import Any, Dict, List, Optional
import requests
from pydantic import BaseModel, Extra, Field, PrivateAttr, root_validator, validator
from langchain.utils import get_from_dict_or_env
def _get_default_params() -> dict:
return {"language": "en", "format": "json"}
class SearxResults(dict):
    """Dict like wrapper around search api results."""

    def __init__(self, data: str):
        """Take a raw result from Searx and make it into a dict like object."""
        json_data = json.loads(data)
        super().__init__(json_data)
        self.__dict__ = self

    def __str__(self) -> str:
        """Text representation of searx result."""
        # ``self.__dict__ = self`` discards regular instance attributes,
        # so serialize the dict contents instead of caching the raw string
        return json.dumps(self)
@property
def results(self) -> Any:
"""Silence mypy for accessing this field."""
return self.get("results")
@property
def answers(self) -> Any:
"""Accessor helper on the json result."""
return self.get("answers")
class SearxSearchWrapper(BaseModel):
"""Wrapper for Searx API.
To use you need to provide the searx host by passing the named parameter
``searx_host`` or exporting the environment variable ``SEARX_HOST``.
In some situations you might want to disable SSL verification, for example
if you are running searx locally. You can do this by passing the named parameter
``unsecure``. You can also pass the host url scheme as ``http`` to disable SSL.
Example:
.. code-block:: python
from langchain.utilities import SearxSearchWrapper
searx = SearxSearchWrapper(searx_host="https://searx.example.com")
Example with SSL disabled:
.. code-block:: python
from langchain.utilities import SearxSearchWrapper
# note the unsecure parameter is not needed if you pass the url scheme as
# http
searx = SearxSearchWrapper(searx_host="http://searx.example.com",
unsecure=True)
"""
_results: SearxResults = PrivateAttr()
searx_host: str = ""
unsecure: bool = False
params: dict = Field(default_factory=_get_default_params)
headers: Optional[dict] = None
engines: Optional[List[str]] = []
k: int = 10
@validator("unsecure")
def disable_ssl_warnings(cls, v: bool) -> bool:
"""Disable SSL warnings."""
if v:
# requests.urllib3.disable_warnings()
try:
import urllib3
urllib3.disable_warnings()
except ImportError as e:
print(e)
return v
@root_validator()
def validate_params(cls, values: Dict) -> Dict:
"""Validate that custom searx params are merged with default ones."""
user_params = values["params"]
default = _get_default_params()
values["params"] = {**default, **user_params}
engines = values.get("engines")
if engines:
values["params"]["engines"] = ",".join(engines)
searx_host = get_from_dict_or_env(values, "searx_host", "SEARX_HOST")
if not searx_host.startswith("http"):
            print(
                "Warning: missing the url scheme on host! "
                f"Assuming secure https://{searx_host}"
            )
searx_host = "https://" + searx_host
elif searx_host.startswith("http://"):
values["unsecure"] = True
cls.disable_ssl_warnings(True)
values["searx_host"] = searx_host
return values
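The scheme handling in ``validate_params`` can be isolated for illustration: no scheme defaults to ``https``, and a plain ``http`` scheme flips the ``unsecure`` flag. A sketch (the helper name is illustrative, not part of the wrapper's API):

```python
def normalize_host(searx_host: str) -> tuple:
    """Mirror the validator's scheme handling: default to https,
    and flag plain-http hosts as unsecure."""
    unsecure = False
    if not searx_host.startswith("http"):
        searx_host = "https://" + searx_host
    elif searx_host.startswith("http://"):
        unsecure = True
    return searx_host, unsecure

print(normalize_host("self.hosted"))       # ('https://self.hosted', False)
print(normalize_host("http://localhost"))  # ('http://localhost', True)
```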
class Config:
"""Configuration for this pydantic object."""
extra = Extra.forbid
def _searx_api_query(self, params: dict) -> SearxResults:
"""Actual request to searx API."""
raw_result = requests.get(
self.searx_host,
headers=self.headers,
params=params,
verify=not self.unsecure,
)
# test if http result is ok
if not raw_result.ok:
            raise ValueError(f"Searx API returned an error: {raw_result.text}")
res = SearxResults(raw_result.text)
self._results = res
return res
def run(self, query: str, engines: List[str] = [], **kwargs: Any) -> str:
"""Run query through Searx API and parse results.
You can pass any other params to the searx query API.
Args:
query: The query to search for.
engines: List of engines to use for the query.
**kwargs: extra parameters to pass to the searx API.
Example:
This will make a query to the qwant engine:
.. code-block:: python
from langchain.utilities import SearxSearchWrapper
searx = SearxSearchWrapper(searx_host="http://my.searx.host")
searx.run("what is the weather in France ?", engine="qwant")
"""
_params = {
"q": query,
}
params = {**self.params, **_params, **kwargs}
if isinstance(engines, list) and len(engines) > 0:
params["engines"] = ",".join(engines)
res = self._searx_api_query(params)
        if res.answers:
            toret = res.answers[0]
        # only return the content of the results list
        elif res.results:
toret = "\n\n".join([r.get("content", "") for r in res.results[: self.k]])
else:
toret = "No good search result found"
return toret
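The fallback order in ``run`` (instant answer first, then the top-k snippets joined together, then a not-found message) can be sketched independently of the HTTP call (function name is illustrative):

```python
def summarize_response(res: dict, k: int = 10) -> str:
    """Pick an instant answer if present, otherwise join result snippets."""
    answers = res.get("answers") or []
    if answers:
        return answers[0]
    results = res.get("results") or []
    if results:
        return "\n\n".join(r.get("content", "") for r in results[:k])
    return "No good search result found"

print(summarize_response({"answers": ["Paris"], "results": []}))  # Paris
print(summarize_response({"results": [{"content": "a"}, {"content": "b"}]}))
```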
def results(
self, query: str, num_results: int, engines: List[str] = [], **kwargs: Any
) -> List[Dict]:
"""Run query through Searx API and returns the results with metadata.
Args:
query: The query to search for.
num_results: Limit the number of results to return.
engines: List of engines to use for the query.
**kwargs: extra parameters to pass to the searx API.
Returns:
A list of dictionaries with the following keys:
snippet - The description of the result.
title - The title of the result.
link - The link to the result.
engines - The engines used for the result.
category - Searx category of the result.
"""
metadata_results = []
_params = {
"q": query,
}
params = {**self.params, **_params, **kwargs}
if isinstance(engines, list) and len(engines) > 0:
params["engines"] = ",".join(engines)
        results = (self._searx_api_query(params).results or [])[:num_results]
if len(results) == 0:
return [{"Result": "No good Search Result was found"}]
for result in results:
metadata_result = {
"snippet": result.get("content", ""),
"title": result["title"],
"link": result["url"],
"engines": result["engines"],
"category": result["category"],
}
metadata_results.append(metadata_result)
return metadata_results
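The per-result mapping in ``results`` boils down to renaming a few fields from the raw searx entry. A standalone sketch with a made-up raw result (helper name is illustrative):

```python
def to_metadata(result: dict) -> dict:
    """Map a raw searx result entry onto the wrapper's metadata keys."""
    return {
        "snippet": result.get("content", ""),
        "title": result["title"],
        "link": result["url"],
        "engines": result["engines"],
        "category": result["category"],
    }

raw = {
    "content": "LangChain docs",
    "title": "LangChain",
    "url": "https://example.org",
    "engines": ["google"],
    "category": "general",
}
print(to_metadata(raw)["link"])  # https://example.org
```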
@property
def raw_results(self) -> SearxResults:
"""Cached searx results from the last query in a dict like object."""
return self._results