"For some engines, if a direct `answer` is available the wrapper will print the answer instead of the full list of search results. You can use the `results` method of the wrapper if you want to obtain all the results."
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {
"jukit_cell_id": "gGM9PVQX6m"
},
"source": [
"search.run(\"Who is the current president of the united states of america?\")"
],
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n"
]
},
{
"data": {
"text/plain": [
"'In all, 45 individuals have served 46 presidencies spanning 58 full four-year terms. Joe Biden is the 46th and current president of the United States, having assumed office on January 20, 2021.'"
]
},
"execution_count": 1,
"metadata": {},
"output_type": "execute_result"
}
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {
"jukit_cell_id": "jCSkIlQDUK"
},
"outputs": [
{
"data": {
"text/plain": [
"'Paris is the capital of France, the largest country of Europe with 550 000 km2 (65 millions inhabitants). Paris has 2.234 million inhabitants end 2011. She is the core of Ile de France region (12 million people).'"
]
},
"execution_count": 1,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"search.run(\"What is the capital of France\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"jukit_cell_id": "UTEdJ03LqA"
},
"outputs": [],
"source": [
"search = SearxSearchWrapper(searx_host=\"http://127.0.0.1:8888\", k=5) # k is for max number of items"
]
},
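{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a quick sketch (assuming the same local Searx instance as above), `run` returns a single concatenated string, while `results` returns a list of result dicts with `snippet`, `title`, `link`, `engines` and `category` keys. The query below is only illustrative:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Illustrative query: `results` returns up to `num_results` metadata dicts\n",
"search.results(\"python programming\", num_results=2)"
]
},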
{
"cell_type": "code",
"execution_count": 2,
"metadata": {
"jukit_cell_id": "3FyQ6yHI8K"
},
"source": [
"search.run(\"large language model \", engines=['wiki'])"
],
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n"
]
},
{
"data": {
"text/plain": [
"'Large language models (LLMs) represent a major advancement in AI, with the promise of transforming domains through learned knowledge. LLM sizes have been increasing 10X every year for the last few years, and as these models grow in complexity and size, so do their capabilities.\n\nGPT-3 can translate language, write essays, generate computer code, and more — all with limited to no supervision. In July 2020, OpenAI unveiled GPT-3, a language model that was easily the largest known at the time. Put simply, GPT-3 is trained to predict the next word in a sentence, much like how a text message autocomplete feature works.\n\nA large language model, or LLM, is a deep learning algorithm that can recognize, summarize, translate, predict and generate text and other content based on knowledge gained from massive datasets. Large language models are among the most successful applications of transformer models.\n\nAll of today’s well-known language models—e.g., GPT-3 from OpenAI, PaLM or LaMDA from Google, Galactica or OPT from Meta, Megatron-Turing from Nvidia/Microsoft, Jurassic-1 from AI21 Labs—are...\n\nLarge language models (LLMs) such as GPT-3are increasingly being used to generate text. These tools should be used with care, since they can generate content that is biased, non-verifiable, constitutes original research, or violates copyrights.'"
]
},
"execution_count": 2,
"metadata": {},
"output_type": "execute_result"
}
]
},
{
"cell_type": "markdown",
"metadata": {
"jukit_cell_id": "SYz8nFkt81"
},
"source": [
"Passing other Searx parameters to the search, such as `language`."
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {
"jukit_cell_id": "32rDh0Mvbx"
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n"
]
},
{
"data": {
"text/plain": [
"'Aprendizaje profundo (en inglés, deep learning) es un conjunto de algoritmos de aprendizaje automático (en inglés, machine learning) que intenta modelar abstracciones de alto nivel en datos usando arquitecturas computacionales que admiten transformaciones no lineales múltiples e iterativas de datos expresados en forma matricial o tensorial. 1'"
]
},
"execution_count": 3,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"search.run(\"deep learning\", language='es')"
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {},
"source": [
"search.results(\"Large Language Model prompt\", num_results=5, categories='science', time_range='year')"
],
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"[{'snippet': '… on natural language instructions, large language models (… the '\n",
" 'prompt used to steer the model, and most effective prompts … to '\n",
" 'prompt engineering, we propose Automatic Prompt …',\n",
" 'title': 'Large language models are human-level prompt engineers',\n",
" 'link': 'https://arxiv.org/abs/2211.01910',\n",
" 'engines': ['google scholar'],\n",
" 'category': 'science'},\n",
" {'snippet': '… Large language models (LLMs) have introduced new possibilities '\n",
" 'for prototyping with AI [18]. Pre-trained on a large amount of '\n",
" 'text data, models … language instructions called prompts. …',\n",
" 'title': 'Promptchainer: Chaining large language model prompts through '\n",