Depending on the order in which the unit tests are executed, the Python modules
of the engines are initialized (monkey patched) or not. As the order of the
tests is not static, random errors may occur.
To avoid a random `NameError: name 'logger' is not defined` in the unit tests of
the xpath engine, a logger is monkey patched into the xpath Python module.
```
make test.unit
TEST tests/unit
......EE...................
======================================================================
ERROR: test_response (tests.unit.engines.test_xpath.TestXpathEngine.test_response)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "./tests/unit/engines/test_xpath.py", line 60, in test_response
    self.assertEqual(xpath.response(response), [])
                     ^^^^^^^^^^^^^^^^^^^^^^^^
  File "./searx/engines/xpath.py", line 309, in response
    logger.debug("found %s results", len(results))
    ^^^^^^
NameError: name 'logger' is not defined
======================================================================
ERROR: test_response_results_xpath (tests.unit.engines.test_xpath.TestXpathEngine.test_response_results_xpath)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "./tests/unit/engines/test_xpath.py", line 102, in test_response_results_xpath
    self.assertEqual(xpath.response(response), [])
                     ^^^^^^^^^^^^^^^^^^^^^^^^
  File "./searx/engines/xpath.py", line 309, in response
    logger.debug("found %s results", len(results))
    ^^^^^^
NameError: name 'logger' is not defined
```
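A sketch of the fix (the logger name is an assumption; the real patch happens in
the unit test setup):

```
# a minimal sketch of the monkey patch, done in the test setup:
import logging

from searx.engines import xpath

# engines normally get `logger` injected when the engine framework loads
# them; when a test imports the module directly, it must supply one itself
xpath.logger = logging.getLogger('searx.engines.xpath')
```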
Signed-off-by: Markus Heiser <markus.heiser@darmarit.de>
In the past, some files were checked with the standard profile, others with a
profile in which most of the messages were switched off, and some files were not
checked at all.
- ``PYLINT_SEARXNG_DISABLE_OPTION`` has been abolished
- the distinction ``# lint: pylint`` is no longer necessary
- the pylint tasks have been reduced from three to two
1. ./searx/engines -> lint engines with additional builtins
2. ./searx ./searxng_extra ./tests -> lint all other python files
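A sketch of the two remaining invocations (the builtin names and the ignore
flag are assumptions, not the Makefile's actual options):

```
# task 1: lint engines with additional builtins (names are assumptions)
pylint --additional-builtins='logger,supported_languages,language_aliases' ./searx/engines
# task 2: lint all other python files, skipping the engines directory
pylint --ignore=engines ./searx ./searxng_extra ./tests
```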
Signed-off-by: Markus Heiser <markus.heiser@darmarit.de>
Use
from searx.engines.duckduckgo import _fetch_supported_languages, supported_languages_url  # NOQA
so it is possible to easily remove all unused imports using autoflake:
autoflake --in-place --recursive --remove-all-unused-imports searx tests
Xpath engine and results template changed to account for the fact that
archive.org doesn't cache .onions, though some onion engines might have
their own cache.
Disabled by default. Can be enabled by setting the SOCKS proxies to
wherever Tor is listening and setting using_tor_proxy to True.
Requires Tor and updated packages.
To avoid manually adding the timeout on each engine, you can set
extra_proxy_timeout to account for the extra time Tor (or whatever proxy
is used) needs.
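A minimal settings.yml sketch (the address and the timeout value are
assumptions; point the proxies at wherever Tor is listening):

```
outgoing:
  proxies:
    # assumption: Tor's default SOCKS port on localhost
    http: socks5h://127.0.0.1:9050
    https: socks5h://127.0.0.1:9050
  using_tor_proxy: True
  # extra seconds granted on top of each engine's own timeout
  extra_proxy_timeout: 10
```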
A new "base" engine called command is introduced. It is the foundation for all command line engines for now.
You can use this engine to create your own command line engine.
Add some engines (commented out to make sure no one enables anything accidentally; a configuration sketch follows the list):
* git grep: This engine lets you grep in the searx repo.
* locate: If locate is installed and initialized, you can search on the FS.
* find: You can find files with a specific name from where you started searx.
* pattern search in files: This engine utilizes the command fgrep.
* regex search in files: This engine runs `grep` to find a file based on its contents.
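A hypothetical settings.yml entry for such an engine (the delimiter keys here
are assumptions, not the shipped configuration):

```
- name: git grep
  engine: command
  # {{QUERY}} is replaced with the user's search terms
  command: ['git', 'grep', '{{QUERY}}']
  shortcut: gg
  disabled: True
  delimiter:
    chars: ':'
    keys: ['filepath', 'code']
```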
This PR fixes the result count from bing, which was throwing a (hidden) error, and adds a validation to avoid reading more results than available.
For example:
If there are 100 results for some search and we try to get results from 120 to 130, Bing will send back the results from 0 to 10 and no error. If we compare the result count with the first parameter of the request, we can avoid these "invalid" results.
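A sketch of that validation (the names are illustrative, not the engine's
actual code):

```
def validate_page(results, result_count, first_page_num):
    # Bing silently returns the first page again when the requested offset
    # lies past the last result; comparing the parsed result count with the
    # offset that was sent lets us discard such a response.
    if result_count < first_page_num:
        return []
    return results
```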
* Search URL is https://www.wikidata.org/w/index.php?{query}&ns0=1 (with ns0=1 at the end to avoid an HTTP redirection)
* url_detail: remove the disabletidy=1 deprecated parameter
* Add eval_xpath function: compile each XPath expression only once.
* Add get_id_cache: retrieve all HTML elements with an id in one pass, avoiding the slow-to-process dynamic XPath '//div[@id="{propertyid}"]'.replace('{propertyid}') (see the sketch after this list).
* Create an etree.HTMLParser() instead of using the global one (see #1575)
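A simplified sketch of the two helpers (the real implementations live in the
engine; treat this as an illustration):

```
from lxml.etree import XPath

_xpath_cache = {}

def eval_xpath(element, xpath_str):
    """Compile each XPath expression once and reuse the compiled object."""
    xpath = _xpath_cache.get(xpath_str)
    if xpath is None:
        xpath = XPath(xpath_str)
        _xpath_cache[xpath_str] = xpath
    return xpath(element)

def get_id_cache(result):
    """Collect every element carrying an id attribute in one pass, instead
    of evaluating a dynamic '//div[@id="..."]' XPath per property."""
    return {e.get('id'): e for e in eval_xpath(result, '//*[@id]')}
```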
Fetch complete JSON data block, use legend to extract images.
Unquote urlencoded strings.
Add image description as 'content'.
Add 'img_format' and 'source' data (needs PR #1567 to enable this data to be displayed).
Show images which lack ownerid instead of discarding them.
Use data from the embedded JSON to improve results (e.g. the real page title), add image format and source info (see PR #1567), and improve the paging logic (it now works).
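Both of these image-engine changes revolve around a JSON block embedded in the
HTML page; a generic sketch of the technique (the marker regex and the
`modelExport` name are assumptions, not the engines' actual code):

```
import json
import re
from urllib.parse import unquote

# assumption: the page embeds its data as a JavaScript object literal,
# e.g. "modelExport: { ... }," inside a <script> tag
EMBEDDED_JSON_RE = re.compile(r'modelExport:\s*(\{.*?\}),\s*auth:', re.DOTALL)

def extract_embedded_json(html_text):
    match = EMBEDDED_JSON_RE.search(html_text)
    if match is None:
        return None
    return json.loads(match.group(1))

def clean(value):
    # urlencoded strings inside the JSON block are unquoted before use
    return unquote(value) if isinstance(value, str) else value
```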