* Make replit install all requirements first
This should install all requirements from requirements.txt. It makes this a one-click experience, without the user having to run `pip install -r requirements.txt` and then tap the run button. I had to run that command manually in my own repl first, so I've made this change so others don't have to do the same.
repl.it also runs on Linux-based systems, so `&&` is the correct bash syntax.
* Running in Bash
I applied the same change I made to onBoot to the run variable, and set the language to bash, since the `./` and `&&` syntax belongs to bash.
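For reference, a sketch of the resulting .replit entries (the `./run` script name is an assumption here):

```
language = "bash"
onBoot = "pip install -r requirements.txt && ./run"
run = "pip install -r requirements.txt && ./run"
```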
Previously, if a result element marked for collapsing didn't have a valid
"parent" element, the collapsing was skipped altogether. This change loops
through the result's child elements until a valid parent is found (or, if
none is found, the element is left uncollapsed).
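A minimal sketch of the fallback, assuming the section's child divs have already been collected with BeautifulSoup (the example markup and names are placeholders):

```python
from bs4 import BeautifulSoup

html = '<div id="res"><div>People also ask</div><div>...</div></div>'
result = BeautifulSoup(html, 'html.parser').find(id='res')
result_children = result.find_all('div', recursive=False)

# Walk the child elements until one with a valid parent is found;
# if none is found, the section is simply left uncollapsed
parent = None
idx = 0
while not parent and idx < len(result_children):
    parent = result_children[idx].parent
    idx += 1
```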
On app init, short hashes are generated from file checksums to use for
cache busting. These hashes are added into the full file name and used
to symlink to the actual file contents. These symlinks are loaded in the
jinja templates for each page, and can tell the browser to load a new
file if the hash changes.
This is only in place for css and js files, but can be extended in the
future for other file types if needed.
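A rough sketch of the idea, assuming md5-based short hashes and css/js subfolders under the static directory (the names and folder layout here are assumptions):

```python
import hashlib
import os

def gen_file_hash(folder: str, static_file: str) -> str:
    # Short hash of the file contents, embedded into the file name
    contents = open(os.path.join(folder, static_file), 'rb').read()
    file_hash = hashlib.md5(contents).hexdigest()[:8]
    name, ext = os.path.splitext(static_file)
    return f'{name}.{file_hash}{ext}'

def link_hashed_assets(static_folder: str) -> dict:
    # Map original names to hashed names and symlink each hashed name
    # back to the real file, so templates can reference the hashed path
    hashed = {}
    for subdir in ('css', 'js'):
        folder = os.path.join(static_folder, subdir)
        for static_file in os.listdir(folder):
            if static_file.count('.') > 1:
                continue  # skip symlinks created on a previous init
            hashed_name = gen_file_hash(folder, static_file)
            hashed[f'{subdir}/{static_file}'] = f'{subdir}/{hashed_name}'
            link_path = os.path.join(folder, hashed_name)
            if not os.path.islink(link_path):
                os.symlink(static_file, link_path)
    return hashed
```

The returned mapping can then be passed into the jinja templates so each page loads the hashed file name, and the browser fetches a new copy whenever the hash changes.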
Introduces a new config element and environment variable
(WHOOGLE_CONFIG_THEME) for setting the theme of the app. Rather than
just having either light or dark, this allows a user to have their
instance use their current system light/dark preference to determine the
theme to use.
As a result, the dark mode setting (and WHOOGLE_CONFIG_DARK) has been
deprecated, but will still work as expected until a theme has been chosen.
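A hedged sketch of how the two variables could be resolved ('system' as the follow-the-OS-preference value is an assumed name):

```python
import os

# Prefer the new theme variable; fall back to the deprecated dark mode
# variable until a theme has been explicitly chosen
theme = os.getenv('WHOOGLE_CONFIG_THEME', '')
if not theme:
    theme = 'dark' if os.getenv('WHOOGLE_CONFIG_DARK', '') else 'system'
```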
Sections such as "People also asked" and "related searches" typically
take up a lot of room on the results page, and don't always have the
most useful information. This checks for result elements with more than
7 child divs, extracts the section title, and wraps all elements in a
"details" element that can be expanded/collapsed by the user.
Note that this functionality existed previously (albeit not implemented
as well), but due to changes in how Google returns searches (switching
from using <h2> elements for section headers to <span> or <div>
elements), the approach to collapsing these sections needed to be
updated.
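A minimal sketch of the approach with BeautifulSoup (the RESULT_CHILD_LIMIT name and the example markup are placeholders):

```python
from bs4 import BeautifulSoup

RESULT_CHILD_LIMIT = 7  # placeholder name for the child div threshold

def collapse_section(result, soup):
    children = result.find_all('div', recursive=False)
    if len(children) <= RESULT_CHILD_LIMIT:
        return

    # Use the first child with text content as the section title
    label = 'Collapsed Results'
    for elem in children:
        if elem.text:
            label = elem.text
            elem.decompose()
            break

    # Wrap the section in a collapsible <details> element, with the
    # extracted title as its <summary>
    details = soup.new_tag('details')
    summary = soup.new_tag('summary')
    summary.string = label
    details.append(summary)
    result.wrap(details)

soup = BeautifulSoup(
    '<div><div id="paa"><div>People also ask</div>'
    + '<div>question</div>' * 8 + '</div></div>',
    'html.parser')
collapse_section(soup.find(id='paa'), soup)
```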
* Add support for Lingva translations in results
Searches that contain the word "translate" and are normal search queries
(i.e. not news/images/video/etc) now create an iframe to a Lingva url to
translate the user's search using their configured search language (see the
sketch after this list).
The Lingva url can be configured using the WHOOGLE_ALT_TL env var, or
will fall back to the official Lingva instance url (lingva.ml).
For more info, visit https://github.com/TheDavidDelta/lingva-translate
* Add basic test for lingva results
* Allow user-specified Lingva instances through CSP frame-src
* Fix pep8 issue
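A small sketch of how the iframe url could be built (the function name and the 'auto' source language are illustrative; the instance/source/target/query path follows the Lingva project's url scheme):

```python
import os
from urllib.parse import quote

# Fall back to the official instance if no alternative is configured
lingva_url = os.getenv('WHOOGLE_ALT_TL', 'https://lingva.ml')

def lingva_iframe_src(query: str, search_lang: str) -> str:
    # Lingva urls follow the <instance>/<source>/<target>/<text> pattern
    return f'{lingva_url}/auto/{search_lang}/{quote(query)}'

print(lingva_iframe_src('translate hello world', 'es'))
# e.g. https://lingva.ml/auto/es/translate%20hello%20world
```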
A recent issue brought up a good point about how the latest change to set
the default language to English breaks functionality for bilingual users.
The change was likely not the best solution for users who were being
affected by IP geolocation on their instances -- the right solution for
that would be to configure the interface/search language to their
preference instead.
The requests library requires both 'http' and 'https' values in any
included proxy dict, and whoogle was previously copying the http proxy
to https for simplicity. The assumption was that if the underlying
request couldn't connect via https, it would fall back to http
(otherwise why require both to be specified?).
This led to connectivity issues for users with http-only proxies as of
the latest urllib and requests package versions, which are much stricter
about connections over https. With the latest versions, if an https
connection cannot be made, the library returns an error.
As a result, the new proxy dict must look something like this for plain
http proxies:
{'http': 'http://domain.tld:port', 'https': 'http://domain.tld:port'}
where both http and https are identical, but both are still required.
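A short sketch of passing such a dict to requests (the proxy address is a placeholder):

```python
import requests

proxy = 'http://domain.tld:8080'  # placeholder http-only proxy

# Both keys are required, even though they point to the same proxy
proxies = {
    'http': proxy,
    'https': proxy,
}

response = requests.get('https://www.google.com/search?q=test',
                        proxies=proxies)
```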
Since Google determines the interface language from IP geolocation, the
default language is now set to English. I'm still not sure if this is the
best solution, but it should at least temporarily clear up some confusion
for users with instances deployed in countries other than their own.
Also performed some minor cleanup:
- Updated name of strip_blocked_sites to clean_query
- Added clean_query to list of jinja template functions
- Ensured site block list doesn't contain duplicate filters
Occasionally the search results contain links with arguments such as
'dq', which were being erroneously matched in attempts to extract the 'q'
element from query strings. This enforces that only links with '?q=' or
'&q=' (i.e. a standalone 'q' arg) will have the element extracted.
I also refactored the naming of this element once extracted to be just
'q'. Although this seems counterintuitive, it makes a little more sense
since this element is the one we're extracting. It's a vague url arg
name, but it is what it is.
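A minimal sketch of the check, using the standard library query string parser (the function name is illustrative):

```python
from urllib.parse import parse_qs, urlparse

def extract_q(link: str) -> str:
    # Only links carrying a standalone 'q' arg ('?q=' or '&q=') are
    # candidates, so args like 'dq' no longer match
    if '?q=' not in link and '&q=' not in link:
        return ''
    return parse_qs(urlparse(link).query).get('q', [''])[0]

extract_q('/url?dq=some+value')        # '' -- no standalone q arg
extract_q('/search?dq=x&q=whoogle')    # 'whoogle'
```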
Bump version to 0.5.2 for hotfix release
The new site filter breaks links to Maps results, so filter.py needed
to be updated to handle these links as a unique case. A new method was
introduced to easily remove any "-site:..." filters from the query,
which is now also used to format queries in the header template rather
than manually removing the blocked site list within the template itself.
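A minimal sketch of such a method, assuming blocked sites are appended to the query as "-site:domain" filters:

```python
def clean_query(query: str) -> str:
    # Strip any appended "-site:..." filters; queries without a site
    # filter are returned unchanged
    if '-site:' not in query:
        return query
    return query[:query.find('-site:')].strip()

clean_query('best coffee shops -site:example.com')  # 'best coffee shops'
clean_query('maps.google.com/maps?q=denver')        # unchanged
```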
Bumps version to 0.5.1 for releasing the bugfix
Fixes #329
* Replace hardcoded strings using translation json file
This introduces a new "translations.json" file under app/static/settings
that is loaded on app init and uses the user config value for interface
language to determine the appropriate strings to use in Whoogle-specific
elements of the UI (primarily only on the home page).
* Verify interface lang can be used for localization
Check the configured interface language against the available
localization dict before attempting to use it; otherwise, fall back to
English (see the sketch after this list).
Also expanded language names in the languages json file.
* Add test for validating translation language keys
Also adds Spanish translation to json (the only non-English language I
can add and reasonably validate on my own).
* Validate all translations against original keyset, update readme
Readme has been updated to include basic contributing guidelines for
both code and translations.
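A rough sketch of the lookup, assuming the json file maps a language key (e.g. 'en', an assumed key name) to its dict of UI strings:

```python
import json

# Loaded once on app init
with open('app/static/settings/translations.json') as f:
    translations = json.load(f)

def get_translation(interface_lang: str) -> dict:
    # Verify the configured interface language has a localization
    # entry before using it; otherwise fall back to english
    key = interface_lang if interface_lang in translations else 'en'
    return translations[key]
```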
* Add view image option
* Prevent Whoogle links from opening in a new tab
* Remove view image template on mobile requests
* Change loop values to be more robust to the number of images
* Update app/templates/imageresults.html
* Fix search input width ("Basically the .cvifge class needs width: 100%; in order to expand the search input to fit the form width")
* Update app/templates/imageresults.html
* Remove hardcoded string from template
* Add view image config var to app.json
* Add view image config var to whoogle.env
Co-authored-by: jacr13 <ramos.joao@protonmail.com>
Co-authored-by: Ben Busby <benbusby@protonmail.com>
The wget method appeared to create endless index.html copies (despite
being told to output to the console only), so this has been updated to
use curl instead.
Also uses the new non-authenticated "healthz" route to perform the
healthcheck.
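A minimal sketch of such a route in Flask, plus the kind of command the healthcheck can run (the port is assumed):

```python
from flask import Flask

app = Flask(__name__)

@app.route('/healthz', methods=['GET'])
def healthz():
    # Unauthenticated, empty response used only for healthchecks
    return '', 200

# Container healthcheck command (example):
#   curl -f http://localhost:5000/healthz
```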
Fix #316
Fix #313
The previous method of removing all site filters from the search query
removed the last letter of the search. This only applies the substring
filter if any site filters are present in the query.
Fixes #306