Compare commits


No commits in common. 'main' and 'v0.8.0' have entirely different histories.
main...v0.8.0

@ -42,10 +42,10 @@ jobs:
docker buildx ls
docker buildx build --push \
--tag benbusby/whoogle-search:latest \
--platform linux/amd64,linux/arm64 .
--platform linux/amd64,linux/arm/v7,linux/arm64 .
docker buildx build --push \
--tag ghcr.io/benbusby/whoogle-search:latest \
--platform linux/amd64,linux/arm64 .
--platform linux/amd64,linux/arm/v7,linux/arm64 .
- name: build and push tag
if: startsWith(github.ref, 'refs/tags')
run: |

.gitignore

@ -1,5 +1,4 @@
venv/
.venv/
.idea/
__pycache__/
*.pyc
@ -11,8 +10,7 @@ test/static
flask_session/
app/static/config
app/static/custom_config
app/static/bangs/*
!app/static/bangs/00-whoogle.json
app/static/bangs
# pip stuff
/build/
@ -21,7 +19,3 @@ dist/
# env
whoogle.env
# vim
*~
*.swp

@ -1 +1,3 @@
entrypoint = "misc/replit.py"
language = "bash"
run = "killall -q python3 > /dev/null 2>&1; pip install -r requirements.txt && ./run"
onBoot = "killall -q python3 > /dev/null 2>&1; pip install -r requirements.txt && ./run"

@ -15,17 +15,10 @@ RUN pip install --prefix /install --no-warn-script-location --no-cache-dir -r re
FROM python:3.11.0a5-alpine
RUN apk add --update --no-cache tor curl openrc libstdc++
# git go //for obfs4proxy
# libcurl4-openssl-dev
RUN apk -U upgrade
# uncomment to build obfs4proxy
# RUN git clone https://gitlab.com/yawning/obfs4.git
# WORKDIR /obfs4
# RUN go build -o obfs4proxy/obfs4proxy ./obfs4proxy
# RUN cp ./obfs4proxy/obfs4proxy /usr/bin/obfs4proxy
ARG DOCKER_USER=whoogle
ARG DOCKER_USERID=927
ARG config_dir=/config
@ -45,6 +38,7 @@ ARG use_https=''
ARG whoogle_port=5000
ARG twitter_alt='farside.link/nitter'
ARG youtube_alt='farside.link/invidious'
ARG instagram_alt='farside.link/bibliogram/u'
ARG reddit_alt='farside.link/libreddit'
ARG medium_alt='farside.link/scribe'
ARG translate_alt='farside.link/lingva'
@ -66,6 +60,7 @@ ENV CONFIG_VOLUME=$config_dir \
EXPOSE_PORT=$whoogle_port \
WHOOGLE_ALT_TW=$twitter_alt \
WHOOGLE_ALT_YT=$youtube_alt \
WHOOGLE_ALT_IG=$instagram_alt \
WHOOGLE_ALT_RD=$reddit_alt \
WHOOGLE_ALT_MD=$medium_alt \
WHOOGLE_ALT_TL=$translate_alt \
@ -80,7 +75,8 @@ COPY --from=builder /install /usr/local
COPY misc/tor/torrc /etc/tor/torrc
COPY misc/tor/start-tor.sh misc/tor/start-tor.sh
COPY app/ app/
COPY run whoogle.env* ./
COPY run .
#COPY whoogle.env .
# Create user/group to run as
RUN adduser -D -g $DOCKER_USERID -u $DOCKER_USERID $DOCKER_USER

@ -2,5 +2,4 @@ graft app/static
graft app/templates
graft app/misc
include requirements.txt
recursive-include test
global-exclude *.pyc

@ -14,32 +14,29 @@
</tr>
</table>
Get Google search results, but without any ads, JavaScript, AMP links, cookies, or IP address tracking. Easily deployable in one click as a Docker app, and customizable with a single config file. Quick and simple to implement as a primary search engine replacement on both desktop and mobile.
Get Google search results, but without any ads, javascript, AMP links, cookies, or IP address tracking. Easily deployable in one click as a Docker app, and customizable with a single config file. Quick and simple to implement as a primary search engine replacement on both desktop and mobile.
Contents
1. [Features](#features)
3. [Install/Deploy Options](#install)
1. [Heroku Quick Deploy](#heroku-quick-deploy)
1. [Render.com](#render)
1. [Repl.it](#replit)
1. [Fly.io](#flyio)
1. [Koyeb](#koyeb)
1. [pipx](#pipx)
1. [pip](#pip)
1. [Manual](#manual)
1. [Docker](#manual-docker)
1. [Arch/AUR](#arch-linux--arch-based-distributions)
1. [Helm/Kubernetes](#helm-chart-for-kubernetes)
2. [Dependencies](#dependencies)
3. [Install/Deploy](#install)
1. [Heroku Quick Deploy](#a-heroku-quick-deploy)
2. [Repl.it](#b-replit)
3. [Fly.io](#c-flyio)
4. [pipx](#d-pipx)
5. [pip](#e-pip)
6. [Manual](#f-manual)
7. [Docker](#g-manual-docker)
8. [Arch/AUR](#arch-linux--arch-based-distributions)
9. [Helm/Kubernetes](#helm-chart-for-kubernetes)
4. [Environment Variables and Configuration](#environment-variables)
5. [Usage](#usage)
6. [Extra Steps](#extra-steps)
1. [Set Primary Search Engine](#set-whoogle-as-your-primary-search-engine)
2. [Custom Redirecting](#custom-redirecting)
2. [Custom Bangs](#custom-bangs)
3. [Prevent Downtime (Heroku Only)](#prevent-downtime-heroku-only)
4. [Manual HTTPS Enforcement](#https-enforcement)
5. [Using with Firefox Containers](#using-with-firefox-containers)
6. [Reverse Proxying](#reverse-proxying)
2. [Prevent Downtime (Heroku Only)](#prevent-downtime-heroku-only)
3. [Manual HTTPS Enforcement](#https-enforcement)
4. [Using with Firefox Containers](#using-with-firefox-containers)
5. [Reverse Proxying](#reverse-proxying)
1. [Nginx](#nginx)
7. [Contributing](#contributing)
8. [FAQ](#faq)
@ -62,7 +59,6 @@ Contents
- Randomly generated User Agent
- Easy to install/deploy
- DDG-style bang (i.e. `!<tag> <query>`) searches
- User-defined [custom bangs](#custom-bangs)
- Optional location-based searching (i.e. results near \<city\>)
- Optional NoJS mode to view search results in a separate window with JavaScript blocked
@ -72,12 +68,21 @@ Contents
<sup>***If deployed to a remote server, or configured to send requests through a VPN, Tor, proxy, etc.</sup>
## Dependencies
If using Heroku Quick Deploy, **you can skip this section**.
- Docker ([Windows](https://docs.docker.com/docker-for-windows/install/), [macOS](https://docs.docker.com/docker-for-mac/install/), [Ubuntu](https://docs.docker.com/engine/install/ubuntu/), [other Linux distros](https://docs.docker.com/engine/install/binaries/))
- Only needed if you intend on deploying the app as a Docker image
- [Python3](https://www.python.org/downloads/)
- `libcurl4-openssl-dev` and `libssl-dev`
- macOS: `brew install openssl curl-openssl`
- Ubuntu: `sudo apt-get install -y libcurl4-openssl-dev libssl-dev`
- Arch: `pacman -S curl openssl`
## Install
There are a few different ways to begin using the app, depending on your preferences:
___
### [Heroku Quick Deploy](https://heroku.com/about)
### A) [Heroku Quick Deploy](https://heroku.com/about)
[![Deploy](https://www.herokucdn.com/deploy/button.svg)](https://heroku.com/deploy?template=https://github.com/benbusby/whoogle-search/tree/main)
Provides:
@ -88,19 +93,7 @@ Notes:
- Requires a **PAID** Heroku Account.
- Sometimes has issues with auto-redirecting to `https`. Make sure to navigate to the `https` version of your app before adding as a default search engine.
___
### [Render](https://render.com)
Create an account on [render.com](https://render.com) and import the Whoogle repo with the following settings:
- Runtime: `Python 3`
- Build Command: `pip install -r requirements.txt`
- Run Command: `./run`
___
### [Repl.it](https://repl.it)
### B) [Repl.it](https://repl.it)
[![Run on Repl.it](https://repl.it/badge/github/benbusby/whoogle-search)](https://repl.it/github/benbusby/whoogle-search)
*Note: Requires a (free) Replit account*
@ -109,13 +102,11 @@ Provides:
- Free deployment of app
- Free HTTPS url (https://\<app name\>.\<username\>\.repl\.co)
- Supports custom domains
- Downtime after periods of inactivity ([solution](https://repl.it/talk/learn/How-to-use-and-setup-UptimeRobot/9003)\)
- Downtime after periods of inactivity \([solution 1](https://repl.it/talk/ask/use-this-pingmat1replco-just-enter/28821/101298), [solution 2](https://repl.it/talk/learn/How-to-use-and-setup-UptimeRobot/9003)\)
___
### C) [Fly.io](https://fly.io)
### [Fly.io](https://fly.io)
You will need a [Fly.io](https://fly.io) account to deploy Whoogle. The [free allowances](https://fly.io/docs/about/pricing/#free-allowances) are enough for personal use.
You will need a **PAID** [Fly.io](https://fly.io) account to deploy Whoogle.
#### Install the CLI: https://fly.io/docs/hands-on/installing/
@ -126,23 +117,9 @@ flyctl auth login
flyctl launch --image benbusby/whoogle-search:latest
```
The first deploy won't succeed because the default `internal_port` is wrong.
To fix this, open the generated `fly.toml` file, set `services.internal_port` to `5000` and run `flyctl launch` again.
Your app is now available at `https://<app-name>.fly.dev`.
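For reference, here is a minimal sketch of the relevant `fly.toml` section after that edit; the exact layout generated by `flyctl` varies between versions, so treat this as illustrative rather than canonical:

```
# Excerpt from a generated fly.toml (illustrative; other keys omitted)
[[services]]
  internal_port = 5000  # must match the port Whoogle listens on
  protocol = "tcp"
```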
___
### [Koyeb](https://www.koyeb.com)
Use one of the following guides to install Whoogle on Koyeb:
1. Using GitHub: https://www.koyeb.com/docs/quickstart/deploy-with-git
2. Using Docker: https://www.koyeb.com/docs/quickstart/deploy-a-docker-application
___
### [pipx](https://github.com/pipxproject/pipx#install-pipx)
### D) [pipx](https://github.com/pipxproject/pipx#install-pipx)
Persistent install:
`pipx install git+https://github.com/benbusby/whoogle-search.git`
@ -151,9 +128,7 @@ Sandboxed temporary instance:
`pipx run --spec git+https://github.com/benbusby/whoogle-search.git whoogle-search`
___
### pip
### E) pip
`pip install whoogle-search`
```bash
@ -180,21 +155,10 @@ optional arguments:
```
See the [available environment variables](#environment-variables) for additional configuration.
___
### Manual
### F) Manual
*Note: `Content-Security-Policy` headers can be sent by Whoogle if you set `WHOOGLE_CSP`.*
#### Dependencies
- [Python3](https://www.python.org/downloads/)
- `libcurl4-openssl-dev` and `libssl-dev`
- macOS: `brew install openssl curl-openssl`
- Ubuntu: `sudo apt-get install -y libcurl4-openssl-dev libssl-dev`
- Arch: `pacman -S curl openssl`
#### Install
Clone the repo and run the following commands to start the app in a local-only environment:
```bash
@ -228,6 +192,7 @@ Description=Whoogle
# with default values.
#Environment=WHOOGLE_ALT_TW=farside.link/nitter
#Environment=WHOOGLE_ALT_YT=farside.link/invidious
#Environment=WHOOGLE_ALT_IG=farside.link/bibliogram/u
#Environment=WHOOGLE_ALT_RD=farside.link/libreddit
#Environment=WHOOGLE_ALT_MD=farside.link/scribe
#Environment=WHOOGLE_ALT_TL=farside.link/lingva
@ -247,7 +212,6 @@ ExecStart=<python_install_dir>/python3 <whoogle_install_dir>/whoogle-search --ho
ExecStart=<whoogle_repo_dir>/run
# For example:
# ExecStart=/var/www/whoogle-search/run
WorkingDirectory=<whoogle_repo_dir>
ExecReload=/bin/kill -HUP $MAINPID
Restart=always
RestartSec=3
@ -308,9 +272,7 @@ There are two authentication methods, password and cookie. You will need to make
- `WHOOGLE_CONFIG_TOR=1`
- `WHOOGLE_TOR_USE_PASS=1`
___
### Manual (Docker)
### G) Manual (Docker)
1. Ensure the Docker daemon is running, and is accessible by your user account
- To add user permissions, you can execute `sudo usermod -aG docker yourusername`
- Running `docker ps` should return something besides an error. If you encounter an error saying the daemon isn't running, try `sudo systemctl start docker` (Linux) or ensure the docker tool is running (Windows/macOS).
@ -371,22 +333,16 @@ heroku open
This series of commands can take a while, but once you've run it, you shouldn't have to run it again. The final command, `heroku open`, will launch a tab in your web browser, where you can test out Whoogle and even [set it as your primary search engine](https://github.com/benbusby/whoogle#set-whoogle-as-your-primary-search-engine).
You may also edit environment variables from your app's Settings tab in the Heroku Dashboard.
___
### Arch Linux & Arch-based Distributions
#### Arch Linux & Arch-based Distributions
There is an [AUR package available](https://aur.archlinux.org/packages/whoogle-git/), as well as a pre-built and daily updated package available at [Chaotic-AUR](https://chaotic.cx).
___
### Helm chart for Kubernetes
#### Helm chart for Kubernetes
To use the Kubernetes Helm Chart:
1. Ensure you have [Helm](https://helm.sh/docs/intro/install/) `>=3.0.0` installed
2. Clone this repository
3. Update [charts/whoogle/values.yaml](./charts/whoogle/values.yaml) as desired
4. Run `helm install whoogle ./charts/whoogle`
___
#### Using your own server, or alternative container deployment
There are other methods for deploying Docker containers, well outlined in [this article](https://rollout.io/blog/the-shortlist-of-docker-hosting/), but there are too many to describe setup for each one here. Generally, deployment should take about the same amount of effort as with Heroku.
@ -413,19 +369,16 @@ There are a few optional environment variables available for customizing a Whoog
| WHOOGLE_PROXY_PASS | The password of the proxy server. |
| WHOOGLE_PROXY_TYPE | The type of the proxy server. Can be "socks5", "socks4", or "http". |
| WHOOGLE_PROXY_LOC | The location of the proxy server (host or ip). |
| WHOOGLE_USER_AGENT | The desktop user agent to use. Defaults to a randomly generated one. |
| WHOOGLE_USER_AGENT_MOBILE | The mobile user agent to use. Defaults to a randomly generated one. |
| WHOOGLE_USE_CLIENT_USER_AGENT | Enable to use your own user agent for all requests. Defaults to false. |
| WHOOGLE_REDIRECTS | Specify sites that should be redirected elsewhere. See [custom redirecting](#custom-redirecting). |
| EXPOSE_PORT | The port where Whoogle will be exposed. |
| HTTPS_ONLY | Enforce HTTPS. (See [here](https://github.com/benbusby/whoogle-search#https-enforcement)) |
| WHOOGLE_ALT_TW | The twitter.com alternative to use when site alternatives are enabled in the config. Set to "" to disable. |
| WHOOGLE_ALT_YT | The youtube.com alternative to use when site alternatives are enabled in the config. Set to "" to disable. |
| WHOOGLE_ALT_IG | The instagram.com alternative to use when site alternatives are enabled in the config. Set to "" to disable. |
| WHOOGLE_ALT_RD | The reddit.com alternative to use when site alternatives are enabled in the config. Set to "" to disable. |
| WHOOGLE_ALT_TL | The Google Translate alternative to use. This is used for all "translate ____" searches. Set to "" to disable. |
| WHOOGLE_ALT_MD | The medium.com alternative to use when site alternatives are enabled in the config. Set to "" to disable. |
| WHOOGLE_ALT_IMG | The imgur.com alternative to use when site alternatives are enabled in the config. Set to "" to disable. |
| WHOOGLE_ALT_WIKI | The wikipedia.org alternative to use when site alternatives are enabled in the config. Set to "" to disable. |
| WHOOGLE_ALT_WIKI | The wikipedia.com alternative to use when site alternatives are enabled in the config. Set to "" to disable. |
| WHOOGLE_ALT_IMDB | The imdb.com alternative to use when site alternatives are enabled in the config. Set to "" to disable. |
| WHOOGLE_ALT_QUORA | The quora.com alternative to use when site alternatives are enabled in the config. Set to "" to disable. |
| WHOOGLE_AUTOCOMPLETE | Controls visibility of autocomplete/search suggestions. Default on -- use '0' to disable. |
@ -435,8 +388,6 @@ There are a few optional environment variables available for customizing a Whoog
| WHOOGLE_TOR_SERVICE | Enable/disable the Tor service on startup. Default on -- use '0' to disable. |
| WHOOGLE_TOR_USE_PASS | Use password authentication for tor control port. |
| WHOOGLE_TOR_CONF | The absolute path to the config file containing the password for the Tor control port. Default: `./misc/tor/control.conf`. WHOOGLE_TOR_USE_PASS must be set to 1 for this to work. |
| WHOOGLE_SHOW_FAVICONS | Show/hide favicons next to search result URLs. Default on. |
| WHOOGLE_UPDATE_CHECK | Enable/disable the automatic daily check for new versions of Whoogle. Default on. |
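For example (illustrative values only; adjust the port and image tag to your setup), any of these variables can be passed to the container at runtime with `--env`:

```bash
docker run --publish 5000:5000 \
  --env WHOOGLE_ALT_YT=farside.link/invidious \
  --env WHOOGLE_AUTOCOMPLETE=0 \
  benbusby/whoogle-search:latest
```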
### Config Environment Variables
These environment variables set default config values, but can be overridden manually from the home page config menu. They provide a shortcut for destroying and rebuilding an instance with the same config state every time.
@ -448,8 +399,6 @@ These environment variables allow setting default config values, but can be over
| WHOOGLE_CONFIG_LANGUAGE | Set interface language |
| WHOOGLE_CONFIG_SEARCH_LANGUAGE | Set search result language |
| WHOOGLE_CONFIG_BLOCK | Block websites from search results (use comma-separated list) |
| WHOOGLE_CONFIG_BLOCK_TITLE | Block search result with a REGEX filter on title |
| WHOOGLE_CONFIG_BLOCK_URL | Block search result with a REGEX filter on URL |
| WHOOGLE_CONFIG_THEME | Set theme mode (light, dark, or system) |
| WHOOGLE_CONFIG_SAFE | Enable safe searches |
| WHOOGLE_CONFIG_ALTS | Use social media site alternatives (nitter, invidious, etc) |
@ -462,7 +411,6 @@ These environment variables allow setting default config values, but can be over
| WHOOGLE_CONFIG_STYLE | The custom CSS to use for styling (should be single line) |
| WHOOGLE_CONFIG_PREFERENCES_ENCRYPTED | Encrypt preferences token, requires preferences key |
| WHOOGLE_CONFIG_PREFERENCES_KEY | Key to encrypt preferences in URL (REQUIRED to show url) |
| WHOOGLE_CONFIG_ANON_VIEW | Include the "anonymous view" option for each search result |
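As an illustration, a hypothetical `whoogle.env` could pin a few of these defaults (values below are examples only; note that, depending on the version, the file is only loaded if it exists or if `WHOOGLE_DOTENV` is enabled, per `app/__init__.py`):

```
WHOOGLE_CONFIG_THEME=dark
WHOOGLE_CONFIG_ALTS=1
WHOOGLE_CONFIG_BLOCK=badA.com,badB.com
```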
## Usage
Same as most search engines, with the exception of filtering by time range.
@ -470,7 +418,6 @@ Same as most search engines, with the exception of filtering by time range.
To filter by a range of time, append ":past <time>" to the end of your search, where <time> can be `hour`, `day`, `month`, or `year`. Example: `coronavirus updates :past hour`
## Extra Steps
### Set Whoogle as your primary search engine
*Note: If you're using a reverse proxy to run Whoogle Search, make sure the "Root URL" config option on the home page is set to your URL before going through these steps.*
@ -515,40 +462,6 @@ Browser settings:
- Manual
- Under search engines > manage search engines > add, manually enter your Whoogle instance details with a `<whoogle url>/search?q=%s` formatted search URL.
### Custom Redirecting
You can set custom site redirects using the `WHOOGLE_REDIRECTS` environment
variable. A lot of sites, such as Twitter, Reddit, etc., have built-in redirects
to [Farside links](https://sr.ht/~benbusby/farside), but you may want to define
your own.
To do this, you can use the following syntax:
```
WHOOGLE_REDIRECTS="<parent_domain>:<new_domain>"
```
For example, if you want to redirect from "badsite.com" to "goodsite.com":
```
WHOOGLE_REDIRECTS="badsite.com:goodsite.com"
```
This can be used for multiple sites as well, with comma separation:
```
WHOOGLE_REDIRECTS="badA.com:goodA.com,badB.com:goodB.com"
```
NOTE: Do not include "http(s)://" when defining your redirect.
### Custom Bangs
You can create your own custom bangs. By default, bangs are stored in
`app/static/bangs`. See [`00-whoogle.json`](https://github.com/benbusby/whoogle-search/blob/main/app/static/bangs/00-whoogle.json)
for an example. These are parsed in alphabetical order with later files
overriding bangs set in earlier files, with the exception that DDG bangs
(downloaded to `app/static/bangs/bangs.json`) are always parsed first. Thus,
any custom bangs will always override the DDG ones.
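As a rough, hypothetical example of what a custom bang entry could look like — assuming it mirrors the bang-keyed `url`/`suggestion` structure of the loaded bang file, with `{}` as the query placeholder (an assumption here; see `00-whoogle.json` for the canonical format):

```
{
  "!ex": {
    "url": "https://example.com/search?q={}",
    "suggestion": "!ex (Example search)"
  }
}
```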
### Prevent Downtime (Heroku only)
Part of the deal with Heroku's free tier is that you're allocated 550 hours/month (meaning it can't stay active 24/7), and the app is temporarily shut down after 30 minutes of inactivity. Once it becomes inactive, any Whoogle searches will still work, but it'll take an extra 10-15 seconds for the app to come back online before displaying the result, which can be frustrating if you're in a hurry.
@ -625,7 +538,7 @@ Under the hood, Whoogle is a basic Flask app with the following structure:
- `opensearch.xml`: A template used for supporting [OpenSearch](https://developer.mozilla.org/en-US/docs/Web/OpenSearch).
- `imageresults.html`: An "experimental" template used for supporting the "Full Size" image feature on desktop.
- `static/<css|js>`
- CSS/JavaScript files, should be self-explanatory
- CSS/Javascript files, should be self-explanatory
- `static/settings`
- Key-value JSON files for establishing valid configuration values
@ -664,7 +577,7 @@ I'm a huge fan of Searx though and encourage anyone to use that instead if they
**Why does the image results page look different?**
A lot of the app currently piggybacks on Google's existing support for fetching results pages with JavaScript disabled. To their credit, they've done an excellent job with styling pages, but it seems that the image results page - particularly on mobile - is a little rough. Moving forward, with enough interest, I'd like to transition to fetching the results and parsing them into a unique Whoogle-fied interface that I can style myself.
A lot of the app currently piggybacks on Google's existing support for fetching results pages with Javascript disabled. To their credit, they've done an excellent job with styling pages, but it seems that the image results page - particularly on mobile - is a little rough. Moving forward, with enough interest, I'd like to transition to fetching the results and parsing them into a unique Whoogle-fied interface that I can style myself.
## Public Instances
@ -678,21 +591,11 @@ A lot of the app currently piggybacks on Google's existing support for fetching
| [https://s.tokhmi.xyz](https://s.tokhmi.xyz) | 🇺🇸 US | Multi-choice | ✅ |
| [https://search.sethforprivacy.com](https://search.sethforprivacy.com) | 🇩🇪 DE | English | |
| [https://whoogle.dcs0.hu](https://whoogle.dcs0.hu) | 🇭🇺 HU | Multi-choice | |
| [https://whoogle.esmailelbob.xyz](https://whoogle.esmailelbob.xyz) | 🇨🇦 CA | Multi-choice | |
| [https://gowogle.voring.me](https://gowogle.voring.me) | 🇺🇸 US | Multi-choice | |
| [https://whoogle.privacydev.net](https://whoogle.privacydev.net) | 🇫🇷 FR | English | |
| [https://whoogle.privacydev.net](https://whoogle.privacydev.net) | 🇺🇸 US | Multi-choice | |
| [https://wg.vern.cc](https://wg.vern.cc) | 🇺🇸 US | English | |
| [https://whoogle.hxvy0.gq](https://whoogle.hxvy0.gq) | 🇨🇦 CA | Turkish Only | ✅ |
| [https://whoogle.hostux.net](https://whoogle.hostux.net) | 🇫🇷 FR | Multi-choice | |
| [https://whoogle.lunar.icu](https://whoogle.lunar.icu) | 🇩🇪 DE | Multi-choice | ✅ |
| [https://wgl.frail.duckdns.org](https://wgl.frail.duckdns.org) | 🇧🇷 BR | Multi-choice | |
| [https://whoogle.no-logs.com](https://whoogle.no-logs.com/) | 🇸🇪 SE | Multi-choice | |
| [https://whoogle.ftw.lol](https://whoogle.ftw.lol) | 🇩🇪 DE | Multi-choice | |
| [https://whoogle-search--replitcomreside.repl.co](https://whoogle-search--replitcomreside.repl.co) | 🇺🇸 US | English | |
| [https://search.notrustverify.ch](https://search.notrustverify.ch) | 🇨🇭 CH | Multi-choice | |
| [https://whoogle.datura.network](https://whoogle.datura.network) | 🇩🇪 DE | Multi-choice | |
| [https://whoogle.yepserver.xyz](https://whoogle.yepserver.xyz) | 🇺🇦 UA | Multi-choice | |
| [https://search.nezumi.party](https://search.nezumi.party) | 🇮🇹 IT | Multi-choice | |
| [https://search.snine.nl](https://search.snine.nl) | 🇳🇱 NL | Multi-choice | ✅ |
| [https://www.indexia.gq](https://www.indexia.gq) | 🇨🇦 CA | Multi-choice | ✅ |
* A checkmark in the "Cloudflare" category here refers to the use of the reverse proxy, [Cloudflare](https://cloudflare.com). The checkmark is not listed for sites that merely use Cloudflare DNS, only for those using the proxying service, which grants Cloudflare the ability to monitor traffic to the website.
@ -704,8 +607,6 @@ A lot of the app currently piggybacks on Google's existing support for fetching
| [http://whoglqjdkgt2an4tdepberwqz3hk7tjo4kqgdnuj77rt7nshw2xqhqad.onion](http://whoglqjdkgt2an4tdepberwqz3hk7tjo4kqgdnuj77rt7nshw2xqhqad.onion) | 🇺🇸 US | Multi-choice
| [http://nuifgsnbb2mcyza74o7illtqmuaqbwu4flam3cdmsrnudwcmkqur37qd.onion](http://nuifgsnbb2mcyza74o7illtqmuaqbwu4flam3cdmsrnudwcmkqur37qd.onion) | 🇩🇪 DE | English
| [http://whoogle.vernccvbvyi5qhfzyqengccj7lkove6bjot2xhh5kajhwvidqafczrad.onion](http://whoogle.vernccvbvyi5qhfzyqengccj7lkove6bjot2xhh5kajhwvidqafczrad.onion/) | 🇺🇸 US | English |
| [http://whoogle.g4c3eya4clenolymqbpgwz3q3tawoxw56yhzk4vugqrl6dtu3ejvhjid.onion](http://whoogle.g4c3eya4clenolymqbpgwz3q3tawoxw56yhzk4vugqrl6dtu3ejvhjid.onion/) | 🇫🇷 FR | English |
| [http://whoogle.daturab6drmkhyeia4ch5gvfc2f3wgo6bhjrv3pz6n7kxmvoznlkq4yd.onion](http://whoogle.daturab6drmkhyeia4ch5gvfc2f3wgo6bhjrv3pz6n7kxmvoznlkq4yd.onion/) | 🇩🇪 DE | Multi-choice | |
#### I2P Instances

@ -60,6 +60,11 @@
"value": "farside.link/invidious",
"required": false
},
"WHOOGLE_ALT_IG": {
"description": "The site to use as a replacement for instagram.com when site alternatives are enabled in the config.",
"value": "farside.link/bibliogram/u",
"required": false
},
"WHOOGLE_ALT_RD": {
"description": "The site to use as a replacement for reddit.com when site alternatives are enabled in the config.",
"value": "farside.link/libreddit",
@ -105,11 +110,6 @@
"value": "",
"required": false
},
"WHOOGLE_CONFIG_TIME_PERIOD" : {
"description": "[CONFIG] The time period to use for restricting search results",
"value": "",
"required": false
},
"WHOOGLE_CONFIG_LANGUAGE": {
"description": "[CONFIG] The language to use for the interface (use values from https://raw.githubusercontent.com/benbusby/whoogle-search/develop/app/static/settings/languages.json)",
"value": "",

@ -1,24 +1,21 @@
from app.filter import clean_query
from app.request import send_tor_signal
from app.utils.session import generate_key
from app.utils.bangs import gen_bangs_json, load_all_bangs
from app.utils.session import generate_user_key
from app.utils.bangs import gen_bangs_json
from app.utils.misc import gen_file_hash, read_config_bool
from base64 import b64encode
from bs4 import MarkupResemblesLocatorWarning
from datetime import datetime, timedelta
from dotenv import load_dotenv
from flask import Flask
import json
import logging.config
import os
from stem import Signal
import threading
import warnings
from dotenv import load_dotenv
from werkzeug.middleware.proxy_fix import ProxyFix
from app.utils.misc import read_config_bool
from app.version import __version__
app = Flask(__name__, static_folder=os.path.dirname(
os.path.abspath(__file__)) + '/static')
@ -30,16 +27,16 @@ dot_env_path = (
'../whoogle.env'))
# Load .env file if enabled
if os.path.exists(dot_env_path):
if read_config_bool('WHOOGLE_DOTENV'):
load_dotenv(dot_env_path)
app.enc_key = generate_key()
app.default_key = generate_user_key()
if read_config_bool('HTTPS_ONLY'):
app.config['SESSION_COOKIE_NAME'] = '__Secure-session'
app.config['SESSION_COOKIE_SECURE'] = True
app.config['VERSION_NUMBER'] = __version__
app.config['VERSION_NUMBER'] = '0.8.0'
app.config['APP_ROOT'] = os.getenv(
'APP_ROOT',
os.path.dirname(os.path.abspath(__file__)))
@ -55,9 +52,6 @@ app.config['LANGUAGES'] = json.load(open(
app.config['COUNTRIES'] = json.load(open(
os.path.join(app.config['STATIC_FOLDER'], 'settings/countries.json'),
encoding='utf-8'))
app.config['TIME_PERIODS'] = json.load(open(
os.path.join(app.config['STATIC_FOLDER'], 'settings/time_periods.json'),
encoding='utf-8'))
app.config['TRANSLATIONS'] = json.load(open(
os.path.join(app.config['STATIC_FOLDER'], 'settings/translations.json'),
encoding='utf-8'))
@ -101,10 +95,7 @@ if not os.path.exists(app.config['BUILD_FOLDER']):
# Session values
app_key_path = os.path.join(app.config['CONFIG_PATH'], 'whoogle.key')
if os.path.exists(app_key_path):
try:
app.config['SECRET_KEY'] = open(app_key_path, 'r').read()
except PermissionError:
app.config['SECRET_KEY'] = str(b64encode(os.urandom(32)))
app.config['SECRET_KEY'] = open(app_key_path, 'r').read()
else:
app.config['SECRET_KEY'] = str(b64encode(os.urandom(32)))
with open(app_key_path, 'w') as key_file:
@ -142,9 +133,7 @@ app.config['CSP'] = 'default-src \'none\';' \
'connect-src \'self\';'
# Generate DDG bang filter
generating_bangs = False
if not os.path.exists(app.config['BANG_FILE']):
generating_bangs = True
json.dump({}, open(app.config['BANG_FILE'], 'w'))
bangs_thread = threading.Thread(
target=gen_bangs_json,
@ -181,16 +170,8 @@ app.jinja_env.globals.update(
# Attempt to acquire tor identity, to determine if Tor config is available
send_tor_signal(Signal.HEARTBEAT)
# Suppress spurious warnings from BeautifulSoup
warnings.simplefilter('ignore', MarkupResemblesLocatorWarning)
from app import routes # noqa
# The gen_bangs_json function takes care of loading bangs, so skip it here if
# it's already being loaded
if not generating_bangs:
load_all_bangs(app.config['BANG_FILE'])
# Disable logging from imported modules
logging.config.dictConfig({
'version': 1,

@ -3,7 +3,6 @@ from bs4 import BeautifulSoup
from bs4.element import ResultSet, Tag
from cryptography.fernet import Fernet
from flask import render_template
import html
import urllib.parse as urlparse
from urllib.parse import parse_qs
import re
@ -29,12 +28,9 @@ unsupported_g_pages = [
'google.com/preferences',
'google.com/intl',
'advanced_search',
'tbm=shop',
'ageverification.google.co.kr'
'tbm=shop'
]
unsupported_g_divs = ['google.com/preferences?hl=', 'ageverification.google.co.kr']
def extract_q(q_str: str, href: str) -> str:
"""Extracts the 'q' element from a result link. This is typically
@ -48,7 +44,7 @@ def extract_q(q_str: str, href: str) -> str:
Returns:
str: The 'q' element of the link, or an empty string
"""
return parse_qs(q_str, keep_blank_values=True)['q'][0] if ('&q=' in href or '?q=' in href) else ''
return parse_qs(q_str)['q'][0] if ('&q=' in href or '?q=' in href) else ''
def build_map_url(href: str) -> str:
@ -123,7 +119,6 @@ class Filter:
page_url='',
query='',
mobile=False) -> None:
self.soup = None
self.config = config
self.mobile = mobile
self.user_key = user_key
@ -154,141 +149,46 @@ class Filter:
return Fernet(self.user_key).encrypt(path.encode()).decode()
def clean(self, soup) -> BeautifulSoup:
self.soup = soup
self.main_divs = self.soup.find('div', {'id': 'main'})
self.main_divs = soup.find('div', {'id': 'main'})
self.remove_ads()
self.remove_block_titles()
self.remove_block_url()
self.collapse_sections()
self.update_css()
self.update_styling()
self.remove_block_tabs()
# self.main_divs is only populated for the main page of search results
# (i.e. not images/news/etc).
if self.main_divs:
for div in self.main_divs:
self.sanitize_div(div)
self.update_css(soup)
self.update_styling(soup)
self.remove_block_tabs(soup)
for img in [_ for _ in self.soup.find_all('img') if 'src' in _.attrs]:
for img in [_ for _ in soup.find_all('img') if 'src' in _.attrs]:
self.update_element_src(img, 'image/png')
for audio in [_ for _ in self.soup.find_all('audio') if 'src' in _.attrs]:
for audio in [_ for _ in soup.find_all('audio') if 'src' in _.attrs]:
self.update_element_src(audio, 'audio/mpeg')
audio['controls'] = ''
for link in self.soup.find_all('a', href=True):
for link in soup.find_all('a', href=True):
self.update_link(link)
self.add_favicon(link)
if self.config.alts:
self.site_alt_swap()
input_form = self.soup.find('form')
input_form = soup.find('form')
if input_form is not None:
input_form['method'] = 'GET' if self.config.get_only else 'POST'
# Use a relative URI for submissions
input_form['action'] = 'search'
# Ensure no extra scripts passed through
for script in self.soup('script'):
for script in soup('script'):
script.decompose()
# Update default footer and header
footer = self.soup.find('footer')
footer = soup.find('footer')
if footer:
# Remove divs that have multiple links beyond just page navigation
[_.decompose() for _ in footer.find_all('div', recursive=False)
if len(_.find_all('a', href=True)) > 3]
for link in footer.find_all('a', href=True):
link['href'] = f'{link["href"]}&preferences={self.config.preferences}'
header = self.soup.find('header')
header = soup.find('header')
if header:
header.decompose()
self.remove_site_blocks(self.soup)
return self.soup
def sanitize_div(self, div) -> None:
"""Removes escaped script and iframe tags from results
Returns:
None (The soup object is modified directly)
"""
if not div:
return
for d in div.find_all('div', recursive=True):
d_text = d.find(text=True, recursive=False)
# Ensure we're working with tags that contain text content
if not d_text or not d.string:
continue
d.string = html.unescape(d_text)
div_soup = BeautifulSoup(d.string, 'html.parser')
# Remove all valid script or iframe tags in the div
for script in div_soup.find_all('script'):
script.decompose()
for iframe in div_soup.find_all('iframe'):
iframe.decompose()
d.string = str(div_soup)
def add_favicon(self, link) -> None:
"""Adds icons for each returned result, using the result site's favicon
Returns:
None (The soup object is modified directly)
"""
# Skip empty, parentless, or internal links
show_favicons = read_config_bool('WHOOGLE_SHOW_FAVICONS', True)
is_valid_link = link and link.parent and link['href'].startswith('http')
if not show_favicons or not is_valid_link:
return
parent = link.parent
is_result_div = False
# Check each parent to make sure that the div doesn't already have a
# favicon attached, and that the div is a result div
while parent:
p_cls = parent.attrs.get('class') or []
if 'has-favicon' in p_cls or GClasses.scroller_class in p_cls:
return
elif GClasses.result_class_a not in p_cls:
parent = parent.parent
else:
is_result_div = True
break
if not is_result_div:
return
# Construct the html for inserting the icon into the parent div
parsed = urlparse.urlparse(link['href'])
favicon = self.encrypt_path(
f'{parsed.scheme}://{parsed.netloc}/favicon.ico',
is_element=True)
src = f'{self.root_url}/{Endpoint.element}?url={favicon}' + \
'&type=image/x-icon'
html = f'<img class="site-favicon" src="{src}">'
favicon = BeautifulSoup(html, 'html.parser')
link.parent.insert(0, favicon)
# Update all parents to indicate that a favicon has been attached
parent = link.parent
while parent:
p_cls = parent.get('class') or []
p_cls.append('has-favicon')
parent['class'] = p_cls
parent = parent.parent
if GClasses.result_class_a in p_cls:
break
self.remove_site_blocks(soup)
return soup
def remove_site_blocks(self, soup) -> None:
if not self.config.block or not soup.body:
@ -318,7 +218,7 @@ class Filter:
def remove_block_titles(self) -> None:
if not self.main_divs or not self.config.block_title:
return
block_title = re.compile(self.config.block_title)
block_title = re.compile(self.block_title)
for div in [_ for _ in self.main_divs.find_all('div', recursive=True)]:
block_divs = [_ for _ in div.find_all('h3', recursive=True)
if block_title.search(_.text) is not None]
@ -327,13 +227,13 @@ class Filter:
def remove_block_url(self) -> None:
if not self.main_divs or not self.config.block_url:
return
block_url = re.compile(self.config.block_url)
block_url = re.compile(self.block_url)
for div in [_ for _ in self.main_divs.find_all('div', recursive=True)]:
block_divs = [_ for _ in div.find_all('a', recursive=True)
if block_url.search(_.attrs['href']) is not None]
_ = div.decompose() if len(block_divs) else None
def remove_block_tabs(self) -> None:
def remove_block_tabs(self, soup) -> None:
if self.main_divs:
for div in self.main_divs.find_all(
'div',
@ -342,7 +242,7 @@ class Filter:
_ = div.decompose()
else:
# when in images tab
for div in self.soup.find_all(
for div in soup.find_all(
'div',
attrs={'class': f'{GClasses.images_tbm_tab}'}
):
@ -469,7 +369,7 @@ class Filter:
) + '&type=' + urlparse.quote(mime)
)
def update_css(self) -> None:
def update_css(self, soup) -> None:
"""Updates URLs used in inline styles to be proxied by Whoogle
using the /element endpoint.
@ -478,7 +378,7 @@ class Filter:
"""
# Filter all <style> tags
for style in self.soup.find_all('style'):
for style in soup.find_all('style'):
style.string = clean_css(style.string, self.page_url)
# TODO: Convert remote stylesheets to style tags and proxy all
@ -486,20 +386,20 @@ class Filter:
# for link in soup.find_all('link', attrs={'rel': 'stylesheet'}):
# print(link)
def update_styling(self) -> None:
def update_styling(self, soup) -> None:
# Update CSS classes for result divs
soup = GClasses.replace_css_classes(self.soup)
soup = GClasses.replace_css_classes(soup)
# Remove unnecessary button(s)
for button in self.soup.find_all('button'):
for button in soup.find_all('button'):
button.decompose()
# Remove svg logos
for svg in self.soup.find_all('svg'):
for svg in soup.find_all('svg'):
svg.decompose()
# Update logo
logo = self.soup.find('a', {'class': 'l'})
logo = soup.find('a', {'class': 'l'})
if logo and self.mobile:
logo['style'] = ('display:flex; justify-content:center; '
'align-items:center; color:#685e79; '
@ -507,15 +407,14 @@ class Filter:
# Fix search bar length on mobile
try:
search_bar = self.soup.find('header').find('form').find('div')
search_bar = soup.find('header').find('form').find('div')
search_bar['style'] = 'width: 100%;'
except AttributeError:
pass
# Fix body max width on images tab
style = self.soup.find('style')
div = self.soup.find('div', attrs={
'class': f'{GClasses.images_tbm_tab}'})
style = soup.find('style')
div = soup.find('div', attrs={'class': f'{GClasses.images_tbm_tab}'})
if style and div and not self.mobile:
css = style.string
css_html_tag = (
@ -545,6 +444,7 @@ class Filter:
"""
parsed_link = urlparse.urlparse(link['href'])
link_netloc = ''
if '/url?q=' in link['href']:
link_netloc = extract_q(parsed_link.query, link['href'])
else:
@ -554,12 +454,12 @@ class Filter:
if any(url in link_netloc for url in unsupported_g_pages):
# FIXME: The "Shopping" tab requires further filtering (see #136)
# Temporarily removing all links to that tab for now.
# Replaces the /url google unsupported link to the direct url
link['href'] = link_netloc
parent = link.parent
if any(divlink in link_netloc for divlink in unsupported_g_divs):
if 'google.com/preferences?hl=' in link_netloc:
# Handle case where a search is performed in a different
# language than what is configured. This usually returns a
# div with the same classes as normal search results, but with
@ -579,9 +479,7 @@ class Filter:
if parent.name == 'footer' or f'{GClasses.footer}' in p_cls:
link.decompose()
parent = parent.parent
if link.decomposed:
return
return
# Replace href with only the intended destination (no "utm" type tags)
href = link['href'].replace('https://www.google.com', '')
@ -645,54 +543,25 @@ class Filter:
):
link["target"] = "_blank"
def site_alt_swap(self) -> None:
"""Replaces link locations and page elements if "alts" config
is enabled
"""
for site, alt in SITE_ALTS.items():
if site != "medium.com" and alt != "":
# Ignore medium.com replacements since these are handled
# specifically in the link description replacement, and medium
# results are never given their own "card" result where this
# replacement would make sense.
# Also ignore if the alt is empty, since this is used to indicate
# that the alt is not enabled.
for div in self.soup.find_all('div', text=re.compile(site)):
# Use the number of words in the div string to determine if the
# string is a result description (shouldn't replace domains used
# in desc text).
if len(div.string.split(' ')) == 1:
div.string = div.string.replace(site, alt)
for link in self.soup.find_all('a', href=True):
# Search and replace all link descriptions
# with alternative location
link['href'] = get_site_alt(link['href'])
link_desc = link.find_all(
text=re.compile('|'.join(SITE_ALTS.keys())))
if len(link_desc) == 0:
continue
# Replace link location if "alts" config is enabled
if self.config.alts:
# Search and replace all link descriptions
# with alternative location
link['href'] = get_site_alt(link['href'])
link_desc = link.find_all(
text=re.compile('|'.join(SITE_ALTS.keys())))
if len(link_desc) == 0:
return
# Replace link description
link_desc = link_desc[0]
# Replace link description
link_desc = link_desc[0]
for site, alt in SITE_ALTS.items():
if site not in link_desc or not alt:
continue
new_desc = BeautifulSoup(features='html.parser').new_tag('div')
link_str = str(link_desc)
# Medium links should be handled differently, since 'medium.com'
# is a common substring of domain names, but shouldn't be
# replaced (i.e. 'philomedium.com' should stay as it is).
if 'medium.com' in link_str:
if link_str.startswith('medium.com') or '.medium.com' in link_str:
link_str = SITE_ALTS['medium.com'] + link_str[
link_str.find('medium.com') + len('medium.com'):]
new_desc.string = link_str
else:
new_desc.string = link_str.replace(site, alt)
new_desc.string = str(link_desc).replace(site, alt)
link_desc.replace_with(new_desc)
break
def view_image(self, soup) -> BeautifulSoup:
"""Replaces the soup with a new one that handles mobile results and
@ -707,15 +576,13 @@ class Filter:
# get some tags that are unchanged between mobile and pc versions
cor_suggested = soup.find_all('table', attrs={'class': "By0U9"})
next_pages = soup.find('table', attrs={'class': "uZgmoc"})
next_pages = soup.find_all('table', attrs={'class': "uZgmoc"})[0]
results = []
# find results div
results_div = soup.find('div', attrs={'class': "nQvrDb"})
# find all the results (if any)
results_all = []
if results_div:
results_all = results_div.find_all('div', attrs={'class': "lIMUZd"})
results_div = soup.find_all('div', attrs={'class': "nQvrDb"})[0]
# find all the results
results_all = results_div.find_all('div', attrs={'class': "lIMUZd"})
for item in results_all:
urls = item.find('a')['href'].split('&imgrefurl=')

@ -1,5 +1,4 @@
from inspect import Attribute
from typing import Optional
from app.utils.misc import read_config_bool
from flask import current_app
import os
@ -9,31 +8,6 @@ import pickle
from cryptography.fernet import Fernet
import hashlib
import brotli
import logging
import cssutils
from cssutils.css.cssstylesheet import CSSStyleSheet
from cssutils.css.cssstylerule import CSSStyleRule
# removes warnings from cssutils
cssutils.log.setLevel(logging.CRITICAL)
def get_rule_for_selector(stylesheet: CSSStyleSheet,
selector: str) -> Optional[CSSStyleRule]:
"""Search for a rule that matches a given selector in a stylesheet.
Args:
stylesheet (CSSStyleSheet) -- the stylesheet to search
selector (str) -- the selector to search for
Returns:
Optional[CSSStyleRule] -- the rule that matches the selector or None
"""
for rule in stylesheet.cssRules:
if hasattr(rule, "selectorText") and selector == rule.selectorText:
return rule
return None
class Config:
@ -42,13 +16,14 @@ class Config:
self.url = os.getenv('WHOOGLE_CONFIG_URL', '')
self.lang_search = os.getenv('WHOOGLE_CONFIG_SEARCH_LANGUAGE', '')
self.lang_interface = os.getenv('WHOOGLE_CONFIG_LANGUAGE', '')
self.style_modified = os.getenv(
'WHOOGLE_CONFIG_STYLE', '')
self.style = os.getenv(
'WHOOGLE_CONFIG_STYLE',
open(os.path.join(app_config['STATIC_FOLDER'],
'css/variables.css')).read())
self.block = os.getenv('WHOOGLE_CONFIG_BLOCK', '')
self.block_title = os.getenv('WHOOGLE_CONFIG_BLOCK_TITLE', '')
self.block_url = os.getenv('WHOOGLE_CONFIG_BLOCK_URL', '')
self.country = os.getenv('WHOOGLE_CONFIG_COUNTRY', '')
self.tbs = os.getenv('WHOOGLE_CONFIG_TIME_PERIOD', '')
self.theme = os.getenv('WHOOGLE_CONFIG_THEME', 'system')
self.safe = read_config_bool('WHOOGLE_CONFIG_SAFE')
self.dark = read_config_bool('WHOOGLE_CONFIG_DARK') # deprecated
@ -77,8 +52,7 @@ class Config:
'safe',
'nojs',
'anon_view',
'preferences_encrypted',
'tbs'
'preferences_encrypted'
]
# Skip setting custom config if there isn't one
@ -112,33 +86,6 @@ class Config:
if not name.startswith("__")
and (type(attr) is bool or type(attr) is str)}
@property
def style(self) -> str:
"""Returns the default style updated with specified modifications.
Returns:
str -- the new style
"""
style_sheet = cssutils.parseString(
open(os.path.join(current_app.config['STATIC_FOLDER'],
'css/variables.css')).read()
)
modified_sheet = cssutils.parseString(self.style_modified)
for rule in modified_sheet:
rule_default = get_rule_for_selector(style_sheet,
rule.selectorText)
# if modified rule is in default stylesheet, update it
if rule_default is not None:
# TODO: update this in a smarter way to handle :root better
# for now if we change a variable in :root all other default
# variables need to be also present
rule_default.style = rule.style
# else add the new rule to the default stylesheet
else:
style_sheet.add(rule)
return str(style_sheet.cssText, 'utf-8')
@property
def preferences(self) -> str:
# if encryption key is not set will uncheck preferences encryption
@ -254,8 +201,7 @@ class Config:
key = self._get_fernet_key(self.preferences_key)
config = Fernet(key).decrypt(
brotli.decompress(urlsafe_b64decode(
preferences.encode() + b'=='))
brotli.decompress(urlsafe_b64decode(preferences.encode()))
)
config = pickle.loads(brotli.decompress(config))
@ -263,8 +209,7 @@ class Config:
config = {}
elif mode == 'u': # preferences are not encrypted
config = pickle.loads(
brotli.decompress(urlsafe_b64decode(
preferences.encode() + b'=='))
brotli.decompress(urlsafe_b64decode(preferences.encode()))
)
else: # preferences are incorrectly formatted
config = {}

@ -14,7 +14,6 @@ class GClasses:
footer = 'TuS8Ad'
result_class_a = 'ZINbbc'
result_class_b = 'luh4td'
scroller_class = 'idg8be'
result_classes = {
result_class_a: ['Gx5Zad'],

@ -73,14 +73,6 @@ def send_tor_signal(signal: Signal) -> bool:
def gen_user_agent(is_mobile) -> str:
user_agent = os.environ.get('WHOOGLE_USER_AGENT', '')
user_agent_mobile = os.environ.get('WHOOGLE_USER_AGENT_MOBILE', '')
if user_agent and not is_mobile:
return user_agent
if user_agent_mobile and is_mobile:
return user_agent_mobile
firefox = random.choice(['Choir', 'Squier', 'Higher', 'Wire']) + 'fox'
linux = random.choice(['Win', 'Sin', 'Gin', 'Fin', 'Kin']) + 'ux'
@ -99,8 +91,8 @@ def gen_query(query, args, config) -> str:
if ':past' in query and 'tbs' not in args:
time_range = str.strip(query.split(':past', 1)[-1])
param_dict['tbs'] = '&tbs=' + ('qdr:' + str.lower(time_range[0]))
elif 'tbs' in args or 'tbs' in config:
result_tbs = args.get('tbs') if 'tbs' in args else config['tbs']
elif 'tbs' in args:
result_tbs = args.get('tbs')
param_dict['tbs'] = '&tbs=' + result_tbs
# Occasionally the 'tbs' param provided by google also contains a
@ -217,13 +209,19 @@ class Request:
proxy_pass = os.environ.get('WHOOGLE_PROXY_PASS', '')
auth_str = ''
if proxy_user:
auth_str = f'{proxy_user}:{proxy_pass}@'
proxy_str = f'{proxy_type}://{auth_str}{proxy_path}'
auth_str = proxy_user + ':' + proxy_pass
self.proxies = {
'https': proxy_str,
'http': proxy_str
'https': proxy_type + '://' +
((auth_str + '@') if auth_str else '') + proxy_path,
}
# Need to ensure both HTTP and HTTPS are in the proxy dict,
# regardless of underlying protocol
if proxy_type == 'https':
self.proxies['http'] = self.proxies['https'].replace(
'https', 'http')
else:
self.proxies['http'] = self.proxies['https']
else:
self.proxies = {
'http': 'socks5://127.0.0.1:9050',
@ -269,7 +267,7 @@ class Request:
return []
def send(self, base_url='', query='', attempt=0,
force_mobile=False, user_agent='') -> Response:
force_mobile=False) -> Response:
"""Sends an outbound request to a URL. Optionally sends the request
using Tor, if enabled by the user.
@ -285,14 +283,10 @@ class Request:
Response: The Response object returned by the requests call
"""
use_client_user_agent = int(os.environ.get('WHOOGLE_USE_CLIENT_USER_AGENT', '0'))
if user_agent and use_client_user_agent == 1:
modified_user_agent = user_agent
if force_mobile and not self.mobile:
modified_user_agent = self.modified_user_agent_mobile
else:
if force_mobile and not self.mobile:
modified_user_agent = self.modified_user_agent_mobile
else:
modified_user_agent = self.modified_user_agent
modified_user_agent = self.modified_user_agent
headers = {
'User-Agent': modified_user_agent
@ -307,8 +301,9 @@ class Request:
# view is suppressed correctly
now = datetime.now()
cookies = {
'CONSENT': 'PENDING+987',
'SOCS': 'CAESHAgBEhIaAB',
'CONSENT': 'YES+cb.{:d}{:02d}{:02d}-17-p0.de+F+678'.format(
now.year, now.month, now.day
)
}
# Validate Tor conn and request new identity if the last one failed

@ -4,12 +4,8 @@ import io
import json
import os
import pickle
import re
import urllib.parse as urlparse
import uuid
import validators
import sys
import traceback
from datetime import datetime, timedelta
from functools import wraps
@ -18,17 +14,15 @@ from app import app
from app.models.config import Config
from app.models.endpoint import Endpoint
from app.request import Request, TorError
from app.utils.bangs import suggest_bang, resolve_bang
from app.utils.misc import empty_gif, placeholder_img, get_proxy_host_url, \
fetch_favicon
from app.utils.bangs import resolve_bang
from app.utils.misc import get_proxy_host_url
from app.filter import Filter
from app.utils.misc import read_config_bool, get_client_ip, get_request_url, \
check_for_update, encrypt_string
from app.utils.widgets import *
from app.utils.results import bold_search_terms,\
check_for_update
from app.utils.results import add_ip_card, bold_search_terms,\
add_currency_card, check_currency, get_tabs_content
from app.utils.search import Search, needs_https, has_captcha
from app.utils.session import valid_user_session
from app.utils.session import generate_user_key, valid_user_session
from bs4 import BeautifulSoup as bsoup
from flask import jsonify, make_response, request, redirect, render_template, \
send_file, session, url_for, g
@ -36,7 +30,9 @@ from requests import exceptions
from requests.models import PreparedRequest
from cryptography.fernet import Fernet, InvalidToken
from cryptography.exceptions import InvalidSignature
from werkzeug.datastructures import MultiDict
# Load DDG bang json files only on init
bang_json = json.load(open(app.config['BANG_FILE'])) or {}
ac_var = 'WHOOGLE_AUTOCOMPLETE'
autocomplete_enabled = os.getenv(ac_var, '1')
@ -51,14 +47,6 @@ def get_search_name(tbm):
def auth_required(f):
@wraps(f)
def decorated(*args, **kwargs):
# do not ask password if cookies already present
if (
valid_user_session(session)
and 'cookies_disabled' not in request.args
and session['auth']
):
return f(*args, **kwargs)
auth = request.authorization
# Skip if username/password not set
@ -68,7 +56,6 @@ def auth_required(f):
auth
and whoogle_user == auth.username
and whoogle_pass == auth.password):
session['auth'] = True
return f(*args, **kwargs)
else:
return make_response('Not logged in', 401, {
@ -80,16 +67,11 @@ def auth_required(f):
def session_required(f):
@wraps(f)
def decorated(*args, **kwargs):
if not valid_user_session(session):
if (valid_user_session(session)):
g.session_key = session['key']
else:
session.pop('_permanent', None)
# Note: This sets all requests to use the encryption key determined per
# instance on app init. This can be updated in the future to use a key
# that is unique for their session (session['key']) but this should use
# a config setting to enable the session based key. Otherwise there can
# be problems with searches performed by users with cookies blocked if
# a session based key is always used.
g.session_key = app.enc_key
g.session_key = app.default_key
# Clear out old sessions
invalid_sessions = []
@ -129,12 +111,12 @@ def session_required(f):
@app.before_request
def before_request_func():
global bang_json
session.permanent = True
# Check for latest version if needed
now = datetime.now()
needs_update_check = now - timedelta(hours=24) > app.config['LAST_UPDATE_CHECK']
if read_config_bool('WHOOGLE_UPDATE_CHECK', True) and needs_update_check:
if now - timedelta(hours=24) > app.config['LAST_UPDATE_CHECK']:
app.config['LAST_UPDATE_CHECK'] = now
app.config['HAS_UPDATE'] = check_for_update(
app.config['RELEASES_URL'],
@ -148,18 +130,14 @@ def before_request_func():
if os.path.exists(app.config['DEFAULT_CONFIG']) else {}
# Generate session values for user if unavailable
if not valid_user_session(session):
if (not valid_user_session(session)):
session['config'] = default_config
session['uuid'] = str(uuid.uuid4())
session['key'] = app.enc_key
session['auth'] = False
session['key'] = generate_user_key()
# Establish config values per user session
g.user_config = Config(**session['config'])
# Update user config if specified in search args
g.user_config = g.user_config.from_params(g.request_params)
if not g.user_config.url:
g.user_config.url = get_request_url(request.url_root)
@ -170,12 +148,20 @@ def before_request_func():
g.app_location = g.user_config.url
# Attempt to reload bangs json if not generated yet
if not bang_json and os.path.getsize(app.config['BANG_FILE']) > 4:
try:
bang_json = json.load(open(app.config['BANG_FILE']))
except json.decoder.JSONDecodeError:
# Ignore decoding error, can occur if file is still
# being written
pass
@app.after_request
def after_request_func(resp):
resp.headers['X-Content-Type-Options'] = 'nosniff'
resp.headers['X-Frame-Options'] = 'DENY'
resp.headers['Cache-Control'] = 'max-age=86400'
if os.getenv('WHOOGLE_CSP', False):
resp.headers['Content-Security-Policy'] = app.config['CSP']
@ -207,11 +193,13 @@ def index():
session['error_message'] = ''
return render_template('error.html', error_message=error_message)
# Update user config if specified in search args
g.user_config = g.user_config.from_params(g.request_params)
return render_template('index.html',
has_update=app.config['HAS_UPDATE'],
languages=app.config['LANGUAGES'],
countries=app.config['COUNTRIES'],
time_periods=app.config['TIME_PERIODS'],
themes=app.config['THEMES'],
autocomplete_enabled=autocomplete_enabled,
translation=app.config['TRANSLATIONS'][
@ -246,7 +234,8 @@ def opensearch():
main_url=opensearch_url,
request_type='' if get_only else 'method="post"',
search_type=request.args.get('tbm'),
search_name=get_search_name(request.args.get('tbm'))
search_name=get_search_name(request.args.get('tbm')),
preferences=g.user_config.preferences
), 200, {'Content-Type': 'application/xml'}
@ -271,7 +260,8 @@ def autocomplete():
# Search bangs if the query begins with "!", but not "! " (feeling lucky)
if q.startswith('!') and len(q) > 1 and not q.startswith('! '):
return jsonify([q, suggest_bang(q)])
return jsonify([q, [bang_json[_]['suggestion'] for _ in bang_json if
_.startswith(q)]])
if not q and not request.data:
return jsonify({'?': []})
@ -288,21 +278,18 @@ def autocomplete():
g.user_request.autocomplete(q) if not g.user_config.tor else []
])
@app.route(f'/{Endpoint.search}', methods=['GET', 'POST'])
@session_required
@auth_required
def search():
if request.method == 'POST':
# Redirect as a GET request with an encrypted query
post_data = MultiDict(request.form)
post_data['q'] = encrypt_string(g.session_key, post_data['q'])
get_req_str = urlparse.urlencode(post_data)
return redirect(url_for('.search') + '?' + get_req_str)
# Update user config if specified in search args
g.user_config = g.user_config.from_params(g.request_params)
search_util = Search(request, g.user_config, g.session_key)
query = search_util.new_search_query()
bang = resolve_bang(query)
bang = resolve_bang(query, bang_json)
if bang:
return redirect(bang)
@ -329,16 +316,8 @@ def search():
translation = app.config['TRANSLATIONS'][localization_lang]
translate_to = localization_lang.replace('lang_', '')
# removing st-card to only use whoogle time selector
soup = bsoup(response, "html.parser");
for x in soup.find_all(attrs={"id": "st-card"}):
x.replace_with("")
response = str(soup)
# Return 503 if temporarily blocked by captcha
if has_captcha(str(response)):
app.logger.error('503 (CAPTCHA)')
return render_template(
'error.html',
blocked=True,
@ -348,16 +327,12 @@ def search():
config=g.user_config,
query=urlparse.unquote(query),
params=g.user_config.to_params(keys=['preferences'])), 503
response = bold_search_terms(response, query)
# check for widgets and add if requested
if search_util.widget != '':
# Feature to display IP address
if search_util.check_kw_ip():
html_soup = bsoup(str(response), 'html.parser')
if search_util.widget == 'ip':
response = add_ip_card(html_soup, get_client_ip(request))
elif search_util.widget == 'calculator' and not 'nojs' in request.args:
response = add_calculator_card(html_soup)
response = add_ip_card(html_soup, get_client_ip(request))
# Update tabs content
tabs = get_tabs_content(app.config['HEADER_TABS'],
@ -367,8 +342,6 @@ def search():
translation)
# Feature to display currency_card
# Since this is determined by more than just the
# query it is not defined as a standard widget
conversion = check_currency(str(response))
if conversion:
html_soup = bsoup(str(response), 'html.parser')
@ -376,7 +349,6 @@ def search():
preferences = g.user_config.preferences
home_url = f"home?preferences={preferences}" if preferences else "home"
cleanresponse = str(response).replace("andlt;","&lt;").replace("andgt;","&gt;")
return render_template(
'display.html',
@ -397,7 +369,7 @@ def search():
is_translation=any(
_ in query.lower() for _ in [translation['translate'], 'translate']
) and not search_util.search_type, # Standard search queries only
response=cleanresponse,
response=response,
version_number=app.config['VERSION_NUMBER'],
search_header=render_template(
'header.html',
@ -406,12 +378,11 @@ def search():
translation=translation,
languages=app.config['LANGUAGES'],
countries=app.config['COUNTRIES'],
time_periods=app.config['TIME_PERIODS'],
logo=render_template('logo.html', dark=g.user_config.dark),
query=urlparse.unquote(query),
search_type=search_util.search_type,
mobile=g.user_request.mobile,
tabs=tabs)).replace(" ", "")
tabs=tabs))
@app.route(f'/{Endpoint.config}', methods=['GET', 'POST', 'PUT'])
@ -421,18 +392,13 @@ def config():
config_disabled = (
app.config['CONFIG_DISABLE'] or
not valid_user_session(session))
name = ''
if 'name' in request.args:
name = os.path.normpath(request.args.get('name'))
if not re.match(r'^[A-Za-z0-9_.+-]+$', name):
return make_response('Invalid config name', 400)
if request.method == 'GET':
return json.dumps(g.user_config.__dict__)
elif request.method == 'PUT' and not config_disabled:
if name:
config_pkl = os.path.join(app.config['CONFIG_PATH'], name)
if 'name' in request.args:
config_pkl = os.path.join(
app.config['CONFIG_PATH'],
request.args.get('name'))
session['config'] = (pickle.load(open(config_pkl, 'rb'))
if os.path.exists(config_pkl)
else session['config'])
@ -450,7 +416,7 @@ def config():
config_data,
open(os.path.join(
app.config['CONFIG_PATH'],
name), 'wb'))
request.args.get('name')), 'wb'))
session['config'] = config_data
return redirect(config_data['url'])
@ -481,23 +447,8 @@ def element():
src_type = request.args.get('type')
# Ensure requested element is from a valid domain
domain = urlparse.urlparse(src_url).netloc
if not validators.domain(domain):
return send_file(io.BytesIO(empty_gif), mimetype='image/gif')
try:
response = g.user_request.send(base_url=src_url)
# Display an empty gif if the requested element couldn't be retrieved
if response.status_code != 200 or len(response.content) == 0:
if 'favicon' in src_url:
favicon = fetch_favicon(src_url)
return send_file(io.BytesIO(favicon), mimetype='image/png')
else:
return send_file(io.BytesIO(empty_gif), mimetype='image/gif')
file_data = response.content
file_data = g.user_request.send(base_url=src_url).content
tmp_mem = io.BytesIO()
tmp_mem.write(file_data)
tmp_mem.seek(0)
@ -506,6 +457,8 @@ def element():
except exceptions.RequestException:
pass
empty_gif = base64.b64decode(
'R0lGODlhAQABAIAAAP///////yH5BAEKAAEALAAAAAABAAEAAAICTAEAOw==')
return send_file(io.BytesIO(empty_gif), mimetype='image/gif')
@ -523,13 +476,6 @@ def window():
root_url=request.url_root,
config=g.user_config)
target = urlparse.urlparse(target_url)
# Ensure requested URL has a valid domain
if not validators.domain(target.netloc):
return render_template(
'error.html',
error_message='Invalid location'), 400
host_url = f'{target.scheme}://{target.netloc}'
get_body = g.user_request.send(base_url=target_url).text
@ -583,54 +529,6 @@ def window():
)
@app.route('/robots.txt')
def robots():
response = make_response(
'''User-Agent: *
Disallow: /''', 200)
response.mimetype = 'text/plain'
return response
@app.route('/favicon.ico')
def favicon():
return app.send_static_file('img/favicon.ico')
@app.errorhandler(404)
def page_not_found(e):
return render_template('error.html', error_message=str(e)), 404
@app.errorhandler(Exception)
def internal_error(e):
query = ''
if request.method == 'POST':
query = request.form.get('q')
else:
query = request.args.get('q')
# Attempt to parse the query
try:
search_util = Search(request, g.user_config, g.session_key)
query = search_util.new_search_query()
except Exception:
pass
print(traceback.format_exc(), file=sys.stderr)
localization_lang = g.user_config.get_localization_lang()
translation = app.config['TRANSLATIONS'][localization_lang]
return render_template(
'error.html',
error_message='Internal server error (500)',
translation=translation,
farside='https://farside.link',
config=g.user_config,
query=urlparse.unquote(query),
params=g.user_config.to_params(keys=['preferences'])), 500
def run_app() -> None:
parser = argparse.ArgumentParser(
description='Whoogle Search console runner')
@ -649,11 +547,6 @@ def run_app() -> None:
default='',
metavar='</path/to/unix.sock>',
help='Listen for app on unix socket instead of host:port')
parser.add_argument(
'--unix-socket-perms',
default='600',
metavar='<octal permissions>',
help='Octal permissions to use for the Unix domain socket (default 600)')
parser.add_argument(
'--debug',
default=False,
@ -705,7 +598,7 @@ def run_app() -> None:
if args.debug:
app.run(host=args.host, port=args.port, debug=args.debug)
elif args.unix_socket:
waitress.serve(app, unix_socket=args.unix_socket, unix_socket_perms=args.unix_socket_perms)
waitress.serve(app, unix_socket=args.unix_socket)
else:
waitress.serve(
app,

@ -1,14 +0,0 @@
{
"!i": {
"url": "search?q={}&tbm=isch",
"suggestion": "!i (Whoogle Images)"
},
"!v": {
"url": "search?q={}&tbm=vid",
"suggestion": "!v (Whoogle Videos)"
},
"!n": {
"url": "search?q={}&tbm=nws",
"suggestion": "!n (Whoogle News)"
}
}
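For context, a rough sketch of how an entry like the ones above turns into a redirect path; the helper name and example query below are invented for illustration and are not part of the diff:
import urllib.parse

whoogle_bangs = {
    "!i": {"url": "search?q={}&tbm=isch", "suggestion": "!i (Whoogle Images)"},
}

def sketch_resolve(query: str) -> str:
    # Split the bang operator from the rest of the query
    operator, _, terms = query.partition(' ')
    entry = whoogle_bangs.get(operator)
    if not entry:
        return ''
    # The "{}" placeholder receives the URL-encoded remainder of the query
    return entry["url"].format(urllib.parse.quote(terms))

print(sketch_resolve("!i mountain goats"))  # -> search?q=mountain%20goats&tbm=isch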

@ -58,26 +58,6 @@ details summary span {
text-align: center;
}
.site-favicon {
float: left;
width: 25px;
padding-right: 5px;
}
.has-favicon .sCuL3 {
padding-left: 30px;
}
#flex_text_audio_icon_chunk {
display: none;
}
audio {
display: block;
margin-right: auto;
padding-bottom: 5px;
}
@media (min-width: 801px) {
body {
min-width: 736px !important;

@ -5,7 +5,7 @@
--whoogle-page-bg: #ffffff;
--whoogle-element-bg: #4285f4;
--whoogle-text: #000000;
--whoogle-contrast-text: #ffffff;
--whoogle-contrast-text: #70757a;
--whoogle-secondary-text: #70757a;
--whoogle-result-bg: #ffffff;
--whoogle-result-title: #1967d2;

@ -21,6 +21,16 @@ const handleUserInput = () => {
xhrRequest.send('q=' + searchInput.value);
};
const closeAllLists = el => {
// Close all autocomplete suggestions
let suggestions = document.getElementsByClassName("autocomplete-items");
for (let i = 0; i < suggestions.length; i++) {
if (el !== suggestions[i] && el !== searchInput) {
suggestions[i].parentNode.removeChild(suggestions[i]);
}
}
};
const removeActive = suggestion => {
// Remove "autocomplete-active" class from previously active suggestion
for (let i = 0; i < suggestion.length; i++) {
@ -61,7 +71,7 @@ const addActive = (suggestion) => {
const autocompleteInput = (e) => {
// Handle navigation between autocomplete suggestions
let suggestion = document.getElementById("autocomplete-list");
let suggestion = document.getElementById(this.id + "-autocomplete-list");
if (suggestion) suggestion = suggestion.getElementsByTagName("div");
if (e.keyCode === 40) { // down
e.preventDefault();
@ -82,28 +92,29 @@ const autocompleteInput = (e) => {
};
const updateAutocompleteList = () => {
let autocompleteItem, i;
let autocompleteList, autocompleteItem, i;
let val = originalSearch;
let autocompleteList = document.getElementById("autocomplete-list");
autocompleteList.innerHTML = "";
closeAllLists();
if (!val || !autocompleteResults) {
return false;
}
currentFocus = -1;
autocompleteList = document.createElement("div");
autocompleteList.setAttribute("id", this.id + "-autocomplete-list");
autocompleteList.setAttribute("class", "autocomplete-items");
searchInput.parentNode.appendChild(autocompleteList);
for (i = 0; i < autocompleteResults.length; i++) {
if (autocompleteResults[i].substr(0, val.length).toUpperCase() === val.toUpperCase()) {
autocompleteItem = document.createElement("div");
autocompleteItem.setAttribute("class", "autocomplete-item");
autocompleteItem.innerHTML = "<strong>" + autocompleteResults[i].substr(0, val.length) + "</strong>";
autocompleteItem.innerHTML += autocompleteResults[i].substr(val.length);
autocompleteItem.innerHTML += "<input type=\"hidden\" value=\"" + autocompleteResults[i] + "\">";
autocompleteItem.addEventListener("click", function () {
searchInput.value = this.getElementsByTagName("input")[0].value;
autocompleteList.innerHTML = "";
closeAllLists();
document.getElementById("search-form").submit();
});
autocompleteList.appendChild(autocompleteItem);
@ -112,16 +123,10 @@ const updateAutocompleteList = () => {
};
document.addEventListener("DOMContentLoaded", function() {
let autocompleteList = document.createElement("div");
autocompleteList.setAttribute("id", "autocomplete-list");
autocompleteList.setAttribute("class", "autocomplete-items");
searchInput = document.getElementById("search-bar");
searchInput.parentNode.appendChild(autocompleteList);
searchInput.addEventListener("keydown", (event) => autocompleteInput(event));
document.addEventListener("click", function (e) {
autocompleteList.innerHTML = "";
closeAllLists(e.target);
});
});
});

@ -3,7 +3,6 @@ document.addEventListener("DOMContentLoaded", () => {
const advSearchDiv = document.getElementById("adv-search-div");
const searchBar = document.getElementById("search-bar");
const countrySelect = document.getElementById("result-country");
const timePeriodSelect = document.getElementById("result-time-period");
const arrowKeys = [37, 38, 39, 40];
let searchValue = searchBar.value;
@ -11,32 +10,12 @@ document.addEventListener("DOMContentLoaded", () => {
let str = window.location.href;
n = str.lastIndexOf("/search");
if (n > 0) {
str = str.substring(0, n) + `/search?q=${searchBar.value}`;
str = tackOnParams(str);
str = str.substring(0, n) +
`/search?q=${searchBar.value}&country=${countrySelect.value}`;
window.location.href = str;
}
}
timePeriodSelect.onchange = () => {
let str = window.location.href;
n = str.lastIndexOf("/search");
if (n > 0) {
str = str.substring(0, n) + `/search?q=${searchBar.value}`;
str = tackOnParams(str);
window.location.href = str;
}
}
function tackOnParams(str) {
if (timePeriodSelect.value != "") {
str = str + `&tbs=${timePeriodSelect.value}`;
}
if (countrySelect.value != "") {
str = str + `&country=${countrySelect.value}`;
}
return str;
}
const toggleAdvancedSearch = on => {
if (on) {
advSearchDiv.style.maxHeight = "70px";

@ -52,10 +52,6 @@
}
function focusSearch () {
if (window.usingCalculator) {
// if this function exists, it means the calculator widget has been displayed
if (usingCalculator()) return;
}
activeIdx = -1;
searchBar.focus();
}

@ -13,7 +13,7 @@
},
"maps": {
"tbm": null,
"href": "https://maps.google.com/maps?q={map_query}",
"href": "https://maps.google.com/maps?q={query}",
"name": "Maps",
"selected": false
},

@ -4,7 +4,6 @@
{"name": "Afrikaans (Afrikaans)", "value": "lang_af"},
{"name": "Arabic (عربى)", "value": "lang_ar"},
{"name": "Armenian (հայերեն)", "value": "lang_hy"},
{"name": "Azerbaijani (Azərbaycanca)", "value": "lang_az"},
{"name": "Belarusian (Беларуская)", "value": "lang_be"},
{"name": "Bulgarian (български)", "value": "lang_bg"},
{"name": "Catalan (Català)", "value": "lang_ca"},

@ -1,8 +0,0 @@
[
{"name": "Any time", "value": ""},
{"name": "Past hour", "value": "qdr:h"},
{"name": "Past 24 hours", "value": "qdr:d"},
{"name": "Past week", "value": "qdr:w"},
{"name": "Past month", "value": "qdr:m"},
{"name": "Past year", "value": "qdr:y"}
]
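These values correspond to Google's tbs parameter; a small illustrative sketch (base URL and helper invented, not part of the diff) of how a selected period would be appended to a search URL:
from urllib.parse import urlencode

def sketch_search_url(base: str, query: str, tbs: str = '') -> str:
    params = {'q': query}
    if tbs:  # an empty value means "Any time", so no tbs param is added
        params['tbs'] = tbs
    return f"{base}/search?{urlencode(params)}"

print(sketch_search_url("http://localhost:5000", "privacy", "qdr:w"))
# -> http://localhost:5000/search?q=privacy&tbs=qdr%3Aw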

@ -1,6 +1,5 @@
{
"lang_en": {
"": "--",
"search": "Search",
"config": "Configuration",
"config-country": "Country",
@ -20,7 +19,7 @@
"config-dark": "Dark Mode",
"config-safe": "Safe Search",
"config-alts": "Replace Social Media Links",
"config-alts-help": "Replaces Twitter/YouTube/etc links with privacy respecting alternatives.",
"config-alts-help": "Replaces Twitter/YouTube/Instagram/etc links with privacy respecting alternatives.",
"config-new-tab": "Open Links in New Tab",
"config-images": "Full Size Image Search",
"config-images-help": "(Experimental) Adds the 'View Image' option to desktop image searches. This will cause image result thumbnails to be lower resolution.",
@ -31,7 +30,6 @@
"config-pref-encryption": "Encrypt Preferences",
"config-pref-help": "Requires WHOOGLE_CONFIG_PREFERENCES_KEY, otherwise this will be ignored.",
"config-css": "Custom CSS",
"config-time-period": "Time Period",
"load": "Load",
"apply": "Apply",
"save-as": "Save As...",
@ -48,12 +46,7 @@
"videos": "Videos",
"news": "News",
"books": "Books",
"anon-view": "Anonymous View",
"qdr:h": "Past hour",
"qdr:d": "Past 24 hours",
"qdr:w": "Past week",
"qdr:m": "Past month",
"qdr:y": "Past year"
"anon-view": "Anonymous View"
},
"lang_nl": {
"search": "Zoeken",
@ -75,7 +68,7 @@
"config-dark": "Donkere Modus",
"config-safe": "Veilig zoeken",
"config-alts": "Social Media Links Vervangen",
"config-alts-help": "Vervang Twitter/YouTube/etc links met privacy gerespecteerde alternatieve.",
"config-alts-help": "Vervang Twitter/YouTube/Instagram/etc links met privacy gerespecteerde alternatieve.",
"config-new-tab": "Open Links in New Tab",
"config-images": "Volledige Grote Afbeelding Zoeken",
"config-images-help": "(Expirimenteel) Voegt de optie 'View Image' toe aan desktop afbeeldingen zoeken. Dit zorgt ervoor dat de voorbeeld foto's kleiner zijn.",
@ -102,14 +95,7 @@
"videos": "Videos",
"news": "Nieuws",
"books": "Boeken",
"anon-view": "Anonieme Weergave",
"": "--",
"qdr:h": "Afgelopen uur",
"qdr:d": "Afgelopen 24 uur",
"qdr:w": "Vorige week",
"qdr:m": "Afgelopen maand",
"qdr:y": "Afgelopen jaar",
"config-time-period": "Tijdsperiode"
"anon-view": "Anonieme Weergave"
},
"lang_de": {
"search": "Suchen",
@ -131,7 +117,7 @@
"config-dark": "Dark Mode",
"config-safe": "Sicheres Suchen",
"config-alts": "Social-Media-Links ersetzen",
"config-alts-help": "Ersetzt Twitter/YouTube/etc Links mit Alternativen, welche die Privatsphäre respektieren.",
"config-alts-help": "Ersetzt Twitter/YouTube/Instagram/etc Links mit Alternativen, welche die Privatsphäre respektieren.",
"config-new-tab": "Links in neuen Tabs öffnen",
"config-images": "Bilder-Suche in Vollbild",
"config-images-help": "(Experimentell) Fügt 'View Image'-Einstellung zu Dekstop Bilder-Suchen hinzu. Dadurch werden Thumbnails in niedrigerer Auflösung angezeigt.",
@ -158,14 +144,7 @@
"videos": "Videos",
"news": "Nachrichten",
"books": "Bücher",
"anon-view": "Anonyme Ansicht",
"": "--",
"qdr:h": "Letzte Stunde",
"qdr:d": "Vergangene 24 Stunden",
"qdr:w": "Letzte Woche",
"qdr:m": "Letzten Monat",
"qdr:y": "Vergangenes Jahr",
"config-time-period": "Zeitraum"
"anon-view": "Anonyme Ansicht"
},
"lang_es": {
"search": "Buscar",
@ -187,7 +166,7 @@
"config-dark": "Modo Oscuro",
"config-safe": "Búsqueda Segura",
"config-alts": "Reemplazar Enlaces de Redes Sociales",
"config-alts-help": "Reemplaza los enlaces de Twitter/YouTube/etc con alternativas que respetan la privacidad.",
"config-alts-help": "Reemplaza los enlaces de Twitter/YouTube/Instagram/etc con alternativas que respetan la privacidad.",
"config-new-tab": "Abrir enlaces en una pestaña nueva",
"config-images": "Búsqueda de imágenes a tamaño completo",
"config-images-help": "(Experimental) Agrega la opción 'Ver imagen' a las búsquedas de imágenes de escritorio. Esto hará que las miniaturas de los resultados de la imagen aparezcan con una resolución más baja.",
@ -214,70 +193,7 @@
"videos": "Vídeos",
"news": "Noticias",
"books": "Libros",
"anon-view": "Vista Anónima",
"": "--",
"qdr:h": "Hora pasada",
"qdr:d": "últimas 24 horas",
"qdr:w": "Semana pasada",
"qdr:m": "El mes pasado",
"qdr:y": "Año pasado",
"config-time-period": "Periodo de tiempo"
},
"lang_id": {
"": "--",
"search": "Telusuri",
"config": "Konfigurasi",
"config-country": "Negara",
"config-lang": "Bahasa Antarmuka",
"config-lang-search": "Bahasa Penelusuran",
"config-near": "Dekat",
"config-near-help": "Nama Kota",
"config-block": "Blokir",
"config-block-help": "Daftar situs yang dipisahkan dengan koma",
"config-block-title": "Blokir berdasarkan Judul",
"config-block-title-help": "Gunakan regex",
"config-block-url": "Blokir berdasarkan URL",
"config-block-url-help": "Gunakan regex",
"config-theme": "Tema",
"config-nojs": "Hapus Javascript dalam Tampilan Anonim",
"config-anon-view": "Tampilkan Tautan Tampilan Anonim",
"config-dark": "Mode Gelap",
"config-safe": "Pencarian Aman",
"config-alts": "Ganti Tautan Media Sosial",
"config-alts-help": "Mengganti tautan Twitter/YouTube/dll dengan alternatif yang lebih menjaga privasi.",
"config-new-tab": "Buka Tautan dalam Tab Baru",
"config-images": "Pencarian Gambar Ukuran Penuh",
"config-images-help": "(Eksperimental) Menambahkan opsi 'Lihat Gambar' ke pencarian gambar desktop. Ini akan menyebabkan resolusi thumbnail hasil gambar menjadi lebih rendah.",
"config-tor": "Gunakan Tor",
"config-get-only": "Hanya Gunakan GET",
"config-url": "URL Dasar",
"config-pref-url": "URL Preferensi",
"config-pref-encryption": "Enkripsi Preferensi",
"config-pref-help": "Memerlukan WHOOGLE_CONFIG_PREFERENCES_KEY, jika tidak akan diabaikan.",
"config-css": "CSS Kustom",
"config-time-period": "Periode Waktu",
"load": "Muat",
"apply": "Terapkan",
"save-as": "Simpan Sebagai...",
"github-link": "Lihat di GitHub",
"translate": "terjemahkan",
"light": "terang",
"dark": "gelap",
"system": "sistem",
"ratelimit": "Instansi telah ratelimited",
"continue-search": "Lanjutkan penelusuran Anda dengan Farside",
"all": "Semua",
"images": "Gambar",
"maps": "Peta",
"videos": "Video",
"news": "Berita",
"books": "Buku",
"anon-view": "Tampilan Anonim",
"qdr:h": "1 jam yang lalu",
"qdr:d": "24 jam yang lalu",
"qdr:w": "1 minggu yang lalu",
"qdr:m": "1 bulan yang lalu",
"qdr:y": "1 tahun yang lalu"
"anon-view": "Vista Anónima"
},
"lang_it": {
"search": "Cerca",
@ -299,7 +215,7 @@
"config-dark": "Modalità Notte",
"config-safe": "Ricerca Sicura",
"config-alts": "Sostituisci link dei social",
"config-alts-help": "Sostituisci link di Twitter/YouTube/etc con alternative che rispettano la privacy.",
"config-alts-help": "Sostituisci link di Twitter/YouTube/Instagram/etc con alternative che rispettano la privacy.",
"config-new-tab": "Apri i link in una nuova scheda",
"config-images": "Ricerca Immagini",
"config-images-help": "(Sperimentale) Aggiunge la modalità 'Ricerca Immagini'. Questo ridurrà drasticamente la qualità delle miniature durante la ricerca.",
@ -326,14 +242,7 @@
"videos": "Video",
"news": "Notizie",
"books": "Libri",
"anon-view": "Vista Anonima",
"": "--",
"qdr:h": "Ultima ora",
"qdr:d": "Ultime 24 ore",
"qdr:w": "Settimana scorsa",
"qdr:m": "Mese scorso",
"qdr:y": "L'anno scorso",
"config-time-period": "Periodo di tempo"
"anon-view": "Vista Anonima"
},
"lang_pt": {
"search": "Pesquisar",
@ -355,7 +264,7 @@
"config-dark": "Modo Escuro",
"config-safe": "Pesquisa Segura",
"config-alts": "Substituir Links de Redes Sociais",
"config-alts-help": "Substitui os links do Twitter/YouTube/etc. por alternativas que respeitam sua privacidade.",
"config-alts-help": "Substitui os links do Twitter/YouTube/Instagram/etc. por alternativas que respeitam sua privacidade.",
"config-new-tab": "Abrir Links em Nova Aba",
"config-images": "Pesquisa de Imagem em Tamanho Real",
"config-images-help": "(Experimental) Adiciona a opção 'Mostrar Imagem' às pesquisas de imagens no modo 'para computador'. Isso fará com que as miniaturas do resultado da imagem sejam de menor resolução.",
@ -382,14 +291,7 @@
"videos": "Vídeos",
"news": "Notícias",
"books": "Livros",
"anon-view": "Visualização Anônima",
"": "--",
"qdr:h": "Hora passada",
"qdr:d": "últimas 24 horas",
"qdr:w": "Semana passada",
"qdr:m": "Mês passado",
"qdr:y": "Ano passado",
"config-time-period": "Período de tempo"
"anon-view": "Visualização Anônima"
},
"lang_ru": {
"search": "Поиск",
@ -411,7 +313,7 @@
"config-dark": "Тёмный режим",
"config-safe": "Безопасный поиск",
"config-alts": "Заменить ссылки на социальные сети",
"config-alts-help": "Замена ссылкок Twitter, YouTube, и т.д. на альтернативы, уважающие конфиденциальность.",
"config-alts-help": "Замена ссылкок Twitter, YouTube, Instagram и т.д. на альтернативы, уважающие конфиденциальность.",
"config-new-tab": "Открывать ссылки в новой вкладке",
"config-images": "Поиск полноразмерных изображений",
"config-images-help": "(Эксперимент) Добавляет опцию 'Просмотр изображения' к поиску изображений в ПК-режиме. Это приведет к тому, что миниатюры изображений будут иметь более низкое разрешение.",
@ -438,14 +340,7 @@
"videos": "Видео",
"news": "Новости",
"books": "Книги",
"anon-view": "Анонимный просмотр",
"": "--",
"qdr:h": "Прошедший час",
"qdr:d": "Последние 24 часа",
"qdr:w": "На прошлой неделе",
"qdr:m": "Прошлый месяц",
"qdr:y": "Прошлый год",
"config-time-period": "Временной период"
"anon-view": "Анонимный просмотр"
},
"lang_zh-CN": {
"search": "搜索",
@ -467,7 +362,7 @@
"config-dark": "深色模式",
"config-safe": "安全搜索",
"config-alts": "替换社交媒体链接",
"config-alts-help": "使用尊重隐私的第三方网站替换 Twitter/YouTube 等链接。",
"config-alts-help": "使用尊重隐私的第三方网站替换 Twitter/YouTube/Instagram 等链接。",
"config-new-tab": "在新标签页打开链接",
"config-images": "完整尺寸图片搜索",
"config-images-help": "(实验性)为桌面版图片搜索添加“查看图片”选项。这会降低图片结果缩略图的分辨率。",
@ -494,14 +389,7 @@
"videos": "视频",
"news": "新闻",
"books": "书籍",
"anon-view": "匿名视图",
"": "--",
"qdr:h": "过去一小时",
"qdr:d": "过去 24 小时",
"qdr:w": "上周",
"qdr:m": "过去一个月",
"qdr:y": "过去一年",
"config-time-period": "时间段"
"anon-view": "匿名视图"
},
"lang_si": {
"search": "සොයන්න",
@ -550,14 +438,7 @@
"videos": "වීඩියෝ",
"news": "අනුරූප",
"books": "පොත්",
"anon-view": "නිර්නාමික දසුන",
"": "--",
"qdr:h": "පසුගිය පැය",
"qdr:d": "පසුගිය පැය 24",
"qdr:w": "පසුගිය සතිය",
"qdr:m": "පසුගිය මාසය",
"qdr:y": "පසුගිය වසර",
"config-time-period": "කාල සීමාව"
"anon-view": "නිර්නාමික දසුන"
},
"lang_fr": {
"search": "Chercher",
@ -579,7 +460,7 @@
"config-dark": "Mode Sombre",
"config-safe": "Recherche sécurisée",
"config-alts": "Remplacer les liens des réseaux sociaux",
"config-alts-help": "Remplacer les liens Twitter/YouTube/etc avec leurs alternatives respectueuses de la vie privée.",
"config-alts-help": "Remplacer les liens Twitter/YouTube/Instagram/etc avec leurs alternatives respectueuses de la vie privée.",
"config-new-tab": "Ouvrir les Liens dans un Nouveau Onglet",
"config-images": "Recherche d'image en plein écran",
"config-images-help": "(Expérimental) Ajouter l'option 'Voir Image' aux recherches d'images sur ordinateur. Les vignettes des résultats d'image seront de plus faible résolution.",
@ -606,14 +487,7 @@
"videos": "Vidéos",
"news": "Actualités",
"books": "Livres",
"anon-view": "Vue anonyme",
"": "--",
"qdr:h": "Heure passée",
"qdr:d": "Dernières 24 heures",
"qdr:w": "La semaine dernière",
"qdr:m": "Mois passé",
"qdr:y": "L'année passée",
"config-time-period": "Période de temps"
"anon-view": "Vue anonyme"
},
"lang_fa": {
"search": "جستجو",
@ -662,14 +536,7 @@
"videos": "ویدئوها",
"news": "اخبار",
"books": "کتاب‌ها",
"anon-view": "نمای ناشناس",
"": "--",
"qdr:h": "ساعت گذشته",
"qdr:d": "24 ساعت گذشته",
"qdr:w": "هفته گذشته",
"qdr:m": "ماه گذشته",
"qdr:y": "سال گذشته",
"config-time-period": "بازه زمانی"
"anon-view": "نمای ناشناس"
},
"lang_cs": {
"search": "Hledat",
@ -691,7 +558,7 @@
"config-dark": "Tmavý motiv",
"config-safe": "Bezpečné vyhledávání",
"config-alts": "Nahradit odkazy na sociální média",
"config-alts-help": "Nahradí odkazy na Twitter, YouTube, atd. alternativami respektujícími soukromí.",
"config-alts-help": "Nahradí odkazy na Twitter, YouTube, Instagram atd. alternativami respektujícími soukromí.",
"config-new-tab": "Otevírat odkazy na novém listu",
"config-images": "Vyhledávání obrázků v plné velikosti",
"config-images-help": "(Experimentální) Přidá volbu Zobrazit obrázek do vyhledávání obrázků na ploše. Způsobí to, že náhledy výsledků vyhledávání obrázků budou mít nižší rozlišení.",
@ -718,17 +585,9 @@
"videos": "Videa",
"news": "Zprávy",
"books": "Knihy",
"anon-view": "Anonymní pohled",
"": "--",
"qdr:h": "Poslední hodina",
"qdr:d": "Posledních 24 hodin",
"qdr:w": "Minulý týden",
"qdr:m": "Minulý měsíc",
"qdr:y": "Minulý rok",
"config-time-period": "Časový úsek"
"anon-view": "Anonymní pohled"
},
"lang_zh-TW": {
"": "--",
"search": "搜尋",
"config": "設定",
"config-country": "設定國家",
@ -748,10 +607,10 @@
"config-dark": "深色模式",
"config-safe": "安全搜尋",
"config-alts": "將社群網站連結替換",
"config-alts-help": "將 Twitter/YouTube 等網站之連結替換為尊重隱私的第三方網站。",
"config-alts-help": "將 Twitter/YouTube/Instagram 等網站之連結替換為尊重隱私的第三方網站。",
"config-new-tab": "以新分頁開啟連結",
"config-images": "完整尺寸圖片搜尋",
"config-images-help": "(實驗性)在桌面版圖片搜尋中增加「檢視圖片」選項。這會使搜尋結果圖片解析度降低",
"config-images-help": "(實驗性)在桌面版圖片搜尋中增加「檢視圖片」選項。這會使搜尋結果圖片解析度降低",
"config-tor": "使用 Tor",
"config-get-only": "僅限於 GET 要求",
"config-url": "首頁網址",
@ -759,7 +618,6 @@
"config-pref-encryption": "加密設定",
"config-pref-help": "需要一併設定 WHOOGLE_CONFIG_PREFERENCES_KEY否則將會被忽略。",
"config-css": "自定 CSS",
"config-time-period": "時間範圍",
"load": "載入",
"apply": "套用",
"save-as": "另存為...",
@ -776,12 +634,7 @@
"videos": "影片",
"news": "新聞",
"books": "書籍",
"anon-view": "匿名檢視",
"qdr:h": "過去 1 小時",
"qdr:d": "過去 24 小時",
"qdr:w": "過去 1 週",
"qdr:m": "過去 1 個月",
"qdr:y": "過去 1 年"
"anon-view": "匿名檢視"
},
"lang_bg": {
"search": "Търсене",
@ -803,7 +656,7 @@
"config-dark": "Тъмен режим",
"config-safe": "Безопасно търсене",
"config-alts": "Заменете връзките към социалните медии",
"config-alts-help": "Заменя връзките на Twitter/YouTube и т.н. с защитени алтернативни поверителни връзки.",
"config-alts-help": "Заменя връзките на Twitter/YouTube/Instagram и т.н. с защитени алтернативни поверителни връзки.",
"config-new-tab": "Отваряне на връзките в нов раздел",
"config-images": "Търсене на изображения в пълен размер",
"config-images-help": "(Експериментално) Добавя опцията „Преглед на изображение“ към резултатите от търсене на изображения през работния плот на компютъра. Това ще доведе до по-ниска разделителна способност на миниатюрите, в резултатите от търсене на изображения.",
@ -830,14 +683,7 @@
"videos": "Новини",
"news": "Карти",
"books": "Книги",
"anon-view": "Анонимен изглед",
"": "--",
"qdr:h": "Последния час",
"qdr:d": "Последните 24 часа",
"qdr:w": "Миналата седмица",
"qdr:m": "Миналия месец",
"qdr:y": "Изминалата година",
"config-time-period": "Времеви период"
"anon-view": "Анонимен изглед"
},
"lang_hi": {
"search": "खोज",
@ -886,14 +732,7 @@
"videos": "मैप",
"news": "समाचार",
"books": "किताबें",
"anon-view": "अनाम दृश्य",
"": "--",
"qdr:h": "पिछले घंटे",
"qdr:d": "पिछले 24 घंटे",
"qdr:w": "पिछले सप्ताह",
"qdr:m": "पिछले महीने",
"qdr:y": "पिछला वर्ष",
"config-time-period": "समय सीमा"
"anon-view": "अनाम दृश्य"
},
"lang_ja": {
"search": "検索",
@ -915,7 +754,7 @@
"config-dark": "ダークモード",
"config-safe": "セーフサーチ",
"config-alts": "ソーシャルメディアのリンクを置き換え",
"config-alts-help": "Twitter/YouTubeなどのリンクを、プライバシーを尊重した代替サイトに置き換えます。",
"config-alts-help": "Twitter/YouTube/Instagramなどのリンクを、プライバシーを尊重した代替サイトに置き換えます。",
"config-new-tab": "新しいタブでリンクを開く",
"config-images": "フルサイズの画像を検索",
"config-images-help": "(実験的) デスクトップの画像検索に「画像を表示」オプションを追加します。これにより、画像検索結果のサムネイルの解像度が低くなります。",
@ -942,14 +781,7 @@
"videos": "動画",
"news": "ニュース",
"books": "書籍",
"anon-view": "匿名ビュー",
"": "--",
"qdr:h": "過去 1 時間",
"qdr:d": "過去 24 時間",
"qdr:w": "この1週間",
"qdr:m": "先月",
"qdr:y": "過年度",
"config-time-period": "期間"
"anon-view": "匿名ビュー"
},
"lang_ko": {
"search": "검색",
@ -971,7 +803,7 @@
"config-dark": "다크 모드",
"config-safe": "세이프서치",
"config-alts": "소설 미디어 주소 수정",
"config-alts-help": "Twitter/YouTube 등의 링크를 프라이버시를 존중하는 링크로 대체합니다",
"config-alts-help": "Twitter/YouTube/Instagram 등의 링크를 프라이버시를 존중하는 링크로 대체합니다",
"config-new-tab": "새 탭에서 열기",
"config-images": "최대 크기 이미지 검색",
"config-images-help": "(실험적) 데스크톱 이미지 검색에 '이미지 보기' 옵션을 추가합니다. 이미지 결과 미리보기 썸네일이 낮은 해상도로 표시됩니다.",
@ -998,45 +830,38 @@
"videos": "동영상",
"news": "뉴스",
"books": "도서",
"anon-view": "익명 보기",
"": "--",
"qdr:h": "지난 시간",
"qdr:d": "지난 24시간",
"qdr:w": "지난 주",
"qdr:m": "지난달",
"qdr:y": "지난 해",
"config-time-period": "기간"
"anon-view": "익명 보기"
},
"lang_ku": {
"search": "Bigere",
"config": "Sazkarî",
"search": "Lêgerîn",
"config": "Pevsazî",
"config-country": "Welat",
"config-lang": "Zimanê Navrûyê",
"config-lang-search": "Zimanê Lêgerînê",
"config-near": "Nêzîk",
"config-near-help": "Navê Bajêr",
"config-block": "Astengkirin",
"config-block-help": "Rêzoka malperê ya ji hev veqetandî bi riya bêhnok",
"config-block-help": "Lîsteya malperê ya ji hev veqetandî bi rêya bêhnok",
"config-block-title": "Bi ya Sernavê Asteng bike",
"config-block-title-help": "regex bi kar bîne",
"config-block-url": "Bi ya Girêdanê asteng bike",
"config-block-url": "Bi ya URL asteng bike",
"config-block-url-help": "regex bi kar bîne",
"config-theme": "Rûkar",
"config-nojs": "Javascript Rake di Nîşandanên Nenenas de",
"config-anon-view": "Girêdanên Nenas Nîşan bide",
"config-dark": "Awaya Tarî",
"config-safe": "Lêgerîna Parastî",
"config-alts": "Girêdanên Tora Civakî Biguherîne",
"config-alts-help": "Girêdanên Twitter/YouTube/hwd biguherîne bi alternatîvên ku ji taybetiyê re rêzê digrin.",
"config-alts": "Girêdanên Medya Civakî Biguherîne",
"config-alts-help": "Girêdanên Twitter/YouTube/Instagram/hwd biguherîne bi alternatîvên ku ji taybetiyê re rêzê digrin.",
"config-new-tab": "Girêdanan di Rûgereke Nû de Veke",
"config-images": "Lêgerîna Wêne bi Mezinahiya Tevahî",
"config-images-help": "(Ezmûnî) Vebijêrka 'Wêneyê Nîşan bide' tevlî lêgerînên wêneyê yê sermaseyê bike. Ev ê bibe sedem ku çareseriya encamê wêneyên nîşanê kêmtir bibe.",
"config-images-help": "(Ezmûnî) Vebijêrka 'Wêneyê Nîşan bide' tevlî lêgerînên wêneyê yê sermaseyê bike. Ev ê bibe sedem ku encamê çareseriya wêneyn nîşanê kêmtir bibe.",
"config-tor": "Tor bi kar bîne",
"config-get-only": "Daxwazan bi Dest Bixe",
"config-url": "Rêgeha girêdanê",
"config-pref-url": "Vebijêrkên girêdanê",
"config-pref-encryption": "Vebijêrkan şîfre bike",
"config-pref-help": "Pêdivî bi WHOOGLE_CONFIG_PREFERENCES_KEY dike, wekî din ev ê were paşguhkirin.",
"config-url": "Reha URL",
"config-pref-url": "Preferences URL",
"config-pref-encryption": "Vebijêrkên şîfre bikin",
"config-pref-help": "WHOOGLE_CONFIG_PREFERENCES_KEY hewce dike, wekî din ev ê were paşguh kirin.",
"config-css": "CSS kesane bike",
"load": "Bar bike",
"apply": "Bisepîne",
@ -1047,23 +872,16 @@
"dark": "tarî",
"system": "pergal",
"ratelimit": "Mînak bi rêjeya sînorkirî ye",
"continue-search": "Lêgerîna xwe bi Farside re bidomîne",
"continue-search": "Lêgerîna xwe bi Farside bidomîne",
"all": "Hemû",
"images": "Wêne",
"maps": "Nexşe",
"videos": "Vîdyo",
"news": "Nûçe",
"books": "Pirtûk",
"anon-view": "Dîtina Nenas",
"": "--",
"qdr:h": "Demjimêra borî",
"qdr:d": "24 Demjimêrên borî",
"qdr:w": "Hefteya borî",
"qdr:m": "Meha borî",
"qdr:y": "Sala borî",
"config-time-period": "Pêşsazkariyên demê"
"anon-view": "Dîtina Nenas"
},
"lang_th": {
"lang_th": {
"search": "ค้นหา",
"config": "กำหนดค่า",
"config-country": "ประเทศ",
@ -1083,7 +901,7 @@
"config-dark": "โหมดมืด",
"config-safe": "ค้นหาแบบปลอดภัย",
"config-alts": "แทนที่ลิงก์โซเชียลมีเดีย",
"config-alts-help": "แทนที่ลิงก์ Twitter/YouTube/อื่นๆ ตามความเป็นส่วนตัวด้วยทางเลือกอื่น",
"config-alts-help": "แทนที่ลิงก์ Twitter/YouTube/Instagram/อื่นๆ ตามความเป็นส่วนตัวด้วยทางเลือกอื่น",
"config-new-tab": "เปิดลิงก์ในแท็บใหม่",
"config-images": "ค้นหารูปภาพขนาดเต็ม",
"config-images-help": "(ตัวอย่าง) เพิ่มตัวเลือก 'ดูภาพ' ในการค้นหารูปภาพบนเดสก์ท็อป ซึ่งจะทำให้ภาพขนาดย่อมีความละเอียดต่ำ",
@ -1110,14 +928,7 @@
"videos": "วิดีโอ",
"news": "ข่าว",
"books": "หนังสือ",
"anon-view": "มุมมองที่ไม่ระบุตัวตน",
"": "--",
"qdr:h": "ชั่วโมงที่ผ่านมา",
"qdr:d": "24 ชั่วโมงที่ผ่านมา",
"qdr:w": "สัปดาห์ที่ผ่านมา",
"qdr:m": "เดือนที่ผ่านมา",
"qdr:y": "ปีที่ผ่านมา",
"config-time-period": "ระยะเวลา"
"anon-view": "มุมมองที่ไม่ระบุตัวตน"
},
"lang_cy": {
"search": "Chwiliwch",
@ -1139,7 +950,7 @@
"config-dark": "Modd Tywyll",
"config-safe": "Chwilio'n Ddiogel",
"config-alts": "Disodli Cysylltau Cyfryngau Cymdeithasol",
"config-alts-help": "Yn Amnewid Cysylltau Twitter/YouTube/etc gyda Gwefanau Preifatrwydd.",
"config-alts-help": "Yn Amnewid Cysylltau Twitter/YouTube/Instagram/etc gyda Gwefanau Preifatrwydd.",
"config-new-tab": "Agor Cysylltau mewn Tab Newydd",
"config-images": "Chwiliad Delwedd Maint Llawn",
"config-images-help": "(Arbrofol) Yn dangos y 'Gweld Delwedd' opsiwn i chwiliadau delweddau bwrdd gwaith. Mae hyn yn achosi delwedd o ansawdd is.",
@ -1166,125 +977,6 @@
"videos": "Fideos",
"news": "Newyddion",
"books": "Llyfrau",
"anon-view": "Golwg Anhysbys",
"": "--",
"qdr:h": "Yr awr ddiwethaf",
"qdr:d": "24 awr diwethaf",
"qdr:w": "Yr wythnos ddiwethaf",
"qdr:m": "Mis diwethaf",
"qdr:y": "Y flwyddyn ddiwethaf",
"config-time-period": "Cyfnod Amser"
},
"lang_az": {
"": "--",
"search": "Axtar",
"config": "Konfiqurasiya",
"config-country": "Ölkə",
"config-lang": "İnterfeys dili",
"config-lang-search": "Axtarış dili",
"config-near": "Yaxın",
"config-near-help": "Şəhər Adı",
"config-block": "Blok",
"config-block-help": "Vergüllə ayrılmış sayt siyahısı",
"config-block-title": "Başlığa görə bloklayın",
"config-block-title-help": "Regex istifadə edin",
"config-block-url": "URL ilə bloklayın",
"config-block-url-help": "Regex istifadə edin",
"config-theme": "Mövzu",
"config-nojs": "Anonim Görünüşdə Javascript-i silin",
"config-anon-view": "Anonim Baxış Linklərini göstərin",
"config-dark": "Qaranlıq rejim",
"config-safe": "Təhlükəsiz axtarış",
"config-alts": "Sosial Media Linklərini dəyişdirin",
"config-alts-help": "Twitter/YouTube/s. linkləri alternativlərə uyğun məxfiliklə əvəz edir.",
"config-new-tab": "Linkləri Yeni Tabda açın",
"config-images": "Tam ölçülü Şəkil Axtarışı",
"config-images-help": "(Eksperimental) Masaüstü şəkil axtarışlarına 'Şəkilə Bax' seçimini əlavə edir. Bu, şəkil nəticəsi miniatürlərinin daha aşağı ayırdetmə keyfiyyətinə səbəb olacaq.",
"config-tor": "Tor-dan istifadə edin",
"config-get-only": "Yalnız GET Sorğuları",
"config-url": "Kök URL",
"config-pref-url": "URL Tərcihləri",
"config-pref-encryption": "Encrypt Tərcihləri",
"config-pref-help": "WHOOGLE_CONFIG_PREFERENCES_KEY tələb edir, əks halda bu nəzərə alınmayacaq.",
"config-css": "Fərdi CSS",
"config-time-period": "Müddət",
"load": "Yüklə",
"apply": "Tətbiq edin",
"save-as": "Fərqli Saxla...",
"github-link": "GitHub-da baxın",
"translate": "tərcümə",
"light": "işıqlı",
"dark": "qaranlıq",
"system": "sistem",
"ratelimit": "Nümunə dərəcəsi məhdudlaşdırılıb",
"continue-search": "Axtarışınızı Farside ilə davam etdirin",
"all": "Hamısı",
"images": "Şəkillər",
"maps": "Xəritələr",
"videos": "Videolar",
"news": "Xəbərlər",
"books": "Kitablar",
"anon-view": "Anonim Baxış",
"qdr:h": "Keçən saat",
"qdr:d": "Keçən 24 saat",
"qdr:w": "Keçən həftə",
"qdr:m": "Keçən ay",
"qdr:y": "Keçən il"
},
"lang_el": {
"": "--",
"search": "Αναζήτηση",
"config": "Ρυθμήσεις",
"config-country": "Χώρα",
"config-lang": "Γλώσσα Περιβάλλοντος",
"config-lang-search": "Γλώσσα Αναζήτησης",
"config-near": "Κοντά",
"config-near-help": "Όνομα Πόλης",
"config-block": "Block",
"config-block-help": "Comma-separated site list",
"config-block-title": "Block by Title",
"config-block-title-help": "Use regex",
"config-block-url": "Block by URL",
"config-block-url-help": "Use regex",
"config-theme": "Θέμα",
"config-nojs": "Αφαίρεση Javascript σε ανώνυμη προβολή",
"config-anon-view": "Show Anonymous View Links",
"config-dark": "Dark Mode",
"config-safe": "Ασφαλής Αναζήτηση",
"config-alts": "Replace Social Media Links",
"config-alts-help": "Replaces Twitter/YouTube/etc links with privacy respecting alternatives.",
"config-new-tab": "Άνοιγμα συνδέσμου σε νέα καρτέλα",
"config-images": "Full Size Image Search",
"config-images-help": "(Experimental) Adds the 'View Image' option to desktop image searches. This will cause image result thumbnails to be lower resolution.",
"config-tor": "Χρήση Tor",
"config-get-only": "GET Requests Only",
"config-url": "Root URL",
"config-pref-url": "Preferences URL",
"config-pref-encryption": "Encrypt Preferences",
"config-pref-help": "Requires WHOOGLE_CONFIG_PREFERENCES_KEY, otherwise this will be ignored.",
"config-css": "Custom CSS",
"config-time-period": "Time Period",
"load": "Load",
"apply": "Apply",
"save-as": "Save As...",
"github-link": "View on GitHub",
"translate": "translate",
"light": "light",
"dark": "dark",
"system": "system",
"ratelimit": "Instance has been ratelimited",
"continue-search": "Continue your search with Farside",
"all": "All",
"images": "Images",
"maps": "Maps",
"videos": "Videos",
"news": "News",
"books": "Books",
"anon-view": "Ανώνυμη Προβολή",
"qdr:h": "Τελευταία ώρα",
"qdr:d": "Τελευταίες 24 ώρες",
"qdr:w": "Τελευταία Βδομάδα",
"qdr:m": "Τελευταίος Μήνας",
"qdr:y": "Τελευταίος Χρόνος"
"anon-view": "Golwg Anhysbys"
}
}

@ -1,260 +0,0 @@
<!--
Calculator widget.
This file should contain all required
CSS, HTML, and JS for it.
-->
<style>
#calc-text {
background: var(--whoogle-dark-page-bg);
padding: 8px;
border-radius: 8px;
text-align: right;
font-family: monospace;
font-size: 16px;
color: var(--whoogle-dark-text);
}
#prev-equation {
text-align: right;
}
.error-border {
border: 1px solid red;
}
#calc-btns {
display: grid;
grid-template-columns: repeat(6, 1fr);
grid-template-rows: repeat(5, 1fr);
gap: 5px;
}
#calc-btns button {
background: #313141;
color: var(--whoogle-dark-text);
border: none;
border-radius: 8px;
padding: 8px;
cursor: pointer;
}
#calc-btns button:hover {
background: #414151;
}
#calc-btns .common {
background: #51516a;
}
#calc-btns .common:hover {
background: #61617a;
}
#calc-btn-0 { grid-row: 5; grid-column: 3; }
#calc-btn-1 { grid-row: 4; grid-column: 3; }
#calc-btn-2 { grid-row: 4; grid-column: 4; }
#calc-btn-3 { grid-row: 4; grid-column: 5; }
#calc-btn-4 { grid-row: 3; grid-column: 3; }
#calc-btn-5 { grid-row: 3; grid-column: 4; }
#calc-btn-6 { grid-row: 3; grid-column: 5; }
#calc-btn-7 { grid-row: 2; grid-column: 3; }
#calc-btn-8 { grid-row: 2; grid-column: 4; }
#calc-btn-9 { grid-row: 2; grid-column: 5; }
#calc-btn-EQ { grid-row: 5; grid-column: 5; }
#calc-btn-PT { grid-row: 5; grid-column: 4; }
#calc-btn-BCK { grid-row: 5; grid-column: 6; }
#calc-btn-ADD { grid-row: 4; grid-column: 6; }
#calc-btn-SUB { grid-row: 3; grid-column: 6; }
#calc-btn-MLT { grid-row: 2; grid-column: 6; }
#calc-btn-DIV { grid-row: 1; grid-column: 6; }
#calc-btn-CLR { grid-row: 1; grid-column: 5; }
#calc-btn-PRC{ grid-row: 1; grid-column: 4; }
#calc-btn-RP { grid-row: 1; grid-column: 3; }
#calc-btn-LP { grid-row: 1; grid-column: 2; }
#calc-btn-ABS { grid-row: 1; grid-column: 1; }
#calc-btn-SIN { grid-row: 2; grid-column: 2; }
#calc-btn-COS { grid-row: 3; grid-column: 2; }
#calc-btn-TAN { grid-row: 4; grid-column: 2; }
#calc-btn-SQR { grid-row: 5; grid-column: 2; }
#calc-btn-EXP { grid-row: 2; grid-column: 1; }
#calc-btn-E { grid-row: 3; grid-column: 1; }
#calc-btn-PI { grid-row: 4; grid-column: 1; }
#calc-btn-LOG { grid-row: 5; grid-column: 1; }
</style>
<p id="prev-equation"></p>
<div id="calculator-widget">
<p id="calc-text">0</p>
<div id="calc-btns">
<button id="calc-btn-0" class="common">0</button>
<button id="calc-btn-1" class="common">1</button>
<button id="calc-btn-2" class="common">2</button>
<button id="calc-btn-3" class="common">3</button>
<button id="calc-btn-4" class="common">4</button>
<button id="calc-btn-5" class="common">5</button>
<button id="calc-btn-6" class="common">6</button>
<button id="calc-btn-7" class="common">7</button>
<button id="calc-btn-8" class="common">8</button>
<button id="calc-btn-9" class="common">9</button>
<button id="calc-btn-EQ" class="common">=</button>
<button id="calc-btn-PT" class="common">.</button>
<button id="calc-btn-BCK"></button>
<button id="calc-btn-ADD">+</button>
<button id="calc-btn-SUB">-</button>
<button id="calc-btn-MLT">x</button>
<button id="calc-btn-DIV">/</button>
<button id="calc-btn-CLR">C</button>
<button id="calc-btn-PRC">%</button>
<button id="calc-btn-RP">)</button>
<button id="calc-btn-LP">(</button>
<button id="calc-btn-ABS">|x|</button>
<button id="calc-btn-SIN">sin</button>
<button id="calc-btn-COS">cos</button>
<button id="calc-btn-TAN">tan</button>
<button id="calc-btn-SQR"></button>
<button id="calc-btn-EXP">^</button>
<button id="calc-btn-E"></button>
<button id="calc-btn-PI">π</button>
<button id="calc-btn-LOG">log</button>
</div>
</div>
<script>
// JS does not have this by default.
// from https://www.freecodecamp.org/news/how-to-factorialize-a-number-in-javascript-9263c89a4b38/
function factorial(num) {
if (num < 0)
return -1;
else if (num === 0)
return 1;
else {
return (num * factorial(num - 1));
}
}
// returns true if the user is currently focused on the calculator widget
function usingCalculator() {
let activeElement = document.activeElement;
while (true) {
if (!activeElement) return false;
if (activeElement.id === "calculator-wrapper") return true;
activeElement = activeElement.parentElement;
}
}
const $ = q => document.querySelectorAll(q);
// key bindings for commonly used buttons
const keybindings = {
"0": "0",
"1": "1",
"2": "2",
"3": "3",
"4": "4",
"5": "5",
"6": "6",
"7": "7",
"8": "8",
"9": "9",
"Enter": "EQ",
".": "PT",
"+": "ADD",
"-": "SUB",
"*": "MLT",
"/": "DIV",
"%": "PRC",
"c": "CLR",
"(": "LP",
")": "RP",
"Backspace": "BCK",
}
window.addEventListener("keydown", event => {
if (!usingCalculator()) return;
if (event.key === "Enter" && document.activeElement.id !== "search-bar")
event.preventDefault();
if (keybindings[event.key])
document.getElementById("calc-btn-" + keybindings[event.key]).click();
})
// evaluates the expression string in the calculator display
const calc = () => {
var mathtext = document.getElementById("calc-text");
var statement = mathtext.innerHTML
// remove empty ()
.replace("()", "")
// special constants
.replace("π", "(Math.PI)")
.replace("ℇ", "(Math.E)")
// turns 3(1+2) into 3*(1+2) (for example)
.replace(/(?<=[0-9\)])(?<=[^+\-x*\/%^])\(/, "x(")
// same except reversed
.replace(/\)(?=[0-9\(])(?=[^+\-x*\/%^])/, ")x")
// replace human friendly x with JS *
.replace("x", "*")
// trig & misc functions
.replace("sin", "Math.sin")
.replace("cos", "Math.cos")
.replace("tan", "Math.tan")
.replace("√", "Math.sqrt")
.replace("^", "**")
.replace("abs", "Math.abs")
.replace("log", "Math.log")
;
// add any missing )s to the end
while(true) if (
(statement.match(/\(/g) || []).length >
(statement.match(/\)/g) || []).length
) statement += ")"; else break;
// evaluate the expression.
console.log("calculating [" + statement + "]");
try {
var result = eval(statement);
document.getElementById("prev-equation").innerHTML = mathtext.innerHTML + " = ";
mathtext.innerHTML = result;
mathtext.classList.remove("error-border");
} catch (e) {
mathtext.classList.add("error-border");
console.error(e);
}
}
const updateCalc = (e) => {
// character(s) received from the clicked button
var c = event.target.innerHTML;
var mathtext = document.getElementById("calc-text");
if (mathtext.innerHTML === "0") mathtext.innerHTML = "";
// special cases
switch (c) {
case "C":
// Clear
mathtext.innerHTML = "0";
break;
case "⬅":
// Delete
mathtext.innerHTML = mathtext.innerHTML.slice(0, -1);
if (mathtext.innerHTML.length === 0) {
mathtext.innerHTML = "0";
}
break;
case "=":
calc()
break;
case "sin":
case "cos":
case "tan":
case "log":
case "√":
mathtext.innerHTML += `${c}(`;
break;
case "|x|":
mathtext.innerHTML += "abs("
break;
case "+":
case "-":
case "x":
case "/":
case "%":
case "^":
if (mathtext.innerHTML.length === 0) mathtext.innerHTML = "0";
// prevent typing 2 operators in a row
if (mathtext.innerHTML.match(/[+\-x\/%^] $/))
mathtext.innerHTML = mathtext.innerHTML.slice(0, -3);
mathtext.innerHTML += ` ${c} `;
break;
default:
mathtext.innerHTML += c;
}
}
for (let i of $("#calc-btns button")) {
i.addEventListener('click', event => {
updateCalc(event);
})
}
</script>

@ -19,88 +19,22 @@
{{ error_message }}
</p>
<hr>
{% if query and translation %}
<p>
<h4><a class="link" href="https://farside.link">{{ translation['continue-search'] }}</a></h4>
<ul>
<li>
<a href="https://github.com/benbusby/whoogle-search">Whoogle</a>
<ul>
<li>
<a class="link-color" href="{{farside}}/whoogle/search?q={{query}}{{params}}">
{{farside}}/whoogle/search?q={{query}}
</a>
</li>
</ul>
</li>
<li>
<a href="https://github.com/searxng/searxng">SearXNG</a>
<ul>
<li>
<a class="link-color" href="{{farside}}/searxng/search?q={{query}}">
{{farside}}/searxng/search?q={{query}}
</a>
</li>
</ul>
</li>
</ul>
<hr>
<h4>Other options:</h4>
<ul>
<li>
<a href="https://kagi.com">Kagi</a>
<ul>
<li>Requires account</li>
<li>
<a class="link-color" href="https://kagi.com/search?q={{query}}">
kagi.com/search?q={{query}}
</a>
</li>
</ul>
</li>
<li>
<a href="https://duckduckgo.com">DuckDuckGo</a>
<ul>
<li>
<a class="link-color" href="https://duckduckgo.com/search?q={{query}}">
duckduckgo.com/search?q={{query}}
</a>
</li>
</ul>
</li>
<li>
<a href="https://search.brave.com">Brave Search</a>
<ul>
<li>
<a class="link-color" href="https://search.brave.com/search?q={{query}}">
search.brave.com/search?q={{query}}
</a>
</li>
</ul>
</li>
<li>
<a href="https://ecosia.com">Ecosia</a>
<ul>
<li>
<a class="link-color" href="https://ecosia.com/search?q={{query}}">
ecosia.com/search?q={{query}}
</a>
</li>
</ul>
</li>
<li>
<a href="https://google.com">Google</a>
<ul>
<li>
<a class="link-color" href="https://google.com/search?q={{query}}">
google.com/search?q={{query}}
</a>
</li>
</ul>
</li>
</ul>
<hr>
</p>
<p>
{% if blocked is defined %}
<h4><a class="link" href="https://farside.link">{{ translation['continue-search'] }}</a></h4>
Whoogle:
<br>
<a class="link-color" href="{{farside}}/whoogle/search?q={{query}}{{params}}">
{{farside}}/whoogle/search?q={{query}}
</a>
<br><br>
Searx:
<br>
<a class="link-color" href="{{farside}}/searx/search?q={{query}}">
{{farside}}/searx/search?q={{query}}
</a>
<hr>
{% endif %}
</p>
<a class="link" href="home">Return Home</a>
</div>

@ -89,7 +89,6 @@
dir="auto">
<input name="tbm" value="{{ search_type }}" style="display: none">
<input name="country" value="{{ config.country }}" style="display: none;">
<input name="tbs" value="{{ config.tbs }}" style="display: none;">
<input type="submit" style="display: none;">
<div class="sc"></div>
</div>
@ -136,22 +135,6 @@
</option>
{% endfor %}
</select>
<br />
<label for="config-time-period">{{ translation['config-time-period'] }}: </label>
<select name="tbs" id="result-time-period">
{% for time_period in time_periods %}
<option value="{{ time_period.value }}"
{% if (
config.tbs != '' and config.tbs in time_period.value
) or (
config.tbs == '' and time_period.value == '')
%}
selected
{% endif %}>
{{ translation[time_period.value] }}
</option>
{% endfor %}
</select>
</div>
</div>

@ -108,23 +108,6 @@
{% endfor %}
</select>
</div>
<div class="config-div">
<label for="config-time-period">{{ translation['config-time-period'] }}</label>
<select name="tbs" id="config-time-period">
{% for time_period in time_periods %}
<option value="{{ time_period.value }}"
{% if (
config.tbs != '' and config.tbs in time_period.value
) or (
config.tbs == '' and time_period.value == '')
%}
selected
{% endif %}>
{{ translation[time_period.value] }}
</option>
{% endfor %}
</select>
</div>
<div class="config-div config-div-lang">
<label for="config-lang-interface">{{ translation['config-lang'] }}: </label>
<select name="lang_interface" id="config-lang-interface">
@ -243,13 +226,15 @@
{{ translation['config-css'] }}:
</a>
<textarea
name="style_modified"
name="style"
id="config-style"
autocapitalize="off"
autocomplete="off"
spellcheck="false"
autocorrect="off"
value="">{{ config.style_modified.replace('\t', '') }}</textarea>
value="">
{{ config.style.replace('\t', '') }}
</textarea>
</div>
<div class="config-div config-div-pref-url">
<label for="config-pref-encryption">{{ translation['config-pref-encryption'] }}: </label>

@ -17,6 +17,9 @@
{% if search_type %}
<Param name="tbm" value="{{ search_type }}"/>
{% endif %}
{% if preferences %}
<Param name="preferences" value="{{ preferences }}"/>
{% endif %}
</Url>
<Url type="application/x-suggestions+json" {{ request_type|safe }} template="{{ main_url }}/autocomplete">
<Param name="q" value="{searchTerms}"/>

@ -1,56 +1,8 @@
import json
import requests
import urllib.parse as urlparse
import os
import glob
bangs_dict = {}
DDG_BANGS = 'https://duckduckgo.com/bang.js'
def load_all_bangs(ddg_bangs_file: str, ddg_bangs: dict = {}):
"""Loads all the bang files in alphabetical order
Args:
ddg_bangs_file: The str path to the new DDG bangs json file
ddg_bangs: The dict of ddg bangs. If this is empty, it will load the
bangs from the file
Returns:
None
"""
global bangs_dict
ddg_bangs_file = os.path.normpath(ddg_bangs_file)
if (bangs_dict and not ddg_bangs) or os.path.getsize(ddg_bangs_file) <= 4:
return
bangs = {}
bangs_dir = os.path.dirname(ddg_bangs_file)
bang_files = glob.glob(os.path.join(bangs_dir, '*.json'))
# Normalize the paths
bang_files = [os.path.normpath(f) for f in bang_files]
# Move the ddg bangs file to the beginning
bang_files = sorted([f for f in bang_files if f != ddg_bangs_file])
if ddg_bangs:
bangs |= ddg_bangs
else:
bang_files.insert(0, ddg_bangs_file)
for i, bang_file in enumerate(bang_files):
try:
bangs |= json.load(open(bang_file))
except json.decoder.JSONDecodeError:
# Ignore decoding error only for the ddg bangs file, since this can
# occur if file is still being written
if i != 0:
raise
bangs_dict = dict(sorted(bangs.items()))
DDG_BANGS = 'https://duckduckgo.com/bang.v255.js'
def gen_bangs_json(bangs_file: str) -> None:
@ -85,35 +37,22 @@ def gen_bangs_json(bangs_file: str) -> None:
json.dump(bangs_data, open(bangs_file, 'w'))
print('* Finished creating ddg bangs json')
load_all_bangs(bangs_file, bangs_data)
def suggest_bang(query: str) -> list[str]:
"""Suggests bangs for a user's query
Args:
query: The search query
Returns:
list[str]: A list of bang suggestions
"""
global bangs_dict
return [bangs_dict[_]['suggestion'] for _ in bangs_dict if _.startswith(query)]
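# Usage sketch (illustration only, not part of the diff; module path, file path,
# and queries are assumptions; resolve_bang uses the main-side signature):
from app.utils.bangs import load_all_bangs, suggest_bang, resolve_bang

load_all_bangs('app/static/bangs/bangs.json')
print(suggest_bang('!i'))                  # e.g. ['!i (Whoogle Images)', ...]
print(resolve_bang('!i mountain goats'))   # redirect path, or '' if no bang matches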
def resolve_bang(query: str) -> str:
def resolve_bang(query: str, bangs_dict: dict) -> str:
"""Transform's a user's query to a bang search, if an operator is found
Args:
query: The search query
bangs_dict: The dict of available bang operators, with corresponding
format string search URLs
(i.e. "!w": "https://en.wikipedia.org...?search={}")
Returns:
str: A formatted redirect for a bang search, or an empty str if there
wasn't a match or didn't contain a bang operator
"""
global bangs_dict
# If '!' is not in the query, simply return (speeds up processing)
if '!' not in query:

@ -1,52 +1,10 @@
import base64
from bs4 import BeautifulSoup as bsoup
from cryptography.fernet import Fernet
from flask import Request
import hashlib
import io
import os
import re
from requests import exceptions, get
from urllib.parse import urlparse
ddg_favicon_site = 'http://icons.duckduckgo.com/ip2'
empty_gif = base64.b64decode(
'R0lGODlhAQABAIAAAP///////yH5BAEKAAEALAAAAAABAAEAAAICTAEAOw==')
placeholder_img = base64.b64decode(
'iVBORw0KGgoAAAANSUhEUgAAABkAAAAZCAYAAADE6YVjAAABF0lEQVRIS8XWPw9EMBQA8Eok' \
'JBKrMFqMBt//GzAYLTZ/VomExPDu6uLiaPteqVynBn0/75W2Vp7nEIYhe6p1XcespmmAd7Is' \
'M+4URcGiKPogvMMvmIS2eN9MOMKbKWgf54SYgI4vKkTuQKJKSJErkKzUSkQHUs0lilAg7GMh' \
'ISoIA/hYMiKCKIA2soeowCWEMkfHtUmrXLcyGYYBfN9HF8djiaglWzNZlgVs21YisoAUaEXG' \
'cQTP86QIFgi7vyLzPIPjOEIEC7ANQv/4aZrAdd0TUtc1i+MYnSsMWjPp+x6CIPgJVlUVS5KE' \
'DKig/+wnVzM4pnzaGeHd+ENlWbI0TbVLJBtw2uMfP63wc9d2kDCWxi5Q27bsBerSJ9afJbeL' \
'AAAAAElFTkSuQmCC'
)
def fetch_favicon(url: str) -> bytes:
"""Fetches a favicon using DuckDuckGo's favicon retriever
Args:
url: The url to fetch the favicon from
Returns:
bytes - the favicon bytes, or a placeholder image if one
was not returned
"""
domain = urlparse(url).netloc
response = get(f'{ddg_favicon_site}/{domain}.ico')
if response.status_code == 200 and len(response.content) > 0:
tmp_mem = io.BytesIO()
tmp_mem.write(response.content)
tmp_mem.seek(0)
return tmp_mem.read()
else:
return placeholder_img
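# Usage sketch (illustration only, not part of the diff; URL invented):
favicon = fetch_favicon('https://example.com/page')
print(len(favicon))  # favicon bytes from DDG, or placeholder_img bytes on failure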
def gen_file_hash(path: str, static_file: str) -> str:
file_contents = open(os.path.join(path, static_file), 'rb').read()
@ -56,8 +14,8 @@ def gen_file_hash(path: str, static_file: str) -> str:
return filename_split[0] + '.' + file_hash + filename_split[-1]
def read_config_bool(var: str, default: bool=False) -> bool:
val = os.getenv(var, '1' if default else '0')
def read_config_bool(var: str) -> bool:
val = os.getenv(var, '0')
# user can specify one of the following values as 'true' inputs (all
# variants with upper case letters will also work):
# ('true', 't', '1', 'yes', 'y')
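# Usage sketch (illustration only, not part of the diff; module path and
# variable names are assumptions):
import os
from app.utils.misc import read_config_bool

os.environ['WHOOGLE_EXAMPLE_FLAG'] = 'Yes'
print(read_config_bool('WHOOGLE_EXAMPLE_FLAG'))  # True -- any of the values above
print(read_config_bool('WHOOGLE_MISSING_FLAG'))  # False when unset (falls back to '0')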
@ -82,16 +40,8 @@ def get_request_url(url: str) -> str:
def get_proxy_host_url(r: Request, default: str, root=False) -> str:
scheme = r.headers.get('X-Forwarded-Proto', 'https')
http_host = r.headers.get('X-Forwarded-Host')
full_path = r.full_path if not root else ''
if full_path.startswith('/'):
full_path = f'/{full_path}'
if http_host:
prefix = os.environ.get('WHOOGLE_URL_PREFIX', '')
if prefix:
prefix = f'/{re.sub("[^0-9a-zA-Z]+", "", prefix)}'
return f'{scheme}://{http_host}{prefix}{full_path}'
return f'{scheme}://{http_host}{r.full_path if not root else "/"}'
return default
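# Usage sketch (illustration only, not part of the diff; header values, prefix,
# and module path are assumptions):
import os
from flask import Request
from werkzeug.test import EnvironBuilder
from app.utils.misc import get_proxy_host_url

os.environ['WHOOGLE_URL_PREFIX'] = 'whoogle'  # optional prefix, main side only
env = EnvironBuilder(path='/search', query_string='q=privacy', headers={
    'X-Forwarded-Proto': 'https',
    'X-Forwarded-Host': 'whoogle.example.com',
}).get_environ()
# Builds the externally visible URL from the forwarded headers, yielding
# something like https://whoogle.example.com/whoogle/search?q=privacy
print(get_proxy_host_url(Request(env), default='http://localhost:5000'))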
@ -120,20 +70,3 @@ def get_abs_url(url, page_url):
elif url.startswith('./'):
return f'{page_url}{url[2:]}'
return url
def list_to_dict(lst: list) -> dict:
if len(lst) < 2:
return {}
return {lst[i].replace(' ', ''): lst[i+1].replace(' ', '')
for i in range(0, len(lst), 2)}
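# Usage sketch (illustration only, not part of the diff; the redirect pairs are
# invented). This is how WHOOGLE_REDIRECTS entries become extra SITE_ALTS
# replacements on the main side:
import re

pairs = 'badsite.com:goodsite.com,tracker.org:alt.example.net'
print(list_to_dict(re.split(',|:', pairs)))
# -> {'badsite.com': 'goodsite.com', 'tracker.org': 'alt.example.net'}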
def encrypt_string(key: bytes, string: str) -> str:
cipher_suite = Fernet(key)
return cipher_suite.encrypt(string.encode()).decode()
def decrypt_string(key: bytes, string: str) -> str:
cipher_suite = Fernet(key)
return cipher_suite.decrypt(string.encode()).decode()

@ -1,6 +1,5 @@
from app.models.config import Config
from app.models.endpoint import Endpoint
from app.utils.misc import list_to_dict
from bs4 import BeautifulSoup, NavigableString
import copy
from flask import current_app
@ -9,7 +8,6 @@ import os
import urllib.parse as urlparse
from urllib.parse import parse_qs
import re
import warnings
SKIP_ARGS = ['ref_src', 'utm']
SKIP_PREFIX = ['//www.', '//mobile.', '//m.']
@ -27,15 +25,17 @@ BLACKLIST = [
'Reklama', 'Реклама', 'Anunț', '광고', 'annons', 'Annonse', 'Iklan',
'広告', 'Augl.', 'Mainos', 'Advertentie', 'إعلان', 'Գովազդ', 'विज्ञापन',
'Reklam', 'آگهی', 'Reklāma', 'Reklaam', 'Διαφήμιση', 'מודעה', 'Hirdetés',
'Anúncio', 'Quảng cáo','โฆษณา', 'sponsored', 'patrocinado', 'gesponsert'
'Anúncio', 'Quảng cáo','โฆษณา', 'sponsored', 'patrocinado'
]
SITE_ALTS = {
'twitter.com': os.getenv('WHOOGLE_ALT_TW', 'farside.link/nitter'),
'youtube.com': os.getenv('WHOOGLE_ALT_YT', 'farside.link/invidious'),
'instagram.com': os.getenv('WHOOGLE_ALT_IG', 'farside.link/bibliogram/u'),
'reddit.com': os.getenv('WHOOGLE_ALT_RD', 'farside.link/libreddit'),
**dict.fromkeys([
'medium.com',
'.medium.com',
'//medium.com',
'levelup.gitconnected.com'
], os.getenv('WHOOGLE_ALT_MD', 'farside.link/scribe')),
'imgur.com': os.getenv('WHOOGLE_ALT_IMG', 'farside.link/rimgo'),
@ -44,30 +44,6 @@ SITE_ALTS = {
'quora.com': os.getenv('WHOOGLE_ALT_QUORA', 'farside.link/quetre')
}
# Include custom site redirects from WHOOGLE_REDIRECTS
SITE_ALTS.update(list_to_dict(re.split(',|:', os.getenv('WHOOGLE_REDIRECTS', ''))))
def contains_cjko(s: str) -> bool:
"""This function check whether or not a string contains Chinese, Japanese,
or Korean characters. It employs regex and uses the u escape sequence to
match any character in a set of Unicode ranges.
Args:
s (str): string to be checked
Returns:
bool: True if the input s contains the characters and False otherwise
"""
unicode_ranges = ('\u4e00-\u9fff' # Chinese characters
'\u3040-\u309f' # Japanese hiragana
'\u30a0-\u30ff' # Japanese katakana
'\u4e00-\u9faf' # Japanese kanji
'\uac00-\ud7af' # Korean hangul syllables
'\u1100-\u11ff' # Korean hangul jamo
)
return bool(re.search(fr'[{unicode_ranges}]', s))
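# Quick illustration (not part of the diff; example strings invented): CJK text
# has no word boundaries for `\b` to anchor to, which is why bold_search_terms
# below switches to a boundary-free pattern when this check returns True.
print(contains_cjko('検索エンジン'))   # True  -> use the pattern without \b
print(contains_cjko('search engine'))  # False -> keep the \b word boundaries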
def bold_search_terms(response: str, query: str) -> BeautifulSoup:
"""Wraps all search terms in bold tags (<b>). If any terms are wrapped
@ -91,18 +67,12 @@ def bold_search_terms(response: str, query: str) -> BeautifulSoup:
# Ensure target word is escaped for regex
target_word = re.escape(target_word)
# Check if the word contains Chinese, Japanese, or Korean characters
if contains_cjko(target_word):
reg_pattern = fr'((?![{{}}<>-]){target_word}(?![{{}}<>-]))'
else:
reg_pattern = fr'\b((?![{{}}<>-]){target_word}(?![{{}}<>-]))\b'
if re.match('.*[@_!#$%^&*()<>?/\|}{~:].*', target_word) or (
element.parent and element.parent.name == 'style'):
return
element.replace_with(BeautifulSoup(
re.sub(reg_pattern,
re.sub(fr'\b((?![{{}}<>-]){target_word}(?![{{}}<>-]))\b',
r'<b>\1</b>',
element,
flags=re.I), 'html.parser')
@ -144,34 +114,19 @@ def get_first_link(soup: BeautifulSoup) -> str:
str: A str link to the first result
"""
first_link = ''
orig_details = []
# Temporarily remove details so we don't grab those links
for details in soup.find_all('details'):
temp_details = soup.new_tag('removed_details')
orig_details.append(details.replace_with(temp_details))
# Replace hrefs with only the intended destination (no "utm" type tags)
for a in soup.find_all('a', href=True):
# Return the first search result URL
if a['href'].startswith('http://') or a['href'].startswith('https://'):
first_link = a['href']
break
if 'url?q=' in a['href']:
return filter_link_args(a['href'])
return ''
# Add the details back
for orig_detail, details in zip(orig_details, soup.find_all('removed_details')):
details.replace_with(orig_detail)
return first_link
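# Usage sketch (illustration only, not part of the diff; markup invented):
from bs4 import BeautifulSoup
page = BeautifulSoup('<a href="https://example.com/result">hit</a>', 'html.parser')
print(get_first_link(page))  # main side -> https://example.com/result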
def get_site_alt(link: str, site_alts: dict = SITE_ALTS) -> str:
def get_site_alt(link: str) -> str:
"""Returns an alternative to a particular site, if one is configured
Args:
link: A string result URL to check against the site_alts map
site_alts: A map of site alternatives to replace with. Defaults to SITE_ALTS
link: A string result URL to check against the SITE_ALTS map
Returns:
str: An updated (or ignored) result link
@ -180,12 +135,7 @@ def get_site_alt(link: str, site_alts: dict = SITE_ALTS) -> str:
# Need to replace full hostname with alternative to encapsulate
# subdomains as well
parsed_link = urlparse.urlparse(link)
# Extract subdomain separately from the domain+tld. The subdomain
# is used for wikiless translations.
split_host = parsed_link.netloc.split('.')
subdomain = split_host[0] if len(split_host) > 2 else ''
hostname = '.'.join(split_host[-2:])
hostname = parsed_link.hostname
# The full scheme + hostname is used when comparing against the list of
# available alternative services, due to how Medium links are constructed.
@ -193,23 +143,22 @@ def get_site_alt(link: str, site_alts: dict = SITE_ALTS) -> str:
# "https://medium.com/..." should match, but "philomedium.com" should not)
hostcomp = f'{parsed_link.scheme}://{hostname}'
for site_key in site_alts.keys():
site_alt = f'{parsed_link.scheme}://{site_key}'
if not hostname or site_alt not in hostcomp or not site_alts[site_key]:
for site_key in SITE_ALTS.keys():
if not hostname or site_key not in hostname or not SITE_ALTS[site_key]:
continue
# Wikipedia -> Wikiless replacements require the subdomain (if it's
# a 2-char language code) to be passed as a URL param to Wikiless
# in order to preserve the language setting.
params = ''
if 'wikipedia' in hostname and len(subdomain) == 2:
hostname = f'{subdomain}.{hostname}'
params = f'?lang={subdomain}'
elif 'medium' in hostname and len(subdomain) > 0:
hostname = f'{subdomain}.{hostname}'
parsed_alt = urlparse.urlparse(site_alts[site_key])
link = link.replace(hostname, site_alts[site_key]) + params
if 'wikipedia' in hostname:
subdomain = hostname.split('.')[0]
if len(subdomain) == 2:
params = f'?lang={subdomain}'
parsed_alt = urlparse.urlparse(SITE_ALTS[site_key])
link = link.replace(hostname, SITE_ALTS[site_key]) + params
# If a scheme is specified in the alternative, this results in a
# replaced link that looks like "https://http://altservice.tld".
# In this case, we can remove the original scheme from the result
@ -218,13 +167,7 @@ def get_site_alt(link: str, site_alts: dict = SITE_ALTS) -> str:
link = '//'.join(link.split('//')[1:])
for prefix in SKIP_PREFIX:
if parsed_alt.scheme:
# If a scheme is specified, remove everything before the
# first occurrence of it
link = f'{parsed_alt.scheme}{link.split(parsed_alt.scheme, 1)[-1]}'
else:
# Otherwise, replace the first occurrence of the prefix
link = link.replace(prefix, '//', 1)
link = link.replace(prefix, '//')
break
return link
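# Minimal sketch of the Wikipedia -> Wikiless rewrite described above
# (URL illustrative; farside.link/wikiless is the shipped default alt):
link = 'https://en.wikipedia.org/wiki/Privacy'
subdomain, hostname = 'en', 'wikipedia.org'
hostname = f'{subdomain}.{hostname}'   # keep the language subdomain
params = f'?lang={subdomain}'          # preserved as a Wikiless URL param
print(link.replace(hostname, 'farside.link/wikiless') + params)
# -> https://farside.link/wikiless/wiki/Privacy?lang=en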
@ -302,6 +245,44 @@ def append_anon_view(result: BeautifulSoup, config: Config) -> None:
av_link['class'] = 'anon-view'
result.append(av_link)
def add_ip_card(html_soup: BeautifulSoup, ip: str) -> BeautifulSoup:
"""Adds the client's IP address to the search results
if query contains keywords
Args:
html_soup: The parsed search result containing the keywords
ip: ip address of the client
Returns:
BeautifulSoup
"""
main_div = html_soup.select_one('#main')
if main_div:
# HTML IP card tag
ip_tag = html_soup.new_tag('div')
ip_tag['class'] = 'ZINbbc xpd O9g5cc uUPGi'
# For IP Address html tag
ip_address = html_soup.new_tag('div')
ip_address['class'] = 'kCrYT ip-address-div'
ip_address.string = ip
# Text below the IP address
ip_text = html_soup.new_tag('div')
ip_text.string = 'Your public IP address'
ip_text['class'] = 'kCrYT ip-text-div'
# Adding all the above html tags to the IP card
ip_tag.append(ip_address)
ip_tag.append(ip_text)
# Insert the element at the top of the result list
main_div.insert_before(ip_tag)
return html_soup
def check_currency(response: str) -> dict:
"""Check whether the results have currency conversion
@ -435,10 +416,6 @@ def get_tabs_content(tabs: dict,
Returns:
dict: contains the name, the href and if the tab is selected or not
"""
map_query = full_query
if '-site:' in full_query:
block_idx = full_query.index('-site:')
map_query = map_query[:block_idx]
tabs = copy.deepcopy(tabs)
for tab_id, tab_content in tabs.items():
# update name to desired language
@ -454,9 +431,7 @@ def get_tabs_content(tabs: dict,
if preferences:
query = f"{query}&preferences={preferences}"
tab_content['href'] = tab_content['href'].format(
query=query,
map_query=map_query)
tab_content['href'] = tab_content['href'].format(query=query)
# update if selected tab (default all tab is selected)
if tab_content['tbm'] == search_type:

@ -1,6 +1,7 @@
import os
import re
from typing import Any
from app.filter import Filter
from app.request import gen_query
from app.utils.misc import get_proxy_host_url
@ -64,7 +65,6 @@ class Search:
self.config = config
self.session_key = session_key
self.query = ''
self.widget = ''
self.cookies_disabled = cookies_disabled
self.search_type = self.request_params.get(
'tbm') if 'tbm' in self.request_params else ''
@ -102,22 +102,9 @@ class Search:
except InvalidToken:
pass
# Strip '!' for "feeling lucky" queries
if match := re.search("(^|\s)!($|\s)", q):
self.feeling_lucky = True
start, end = match.span()
self.query = " ".join([seg for seg in [q[:start], q[end:]] if seg])
else:
self.feeling_lucky = False
self.query = q
# Check for possible widgets
self.widget = "ip" if re.search("([^a-z0-9]|^)my *[^a-z0-9] *(ip|internet protocol)" +
"($|( *[^a-z0-9] *(((addres|address|adres|" +
"adress)|a)? *$)))", self.query.lower()) else self.widget
self.widget = 'calculator' if re.search(
r"\bcalculator\b|\bcalc\b|\bcalclator\b|\bmath\b",
self.query.lower()) else self.widget
# Strip leading '! ' for "feeling lucky" queries
self.feeling_lucky = q.startswith('! ')
self.query = q[2:] if self.feeling_lucky else q
return self.query
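# Minimal sketch of the '!' handling above: the bang may sit at either end
# of the query (sample queries illustrative):
import re
for q in ('! wikipedia', 'github !'):
    match = re.search(r"(^|\s)!($|\s)", q)
    start, end = match.span()
    print(" ".join(seg for seg in (q[:start], q[end:]) if seg))
# -> wikipedia
# -> github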
def generate_response(self) -> str:
@ -152,12 +139,10 @@ class Search:
and not g.user_request.mobile)
get_body = g.user_request.send(query=full_query,
force_mobile=view_image,
user_agent=self.user_agent)
force_mobile=view_image)
# Produce cleanable html soup from response
get_body_safed = get_body.text.replace("&lt;","andlt;").replace("&gt;","andgt;")
html_soup = bsoup(get_body_safed, 'html.parser')
html_soup = bsoup(get_body.text, 'html.parser')
# Replace current soup if view_image is active
if view_image:
@ -167,25 +152,32 @@ class Search:
if g.user_request.tor_valid:
html_soup.insert(0, bsoup(TOR_BANNER, 'html.parser'))
formatted_results = content_filter.clean(html_soup)
if self.feeling_lucky:
if lucky_link := get_first_link(formatted_results):
return lucky_link
# Fall through to regular search if unable to find link
self.feeling_lucky = False
# Append user config to all search links, if available
param_str = ''.join('&{}={}'.format(k, v)
for k, v in
self.request_params.to_dict(flat=True).items()
if self.config.is_safe_key(k))
for link in formatted_results.find_all('a', href=True):
link['rel'] = "nofollow noopener noreferrer"
if 'search?' not in link['href'] or link['href'].index(
'search?') > 1:
continue
link['href'] += param_str
return str(formatted_results)
return get_first_link(html_soup)
else:
formatted_results = content_filter.clean(html_soup)
# Append user config to all search links, if available
param_str = ''.join('&{}={}'.format(k, v)
for k, v in
self.request_params.to_dict(flat=True).items()
if self.config.is_safe_key(k))
for link in formatted_results.find_all('a', href=True):
link['rel'] = "nofollow noopener noreferrer"
if 'search?' not in link['href'] or link['href'].index(
'search?') > 1:
continue
link['href'] += param_str
return str(formatted_results)
def check_kw_ip(self) -> re.Match:
"""Checks for keywords related to 'my ip' in the query
Returns:
re.Match: the match object if the query asks for the client IP, else None
"""
return re.search("([^a-z0-9]|^)my *[^a-z0-9] *(ip|internet protocol)" +
"($|( *[^a-z0-9] *(((addres|address|adres|" +
"adress)|a)? *$)))", self.query.lower())

@ -1,10 +1,10 @@
from cryptography.fernet import Fernet
from flask import current_app as app
REQUIRED_SESSION_VALUES = ['uuid', 'config', 'key', 'auth']
REQUIRED_SESSION_VALUES = ['uuid', 'config', 'key']
def generate_key() -> bytes:
def generate_user_key() -> bytes:
"""Generates a key for encrypting searches and element URLs
Args:

@ -1,71 +0,0 @@
from pathlib import Path
from bs4 import BeautifulSoup
# root
BASE_DIR = Path(__file__).parent.parent.parent
def add_ip_card(html_soup: BeautifulSoup, ip: str) -> BeautifulSoup:
"""Adds the client's IP address to the search results
if query contains keywords
Args:
html_soup: The parsed search result containing the keywords
ip: ip address of the client
Returns:
BeautifulSoup
"""
main_div = html_soup.select_one('#main')
if main_div:
# HTML IP card tag
ip_tag = html_soup.new_tag('div')
ip_tag['class'] = 'ZINbbc xpd O9g5cc uUPGi'
# For IP Address html tag
ip_address = html_soup.new_tag('div')
ip_address['class'] = 'kCrYT ip-address-div'
ip_address.string = ip
# Text below the IP address
ip_text = html_soup.new_tag('div')
ip_text.string = 'Your public IP address'
ip_text['class'] = 'kCrYT ip-text-div'
# Adding all the above html tags to the IP card
ip_tag.append(ip_address)
ip_tag.append(ip_text)
# Insert the element at the top of the result list
main_div.insert_before(ip_tag)
return html_soup
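# Minimal sketch of the insert_before() pattern above (markup illustrative):
from bs4 import BeautifulSoup
soup = BeautifulSoup('<div id="main"><div>result</div></div>', 'html.parser')
card = soup.new_tag('div')
card.string = '203.0.113.7'  # placeholder address
soup.select_one('#main').insert_before(card)
print(soup)
# -> <div>203.0.113.7</div><div id="main"><div>result</div></div>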
def add_calculator_card(html_soup: BeautifulSoup) -> BeautifulSoup:
"""Adds the a calculator widget to the search results
if query contains keywords
Args:
html_soup: The parsed search result containing the keywords
Returns:
BeautifulSoup
"""
main_div = html_soup.select_one('#main')
if main_div:
# absolute path
widget_file = open(BASE_DIR / 'app/static/widgets/calculator.html', encoding="utf8")
widget_tag = html_soup.new_tag('div')
widget_tag['class'] = 'ZINbbc xpd O9g5cc uUPGi'
widget_tag['id'] = 'calculator-wrapper'
calculator_text = html_soup.new_tag('div')
calculator_text['class'] = 'kCrYT ip-address-div'
calculator_text.string = 'Calculator'
calculator_widget = html_soup.new_tag('div')
calculator_widget.append(BeautifulSoup(widget_file, 'html.parser'))
calculator_widget['class'] = 'kCrYT ip-text-div'
widget_tag.append(calculator_text)
widget_tag.append(calculator_widget)
main_div.insert_before(widget_tag)
widget_file.close()
return html_soup

@ -3,7 +3,7 @@ name: whoogle
description: A self hosted search engine on Kubernetes
type: application
version: 0.1.0
appVersion: 0.8.4
appVersion: 0.8.0
icon: https://github.com/benbusby/whoogle-search/raw/main/app/static/img/favicon/favicon-96x96.png

@ -52,20 +52,10 @@ spec:
httpGet:
path: /
port: http
{{- if and .Values.conf.WHOOGLE_USER .Values.conf.WHOOGLE_PASS }}
httpHeaders:
- name: Authorization
value: Basic {{ b64enc (printf "%s:%s" .Values.conf.WHOOGLE_USER .Values.conf.WHOOGLE_PASS) }}
{{- end }}
readinessProbe:
httpGet:
path: /
port: http
{{- if and .Values.conf.WHOOGLE_USER .Values.conf.WHOOGLE_PASS }}
httpHeaders:
- name: Authorization
value: Basic {{ b64enc (printf "%s:%s" .Values.conf.WHOOGLE_USER .Values.conf.WHOOGLE_PASS) }}
{{- end }}
resources:
{{- toYaml .Values.resources | nindent 12 }}
{{- with .Values.nodeSelector }}

@ -36,6 +36,7 @@ conf: {}
# HTTPS_ONLY: "" # Enforce HTTPS. (See https://github.com/benbusby/whoogle-search#https-enforcement)
# WHOOGLE_ALT_TW: "" # The twitter.com alternative to use when site alternatives are enabled in the config.
# WHOOGLE_ALT_YT: "" # The youtube.com alternative to use when site alternatives are enabled in the config.
# WHOOGLE_ALT_IG: "" # The instagram.com alternative to use when site alternatives are enabled in the config.
# WHOOGLE_ALT_RD: "" # The reddit.com alternative to use when site alternatives are enabled in the config.
# WHOOGLE_ALT_TL: "" # The Google Translate alternative to use. This is used for all "translate ____" searches.
# WHOOGLE_ALT_MD: "" # The medium.com alternative to use when site alternatives are enabled in the config.
@ -62,7 +63,7 @@ conf: {}
# WHOOGLE_CONFIG_URL: "" # The root url of the instance (https://<your url>/)
# WHOOGLE_CONFIG_STYLE: "" # The custom CSS to use for styling (should be single line)
# WHOOGLE_CONFIG_PREFERENCES_ENCRYPTED: "" # Encrypt preferences token, requires key
# WHOOGLE_CONFIG_PREFERENCES_KEY: "" # Key to encrypt preferences in URL (REQUIRED to show url)
# WHOOGLE_CONFIG_PREFERENCES_KEY: "" # Key to encrypt preferences in URL (REQUIRED to show url)
podAnnotations: {}
podSecurityContext: {}

@ -1,24 +1,11 @@
https://search.albony.xyz
https://search.garudalinux.org
https://search.dr460nf1r3.org
https://search.nezumi.party
https://s.tokhmi.xyz
https://search.sethforprivacy.com
https://whoogle.dcs0.hu
https://whoogle.lunar.icu
https://whoogle.esmailelbob.xyz
https://gowogle.voring.me
https://whoogle.privacydev.net
https://whoogle.hostux.net
https://wg.vern.cc
https://whoogle.hxvy0.gq
https://whoogle.ungovernable.men
https://whoogle2.ungovernable.men
https://whoogle3.ungovernable.men
https://wgl.frail.duckdns.org
https://whoogle.no-logs.com
https://whoogle.ftw.lol
https://whoogle-search--replitcomreside.repl.co
https://search.notrustverify.ch
https://whoogle.datura.network
https://whoogle.yepserver.xyz
https://search.snine.nl
https://www.indexia.gq

@ -1,5 +0,0 @@
import subprocess
# A plague upon Replit and all who have built it
replit_cmd = "killall -q python3 > /dev/null 2>&1; pip install -r requirements.txt && ./run"
subprocess.run(replit_cmd, shell=True)

@ -1,27 +1,10 @@
#!/bin/sh
FF_STRING="FascistFirewall 1"
if [ "$WHOOGLE_TOR_SERVICE" == "0" ]; then
echo "Skipping Tor startup..."
exit 0
fi
if [ "$WHOOGLE_TOR_FF" == "1" ]; then
if (grep -q "$FF_STRING" /etc/tor/torrc); then
echo "FascistFirewall feature already enabled."
else
echo "$FF_STRING" >> /etc/tor/torrc
if [ "$?" -eq 0 ]; then
echo "FascistFirewall added to /etc/tor/torrc"
else
echo "ERROR: Unable to modify /etc/tor/torrc with $FF_STRING."
exit 1
fi
fi
fi
if [ "$(whoami)" != "root" ]; then
tor -f /etc/tor/torrc
else

@ -7,6 +7,3 @@ ExtORPortCookieAuthFileGroupReadable 1
CacheDirectoryGroupReadable 1
CookieAuthFile /var/lib/tor/control_auth_cookie
Log debug-notice file /dev/null
# UseBridges 1
# ClientTransportPlugin obfs4 exec /usr/bin/obfs4proxy
# Bridge obfs4 ip and so on

@ -1,67 +0,0 @@
import json
import pathlib
import requests
lingva = 'https://lingva.ml/api/v1/en'
def format_lang(lang: str) -> str:
# Chinese (traditional and simplified) require
# a different format for lingva translations
if 'zh-' in lang:
if lang == 'zh-TW':
return 'zh_HANT'
return 'zh'
# Strip lang prefix to leave only the actual
# language code (e.g. 'en', 'fr', etc.)
return lang.replace('lang_', '')
def translate(v: str, lang: str) -> str:
# Strip lang prefix to leave only the actual
# language code (e.g. "es", "fr", etc.)
lang = format_lang(lang)
lingva_req = f'{lingva}/{lang}/{v}'
response = requests.get(lingva_req).json()
if 'translation' in response:
return response['translation']
return ''
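# Minimal usage sketch of the helpers above (network call; lingva.ml
# availability and response shape are assumptions):
print(format_lang('lang_fr'))          # -> 'fr'
print(format_lang('zh-TW'))            # -> 'zh_HANT'
print(translate('Search', 'lang_es'))  # -> e.g. 'Buscar'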
if __name__ == '__main__':
file_path = pathlib.Path(__file__).parent.resolve()
tl_path = 'app/static/settings/translations.json'
with open(f'{file_path}/../{tl_path}', 'r+', encoding='utf-8') as tl_file:
tl_data = json.load(tl_file)
# If there are any English translations that don't
# exist for other languages, extract them and translate
# them now
en_tl = tl_data['lang_en']
for k, v in en_tl.items():
for lang in tl_data:
if lang == 'lang_en' or k in tl_data[lang]:
continue
translation = ''
if len(k) == 0:
# Special case for placeholder text that gets used
# for translations without any key present
translation = v
else:
# Translate the string using lingva
translation = translate(v, lang)
if len(translation) == 0:
print(f'! Unable to translate {lang}[{k}]')
continue
print(f'{lang}[{k}] = {translation}')
tl_data[lang][k] = translation
# Write out updated translations json
print(json.dumps(tl_data, indent=4, ensure_ascii=False))

@ -1,3 +0,0 @@
[build-system]
requires = ["setuptools", "wheel"]
build-backend = "setuptools.build_meta"

@ -1,37 +1,34 @@
attrs==22.2.0
beautifulsoup4==4.11.2
attrs==19.3.0
beautifulsoup4==4.10.0
brotli==1.0.9
cachelib==0.10.2
certifi==2023.7.22
cffi==1.15.1
chardet==5.1.0
click==8.1.3
cryptography==3.3.2; platform_machine == 'armv7l'
cryptography==42.0.4; platform_machine != 'armv7l'
cssutils==2.6.0
cachelib==0.4.1
certifi==2020.4.5.1
cffi==1.15.0
chardet==3.0.4
click==8.0.3
cryptography==3.3.2
cssutils==2.4.0
defusedxml==0.7.1
Flask==2.3.2
idna==3.4
itsdangerous==2.1.2
Jinja2==3.1.3
MarkupSafe==2.1.2
more-itertools==9.0.0
packaging==23.0
pluggy==1.0.0
pycodestyle==2.10.0
Flask==1.1.1
idna==2.9
itsdangerous==1.1.0
Jinja2==2.11.3
MarkupSafe==1.1.1
more-itertools==8.3.0
packaging==20.4
pluggy==0.13.1
pycodestyle==2.6.0
pycparser==2.21
pyOpenSSL==19.1.0; platform_machine == 'armv7l'
pyOpenSSL==24.0.0; platform_machine != 'armv7l'
pyparsing==3.0.9
pyOpenSSL==19.1.0
pyparsing==2.4.7
PySocks==1.7.1
pytest==7.2.1
python-dateutil==2.8.2
requests==2.31.0
soupsieve==2.4
stem==1.8.1
urllib3==1.26.18
validators==0.22.0
pytest==7.2.0
python-dateutil==2.8.1
requests==2.25.1
soupsieve==1.9.5
stem==1.8.0
urllib3==1.26.5
waitress==2.1.2
wcwidth==0.2.6
Werkzeug==3.0.1
python-dotenv==0.21.1
wcwidth==0.1.9
Werkzeug==0.16.0
python-dotenv==0.16.0

1
run

@ -29,7 +29,6 @@ else
python3 -um app \
--unix-socket "$UNIX_SOCKET"
else
echo "Running on http://${ADDRESS:-0.0.0.0}:${PORT:-"${EXPOSE_PORT:-5000}"}"
python3 -um app \
--host "${ADDRESS:-0.0.0.0}" \
--port "${PORT:-"${EXPOSE_PORT:-5000}"}"

@ -1,6 +1,5 @@
[metadata]
name = whoogle-search
version = attr: app.version.__version__
url = https://github.com/benbusby/whoogle-search
description = Self-hosted, ad-free, privacy-respecting metasearch engine
long_description = file: README.md
@ -19,15 +18,14 @@ packages = find:
include_package_data = True
install_requires=
beautifulsoup4
brotli
cssutils
cryptography
defusedxml
Flask
Flask-Session
python-dotenv
requests
stem
validators
waitress
[options.extras_require]
@ -36,10 +34,6 @@ test =
python-dateutil
dev = pycodestyle
[options.packages.find]
exclude =
test*
[options.entry_points]
console_scripts =
whoogle-search = app.routes:run_app

@ -1,7 +1,8 @@
import os
import setuptools
optional_dev_tag = ''
if os.getenv('DEV_BUILD'):
optional_dev_tag = '.dev' + os.getenv('DEV_BUILD')
__version__ = '0.8.4' + optional_dev_tag
setuptools.setup(version='0.8.0' + optional_dev_tag)

@ -1,5 +1,5 @@
from app import app
from app.utils.session import generate_key
from app.utils.session import generate_user_key
import pytest
import random
@ -18,7 +18,6 @@ def client():
with app.test_client() as client:
with client.session_transaction() as session:
session['uuid'] = 'test'
session['key'] = app.enc_key
session['key'] = generate_user_key()
session['config'] = {}
session['auth'] = False
yield client

@ -2,7 +2,7 @@ from cryptography.fernet import Fernet
from app import app
from app.models.endpoint import Endpoint
from app.utils.session import generate_key, valid_user_session
from app.utils.session import generate_user_key, valid_user_session
JAPAN_PREFS = 'uG-gGIJwHdqxl6DrS3mnu_511HlQcRpxYlG03Xs-' \
@ -20,9 +20,9 @@ JAPAN_PREFS = 'uG-gGIJwHdqxl6DrS3mnu_511HlQcRpxYlG03Xs-' \
def test_generate_user_keys():
key = generate_key()
key = generate_user_key()
assert Fernet(key)
assert generate_key() != key
assert generate_user_key() != key
def test_valid_session(client):

@ -2,8 +2,7 @@ from bs4 import BeautifulSoup
from app.filter import Filter
from app.models.config import Config
from app.models.endpoint import Endpoint
from app.utils import results
from app.utils.session import generate_key
from app.utils.session import generate_user_key
from datetime import datetime
from dateutil.parser import ParserError, parse
from urllib.parse import urlparse
@ -12,7 +11,7 @@ from test.conftest import demo_config
def get_search_results(data):
secret_key = generate_key()
secret_key = generate_user_key()
soup = Filter(user_key=secret_key, config=Config(**demo_config)).clean(
BeautifulSoup(data, 'html.parser'))
@ -45,11 +44,17 @@ def test_get_results(client):
def test_post_results(client):
rv = client.post(f'/{Endpoint.search}', data=dict(q='test'))
assert rv._status_code == 302
assert rv._status_code == 200
# Depending on the search, there can be more
# than 10 result divs
results = get_search_results(rv.data)
assert len(results) >= 10
assert len(results) <= 15
def test_translate_search(client):
rv = client.get(f'/{Endpoint.search}?q=translate hola')
rv = client.post(f'/{Endpoint.search}', data=dict(q='translate hola'))
assert rv._status_code == 200
# Pretty weak test, but better than nothing
@ -59,7 +64,7 @@ def test_translate_search(client):
def test_block_results(client):
rv = client.get(f'/{Endpoint.search}?q=pinterest')
rv = client.post(f'/{Endpoint.search}', data=dict(q='pinterest'))
assert rv._status_code == 200
has_pinterest = False
@ -74,7 +79,7 @@ def test_block_results(client):
rv = client.post(f'/{Endpoint.config}', data=demo_config)
assert rv._status_code == 302
rv = client.get(f'/{Endpoint.search}?q=pinterest')
rv = client.post(f'/{Endpoint.search}', data=dict(q='pinterest'))
assert rv._status_code == 200
for link in BeautifulSoup(rv.data, 'html.parser').find_all('a', href=True):
@ -85,7 +90,7 @@ def test_block_results(client):
def test_view_my_ip(client):
rv = client.get(f'/{Endpoint.search}?q=my ip address')
rv = client.post(f'/{Endpoint.search}', data=dict(q='my ip address'))
assert rv._status_code == 200
# Pretty weak test, but better than nothing
@ -96,13 +101,13 @@ def test_view_my_ip(client):
def test_recent_results(client):
times = {
'tbs=qdr:y': 365,
'tbs=qdr:m': 31,
'tbs=qdr:w': 7
'past year': 365,
'past month': 31,
'past week': 7
}
for time, num_days in times.items():
rv = client.get(f'/{Endpoint.search}?q=test&' + time)
rv = client.post(f'/{Endpoint.search}', data=dict(q='test :' + time))
result_divs = get_search_results(rv.data)
current_date = datetime.now()
@ -127,7 +132,7 @@ def test_leading_slash_search(client):
assert rv._status_code == 200
soup = Filter(
user_key=generate_key(),
user_key=generate_user_key(),
config=Config(**demo_config),
query=q
).clean(BeautifulSoup(rv.data, 'html.parser'))
@ -137,22 +142,3 @@ def test_leading_slash_search(client):
continue
assert link['href'].startswith(f'{Endpoint.search}')
def test_site_alt_prefix_skip():
# Ensure prefixes are skipped correctly for site alts
# default site_alts (farside.link)
assert results.get_site_alt(link = 'https://www.reddit.com') == 'https://farside.link/libreddit'
assert results.get_site_alt(link = 'https://www.twitter.com') == 'https://farside.link/nitter'
assert results.get_site_alt(link = 'https://www.youtube.com') == 'https://farside.link/invidious'
test_site_alts = {
'reddit.com': 'reddit.endswithmobile.domain',
'twitter.com': 'https://twitter.endswithm.domain',
'youtube.com': 'http://yt.endswithwww.domain',
}
# Domains with part of SKIP_PREFIX in them
assert results.get_site_alt(link = 'https://www.reddit.com', site_alts = test_site_alts) == 'https://reddit.endswithmobile.domain'
assert results.get_site_alt(link = 'https://www.twitter.com', site_alts = test_site_alts) == 'https://twitter.endswithm.domain'
assert results.get_site_alt(link = 'https://www.youtube.com', site_alts = test_site_alts) == 'http://yt.endswithwww.domain'

@ -17,15 +17,8 @@ def test_search(client):
def test_feeling_lucky(client):
# Bang at beginning of query
rv = client.get(f'/{Endpoint.search}?q=!%20wikipedia')
rv = client.get(f'/{Endpoint.search}?q=!%20test')
assert rv._status_code == 303
assert rv.headers.get('Location').startswith('https://www.wikipedia.org')
# Move bang to end of query
rv = client.get(f'/{Endpoint.search}?q=github%20!')
assert rv._status_code == 303
assert rv.headers.get('Location').startswith('https://github.com')
def test_ddg_bang(client):
@ -55,13 +48,6 @@ def test_ddg_bang(client):
assert rv.headers.get('Location').startswith('https://github.com')
def test_custom_bang(client):
# Bang at beginning of query
rv = client.get(f'/{Endpoint.search}?q=!i%20whoogle')
assert rv._status_code == 302
assert rv.headers.get('Location').startswith('search?q=')
def test_config(client):
rv = client.post(f'/{Endpoint.config}', data=demo_config)
assert rv._status_code == 302
