Compare commits


No commits in common. 'master' and '0.6.17' have entirely different histories.

.gitattributes

@ -1,5 +1,4 @@
constants.py ident export-subst
/test export-ignore
/library export-ignore
cps/static/css/libs/* linguist-vendored
cps/static/js/libs/* linguist-vendored

.github/ISSUE_TEMPLATE/bug_report.md

@ -6,23 +6,12 @@ labels: ''
assignees: ''
---
<!-- Please have a look at our [Contributing Guidelines](https://github.com/janeczku/calibre-web/blob/master/CONTRIBUTING.md) -->
## Short Notice from the maintainer
After 6 years of more or less intensive programming on Calibre-Web, I need a break.
The last few months, maintaining Calibre-Web has felt more like work than a hobby. I felt pressured and teased by people to solve "their" problems and merge PRs for "their" Calibre-Web.
I have turned off all notifications from Github/Discord and will now concentrate undisturbed on the development of “my” Calibre-Web over the next few weeks/months.
I will look into the issues and maybe also the PRs from time to time, but don't expect a quick response from me.
Please also have a look at our [Contributing Guidelines](https://github.com/janeczku/calibre-web/blob/master/CONTRIBUTING.md)
**Describe the bug/problem**
A clear and concise description of what the bug is. If you are asking for support, please check our [Wiki](https://github.com/janeczku/calibre-web/wiki) if your question is already answered there.
**To Reproduce**
Steps to reproduce the behavior:
1. Go to '...'
2. Click on '....'
@ -30,19 +19,15 @@ Steps to reproduce the behavior:
4. See error
**Logfile**
Add content of calibre-web.log file or the relevant error, try to reproduce your problem with "debug" log-level to get more output.
**Expected behavior**
A clear and concise description of what you expected to happen.
**Screenshots**
If applicable, add screenshots to help explain your problem.
**Environment (please complete the following information):**
- OS: [e.g. Windows 10/Raspberry Pi OS]
- Python version: [e.g. python2.7]
- Calibre-Web version: [e.g. 0.6.8 or 087c4c59 (git rev-parse --short HEAD)]:
@ -52,4 +37,3 @@ If applicable, add screenshots to help explain your problem.
**Additional context**
Add any other context about the problem here. [e.g. access via reverse proxy, database background sync, special database location]

.github/ISSUE_TEMPLATE/feature_request.md

@ -7,14 +7,7 @@ assignees: ''
---
# Short Notice from the maintainer
After 6 years of more or less intensive programming on Calibre-Web, I need a break.
The last few months, maintaining Calibre-Web has felt more like work than a hobby. I felt pressured and teased by people to solve "their" problems and merge PRs for "their" Calibre-Web.
I have turned off all notifications from Github/Discord and will now concentrate undisturbed on the development of “my” Calibre-Web over the next few weeks/months.
I will look into the issues and maybe also the PRs from time to time, but don't expect a quick response from me.
Please have a look at our [Contributing Guidelines](https://github.com/janeczku/calibre-web/blob/master/CONTRIBUTING.md)
<!-- Please have a look at our [Contributing Guidelines](https://github.com/janeczku/calibre-web/blob/master/CONTRIBUTING.md) -->
**Is your feature request related to a problem? Please describe.**
A clear and concise description of what the problem is. Ex. I'm always frustrated when [...]

.gitignore

@ -23,15 +23,12 @@ vendor/
# calibre-web
*.db
*.log
cps/cache
.idea/
*.bak
*.log.*
.key
settings.yaml
gdrive_credentials
client_secrets.json
gmail.json
/.key

CONTRIBUTING.md

@ -26,9 +26,9 @@ The Calibre-Web documentation is hosted in the Github [Wiki](https://github.com/
Do not open up a GitHub issue if the bug is a **security vulnerability** in Calibre-Web. Instead, please write an email to "ozzie.fernandez.isaacs@googlemail.com".
Ensure the **bug was not already reported** by searching on GitHub under [Issues](https://github.com/janeczku/calibre-web/issues). Please also check if a solution for your problem can be found in the [wiki](https://github.com/janeczku/calibre-web/wiki).
Ensure the ***bug was not already reported** by searching on GitHub under [Issues](https://github.com/janeczku/calibre-web/issues). Please also check if a solution for your problem can be found in the [wiki](https://github.com/janeczku/calibre-web/wiki).
If you're unable to find an **open issue** addressing the problem, open a [new one](https://github.com/janeczku/calibre-web/issues/new/choose). Be sure to include a **title**, a **clear description**, and as much relevant information as possible; the **issue form** helps you provide the right information. Deleting the form and just pasting the stack trace doesn't speed up fixing the problem. Once your issue is resolved, please consider closing it.
If you're unable to find an **open issue** addressing the problem, open a [new one](https://github.com/janeczku/calibre-web/issues/new?assignees=&labels=&template=bug_report.md&title=). Be sure to include a **title**, a **clear description**, and as much relevant information as possible; the **issue form** helps you provide the right information. Deleting the form and just pasting the stack trace doesn't speed up fixing the problem. Once your issue is resolved, please consider closing it.
### **Feature Request**

README.md

@ -1,125 +1,97 @@
# Short Notice from the maintainer
# About
After 6 years of more or less intensive programming on Calibre-Web, I need a break.
The last few months, maintaining Calibre-Web has felt more like work than a hobby. I felt pressured and teased by people to solve "their" problems and merge PRs for "their" Calibre-Web.
I have turned off all notifications from Github/Discord and will now concentrate undisturbed on the development of “my” Calibre-Web over the next few weeks/months.
I will look into the issues and maybe also the PRs from time to time, but don't expect a quick response from me.
Calibre-Web is a web app providing a clean interface for browsing, reading and downloading eBooks using an existing [Calibre](https://calibre-ebook.com) database.
# Calibre-Web
Calibre-Web is a web app that offers a clean and intuitive interface for browsing, reading, and downloading eBooks using a valid [Calibre](https://calibre-ebook.com) database.
[![License](https://img.shields.io/github/license/janeczku/calibre-web?style=flat-square)](https://github.com/janeczku/calibre-web/blob/master/LICENSE)
![Commit Activity](https://img.shields.io/github/commit-activity/w/janeczku/calibre-web?logo=github&style=flat-square&label=commits)
[![All Releases](https://img.shields.io/github/downloads/janeczku/calibre-web/total?logo=github&style=flat-square)](https://github.com/janeczku/calibre-web/releases)
[![GitHub License](https://img.shields.io/github/license/janeczku/calibre-web?style=flat-square)](https://github.com/janeczku/calibre-web/blob/master/LICENSE)
[![GitHub commit activity](https://img.shields.io/github/commit-activity/w/janeczku/calibre-web?logo=github&style=flat-square&label=commits)]()
[![GitHub all releases](https://img.shields.io/github/downloads/janeczku/calibre-web/total?logo=github&style=flat-square)](https://github.com/janeczku/calibre-web/releases)
[![PyPI](https://img.shields.io/pypi/v/calibreweb?logo=pypi&logoColor=fff&style=flat-square)](https://pypi.org/project/calibreweb/)
[![PyPI - Downloads](https://img.shields.io/pypi/dm/calibreweb?logo=pypi&logoColor=fff&style=flat-square)](https://pypi.org/project/calibreweb/)
[![Discord](https://img.shields.io/discord/838810113564344381?label=Discord&logo=discord&style=flat-square)](https://discord.gg/h2VsJ2NEfB)
<details>
<summary><strong>Table of Contents</strong> (click to expand)</summary>
1. [About](#calibre-web)
2. [Features](#features)
3. [Installation](#installation)
- [Installation via pip (recommended)](#installation-via-pip-recommended)
- [Quick start](#quick-start)
- [Requirements](#requirements)
4. [Docker Images](#docker-images)
5. [Contributor Recognition](#contributor-recognition)
6. [Contact](#contact)
7. [Contributing to Calibre-Web](#contributing-to-calibre-web)
</details>
*This software is a fork of [library](https://github.com/mutschler/calibreserver) and licensed under the GPL v3 License.*
![Main screen](https://github.com/janeczku/calibre-web/wiki/images/main_screen.png)
## Features
- Modern and responsive Bootstrap 3 HTML5 interface
- Full graphical setup
- Comprehensive user management with fine-grained per-user permissions
- Bootstrap 3 HTML5 interface
- full graphical setup
- User management with fine-grained per-user permissions
- Admin interface
- Multilingual user interface supporting 20+ languages ([supported languages](https://github.com/janeczku/calibre-web/wiki/Translation-Status))
- OPDS feed for eBook reader apps
- Advanced search and filtering options
- Custom book collection (shelves) creation
- eBook metadata editing and deletion support
- Metadata download from various sources (extensible via plugins)
- eBook conversion through Calibre binaries
- eBook download restriction to logged-in users
- Public user registration support
- Send eBooks to E-Readers with a single click
- Sync Kobo devices with your Calibre library
- In-browser eBook reading support for multiple formats
- Upload new books in various formats, including audio formats
- Calibre Custom Columns support
- Content hiding based on categories and Custom Column content per user
- User Interface in brazilian, czech, dutch, english, finnish, french, german, greek, hungarian, italian, japanese, khmer, korean, polish, russian, simplified and traditional chinese, spanish, swedish, turkish, ukrainian
- OPDS feed for eBook reader apps
- Filter and search by titles, authors, tags, series and language
- Create a custom book collection (shelves)
- Support for editing eBook metadata and deleting eBooks from Calibre library
- Support for converting eBooks through Calibre binaries
- Restrict eBook download to logged-in users
- Support for public user registration
- Send eBooks to Kindle devices with the click of a button
- Sync your Kobo devices through Calibre-Web with your Calibre library
- Support for reading eBooks directly in the browser (.txt, .epub, .pdf, .cbr, .cbt, .cbz, .djvu)
- Upload new books in many formats, including audio formats (.mp3, .m4a, .m4b)
- Support for Calibre Custom Columns
- Ability to hide content based on categories and Custom Column content per user
- Self-update capability
- "Magic Link" login for easy access on eReaders
- LDAP, Google/GitHub OAuth, and proxy authentication support
- "Magic Link" login to make it easy to log on eReaders
- Login via LDAP, google/github oauth and via proxy authentication
## Installation
#### Installation via pip (recommended)
1. Create a virtual environment for Calibre-Web to avoid conflicts with existing Python dependencies
2. Install Calibre-Web via pip: `pip install calibreweb` (or `pip3` depending on your OS/distro)
3. Install optional features via pip as needed, see [this page](https://github.com/janeczku/calibre-web/wiki/Dependencies-in-Calibre-Web-Linux-and-Windows) for details
4. Start Calibre-Web by typing `cps` (a minimal programmatic launcher is sketched at the end of this section)
1. To avoid problems with already installed python dependencies, it's recommended to create a virtual environment for Calibre-Web
2. Install Calibre-Web via pip with the command `pip install calibreweb` (Depending on your OS and or distro the command could also be `pip3`).
3. Optional features can also be installed via pip, please refer to [this page](https://github.com/janeczku/calibre-web/wiki/Dependencies-in-Calibre-Web-Linux-Windows) for details
4. Calibre-Web can be started afterwards by typing `cps` or `python3 -m cps`
*Note: Raspberry Pi OS users may encounter issues during installation. If so, please update pip (`./venv/bin/python3 -m pip install --upgrade pip`) and/or install cargo (`sudo apt install cargo`) before retrying the installation.*
In the Wiki there are also examples for a [manual installation](https://github.com/janeczku/calibre-web/wiki/Manual-installation) and for installation on [Linux Mint](https://github.com/janeczku/calibre-web/wiki/How-To:Install-Calibre-Web-in-Linux-Mint-19-or-20)
Refer to the Wiki for additional installation examples: [manual installation](https://github.com/janeczku/calibre-web/wiki/Manual-installation), [Linux Mint](https://github.com/janeczku/calibre-web/wiki/How-To:-Install-Calibre-Web-in-Linux-Mint-19-or-20), [Cloud Provider](https://github.com/janeczku/calibre-web/wiki/How-To:-Install-Calibre-Web-on-a-Cloud-Provider).
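
As a companion to step 4 above, here is a minimal programmatic launcher. This is only a sketch; it calls the same entry point that the `cps.py` hunk further down in this diff uses, and assumes the pip package is installed and importable:

```python
# Minimal launcher sketch, equivalent to running `cps` after `pip install calibreweb`.
# It reuses the entry point shown in the cps.py hunk later in this diff.
from cps.main import main

if __name__ == '__main__':
    main()  # creates the Flask app, registers the blueprints and starts the web server
```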
## Quick start
## Quick Start
Point your browser to `http://localhost:8083` or `http://localhost:8083/opds` for the OPDS catalog
Set `Location of Calibre database` to the path of the folder where your Calibre library (metadata.db) lives, push "submit" button\
Optionally a Google Drive can be used to host the calibre library [-> Using Google Drive integration](https://github.com/janeczku/calibre-web/wiki/Configuration#using-google-drive-integration)
Go to Login page
1. Open your browser and navigate to `http://localhost:8083` or `http://localhost:8083/opds` for the OPDS catalog (a quick programmatic check is sketched at the end of this section)
2. Log in with the default admin credentials
3. If you don't have a Calibre database, you can use [this database](https://github.com/janeczku/calibre-web/raw/master/library/metadata.db) (move it out of the Calibre-Web folder to prevent overwriting during updates)
4. Set `Location of Calibre database` to the path of the folder containing your Calibre library (metadata.db) and click "Save"
5. Optionally, use Google Drive to host your Calibre library by following the [Google Drive integration guide](https://github.com/janeczku/calibre-web/wiki/G-Drive-Setup#using-google-drive-integration)
6. Configure your Calibre-Web instance via the admin page, referring to the [Basic Configuration](https://github.com/janeczku/calibre-web/wiki/Configuration#basic-configuration) and [UI Configuration](https://github.com/janeczku/calibre-web/wiki/Configuration#ui-configuration) guides
#### Default admin login:
*Username:* admin\
*Password:* admin123
#### Default Admin Login:
- **Username:** admin
- **Password:** admin123
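
To confirm that the quick-start steps above worked, a small sanity check can be scripted. This is a sketch only; it assumes the `requests` package, the default port, the default credentials listed above, and that the OPDS endpoint accepts HTTP Basic authentication:

```python
# Sanity-check sketch for a freshly started instance (assumptions: `requests` is
# installed, the server listens on the default port 8083, and the default admin
# credentials have not been changed yet).
import requests

BASE = "http://localhost:8083"

resp = requests.get(BASE + "/opds", auth=("admin", "admin123"), timeout=10)
resp.raise_for_status()
print("OPDS catalog reachable:", resp.headers.get("Content-Type"))
```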
## Requirements
- Python 3.5+
- [Imagemagick](https://imagemagick.org/script/download.php) for cover extraction from EPUBs (Windows users may need to install [Ghostscript](https://ghostscript.com/releases/gsdnld.html) for PDF cover extraction)
- Optional: [Calibre desktop program](https://calibre-ebook.com/download) for on-the-fly conversion and metadata editing (set "calibre's converter tool" path on the setup page)
- Optional: [Kepubify tool](https://github.com/pgaskin/kepubify/releases/latest) for Kobo device support (place the binary in `/opt/kepubify` on Linux or `C:\Program Files\kepubify` on Windows); example invocations of both optional tools are sketched below
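
Both optional tools above are ordinary command-line programs that Calibre-Web invokes as external processes. The sketch below shows typical invocations; the paths and the Kepubify binary name are assumptions and should be adapted to your installation:

```python
# Sketch: invoking the optional converter tools from the list above via subprocess.
# The paths and binary names below are assumptions; adjust them to your setup.
import subprocess

EBOOK_CONVERT = "/opt/calibre/ebook-convert"   # "calibre's converter tool" on the setup page
KEPUBIFY = "/opt/kepubify/kepubify"            # hypothetical binary name inside /opt/kepubify

def convert_ebook(src: str, dst: str) -> None:
    # ebook-convert derives the output format from the dst extension (e.g. .mobi, .azw3)
    subprocess.run([EBOOK_CONVERT, src, dst], check=True)

def kepubify_ebook(src: str, out_dir: str) -> None:
    # kepubify writes a Kobo-friendly .kepub.epub into the -o/--output directory
    subprocess.run([KEPUBIFY, "-o", out_dir, src], check=True)
```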
## Docker Images
python 3.5+
Pre-built Docker images are available in the following Docker Hub repositories (maintained by the LinuxServer team):
Optionally, to enable on-the-fly conversion from one ebook format to another when using the send-to-kindle feature, or during editing of ebooks metadata:
#### **LinuxServer - x64, aarch64**
- [Docker Hub](https://hub.docker.com/r/linuxserver/calibre-web)
- [GitHub](https://github.com/linuxserver/docker-calibre-web)
- [GitHub - Optional Calibre layer](https://github.com/linuxserver/docker-mods/tree/universal-calibre)
[Download and install](https://calibre-ebook.com/download) the Calibre desktop program for your platform and enter the folder including program name (normally /opt/calibre/ebook-convert, or C:\Program Files\calibre\ebook-convert.exe) in the field "calibre's converter tool" on the setup page.
Include the environment variable `DOCKER_MODS=linuxserver/mods:universal-calibre` in your Docker run/compose file to add the Calibre `ebook-convert` binary (x64 only). Omit this variable for a lightweight image. A Docker SDK equivalent is sketched at the end of this section.
[Download](https://github.com/pgaskin/kepubify/releases/latest) Kepubify tool for your platform and place the binary starting with `kepubify` in Linux: `\opt\kepubify` Windows: `C:\Program Files\kepubify`.
Both the Calibre-Web and Calibre-Mod images are automatically rebuilt on new releases and updates.
## Docker Images
- Set "path to convertertool" to `/usr/bin/ebook-convert`
- Set "path to unrar" to `/usr/bin/unrar`
A pre-built Docker image is available in the following Docker Hub repository (maintained by the LinuxServer team):
## Contributor Recognition
#### **LinuxServer - x64, armhf, aarch64**
+ Docker Hub - [https://hub.docker.com/r/linuxserver/calibre-web](https://hub.docker.com/r/linuxserver/calibre-web)
+ Github - [https://github.com/linuxserver/docker-calibre-web](https://github.com/linuxserver/docker-calibre-web)
+ Github - (Optional Calibre layer) - [https://github.com/linuxserver/docker-calibre-web/tree/calibre](https://github.com/linuxserver/docker-calibre-web/tree/calibre)
We would like to thank all the [contributors](https://github.com/janeczku/calibre-web/graphs/contributors) and maintainers of Calibre-Web for their valuable input and dedication to the project. Your contributions are greatly appreciated.
This image has the option to pull in an extra docker manifest layer to include the Calibre `ebook-convert` binary. Just include the environmental variable `DOCKER_MODS=linuxserver/calibre-web:calibre` in your docker run/docker compose file. **(x64 only)**
If you do not need this functionality then this can be omitted, keeping the image as lightweight as possible.
Both the Calibre-Web and Calibre-Mod images are rebuilt automatically on new releases of Calibre-Web and Calibre respectively, and on updates to any included base image packages on a weekly basis if required.
+ The "path to convertertool" should be set to `/usr/bin/ebook-convert`
+ The "path to unrar" should be set to `/usr/bin/unrar`
## Contact
# Contact
Join us on [Discord](https://discord.gg/h2VsJ2NEfB)
Just reach us out on [Discord](https://discord.gg/h2VsJ2NEfB)
For more information, How To's, and FAQs, please visit the [Wiki](https://github.com/janeczku/calibre-web/wiki)
For further information, How To's and FAQ please check the [Wiki](https://github.com/janeczku/calibre-web/wiki)
## Contributing to Calibre-Web
# Contributing to Calibre-Web
Check out our [Contributing Guidelines](https://github.com/janeczku/calibre-web/blob/master/CONTRIBUTING.md)
Please have a look at our [Contributing Guidelines](https://github.com/janeczku/calibre-web/blob/master/CONTRIBUTING.md)

SECURITY.md

@ -24,29 +24,18 @@ To receive fixes for security vulnerabilities it is required to always upgrade t
| V 0.6.13 | JavaScript could get executed in the shelf title ||
| V 0.6.13 | Login with the old session cookie after logout. Thanks to @ibarrionuevo ||
| V 0.6.14 | CSRF was possible. Thanks to @mik317 and Hagai Wechsler (WhiteSource) |CVE-2021-25965|
| V 0.6.14 | Migrated some routes to POST-requests (CSRF protection). Thanks to @scara31 |CVE-2021-4164|
| V 0.6.15 | Fix for "javascript:" script links in identifier. Thanks to @scara31 |CVE-2021-4170|
| V 0.6.14 | Migrated some routes to POST-requests (CSRF protection). Thanks to @scara31 ||
| V 0.6.15 | Fix for "javascript:" script links in identifier. Thanks to @scara31 ||
| V 0.6.15 | Cross-Site Scripting vulnerability on uploaded cover file names. Thanks to @ibarrionuevo ||
| V 0.6.15 | Creating public shelves is now denied if the user is missing the edit public shelf right. Thanks to @ibarrionuevo ||
| V 0.6.15 | Changed the error message shown when trying to delete a shelf without authorization. Thanks to @ibarrionuevo ||
| V 0.6.16 | JavaScript could get executed on authors page. Thanks to @alicaz |CVE-2022-0352|
| V 0.6.16 | Localhost can no longer be used to upload covers. Thanks to @scara31 |CVE-2022-0339|
| V 0.6.16 | Another case where public shelves could be created without permission is prevented. Thanks to @nhiephon |CVE-2022-0273|
| V 0.6.16 | Retrieving the name of a private shelf is now prevented. Thanks to @nhiephon |CVE-2022-0405|
| V 0.6.17 | The SSRF Protection can no longer be bypassed via an HTTP redirect. Thanks to @416e6e61 |CVE-2022-0767|
| V 0.6.17 | The SSRF Protection can no longer be bypassed via 0.0.0.0 and its IPv6 equivalent. Thanks to @r0hanSH |CVE-2022-0766|
| V 0.6.18 | A possible SQL injection in the user table is prevented. Thanks to Iman Sharafaldin (Forward Security) |CVE-2022-30765|
| V 0.6.18 | The SSRF protection no longer can be bypassed by IPV6/IPV4 embedding. Thanks to @416e6e61 |CVE-2022-0939|
| V 0.6.18 | The SSRF protection no longer can be bypassed to connect to other servers in the local network. Thanks to @michaellrowley |CVE-2022-0990|
| V 0.6.20 | Credentials for emails are now stored encrypted ||
| V 0.6.20 | Login is rate limited ||
| V 0.6.20 | Password strength requirements can be enforced ||
| V 0.6.21 | SMTP server credentials are no longer returned to client ||
| V 0.6.21 | Stored cross-site scripting (XSS) in href attributes that bypassed the filter via a data wrapper is no longer possible ||
| V 0.6.21 | Cross-site scripting (XSS) is no longer possible via pathchooser ||
| V 0.6.21 | Error handling for non-existent ratings, languages, and user-downloaded books was fixed ||
## Statement regarding Log4j (CVE-2021-44228 and related)
| V 0.6.16 | JavaScript could get executed on authors page. Thanks to @alicaz ||
| V 0.6.16 | Localhost can no longer be used to upload covers. Thanks to @scara31 ||
| V 0.6.16 | Another case where public shelves could be created without permission is prevented. Thanks to @nhiephon ||
| V 0.6.17 | The SSRF Protection can no longer be bypassed via an HTTP redirect. Thanks to @416e6e61 ||
| V 0.6.17 | The SSRF Protection can no longer be bypassed via 0.0.0.0 and its IPv6 equivalent. Thanks to @r0hanSH ||
## Statement regarding Log4j (CVE-2021-44228 and related)
Calibre-Web is not affected by bugs related to Log4j: it is a Python program, does not use Java, and does not use the Java logging library Log4j.

babel.cfg

@ -1,4 +1,3 @@
[python: **.py]
# has to be executed with jinja2 >=2.9 to have autoescape enabled automatically
[jinja2: **/templates/**.*ml]
extensions=jinja2.ext.autoescape,jinja2.ext.with_

cps.py

@ -2,7 +2,7 @@
# -*- coding: utf-8 -*-
# This file is part of the Calibre-Web (https://github.com/janeczku/calibre-web)
# Copyright (C) 2022 OzzieIsaacs
# Copyright (C) 2012-2019 OzzieIsaacs
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
@ -16,19 +16,72 @@
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
try:
from gevent import monkey
monkey.patch_all()
except ImportError:
pass
import os
import sys
import os
# Add local path to sys.path, so we can import cps
path = os.path.dirname(os.path.abspath(__file__))
sys.path.insert(0, path)
# Insert local directories into path
sys.path.append(os.path.dirname(os.path.abspath(__file__)))
sys.path.append(os.path.join(os.path.dirname(os.path.abspath(__file__)), 'vendor'))
from cps.main import main
if __name__ == '__main__':
main()
from cps import create_app
from cps import web_server
from cps.opds import opds
from cps.web import web
from cps.jinjia import jinjia
from cps.about import about
from cps.shelf import shelf
from cps.admin import admi
from cps.gdrive import gdrive
from cps.editbooks import editbook
from cps.remotelogin import remotelogin
from cps.search_metadata import meta
from cps.error_handler import init_errorhandler
try:
from cps.kobo import kobo, get_kobo_activated
from cps.kobo_auth import kobo_auth
kobo_available = get_kobo_activated()
except (ImportError, AttributeError): # Catch also error for not installed flask-WTF (missing csrf decorator)
kobo_available = False
try:
from cps.oauth_bb import oauth
oauth_available = True
except ImportError:
oauth_available = False
def main():
app = create_app()
init_errorhandler()
app.register_blueprint(web)
app.register_blueprint(opds)
app.register_blueprint(jinjia)
app.register_blueprint(about)
app.register_blueprint(shelf)
app.register_blueprint(admi)
app.register_blueprint(remotelogin)
app.register_blueprint(meta)
app.register_blueprint(gdrive)
app.register_blueprint(editbook)
if kobo_available:
app.register_blueprint(kobo)
app.register_blueprint(kobo_auth)
if oauth_available:
app.register_blueprint(oauth)
success = web_server.start()
sys.exit(0 if success else 1)
if __name__ == '__main__':
main()

cps/MyLoginManager.py

@ -21,32 +21,15 @@
# along with this program. If not, see <http://www.gnu.org/licenses/>.
from flask_login import LoginManager, confirm_login
from flask import session, current_app
from flask_login.utils import decode_cookie
from flask_login.signals import user_loaded_from_cookie
from flask_login import LoginManager
from flask import session
class MyLoginManager(LoginManager):
def _session_protection_failed(self):
sess = session._get_current_object()
_session = session._get_current_object()
ident = self._session_identifier_generator()
if(sess and not (len(sess) == 1
and sess.get('csrf_token', None))) and ident != sess.get('_id', None):
if(_session and not (len(_session) == 1
and _session.get('csrf_token', None))) and ident != _session.get('_id', None):
return super(). _session_protection_failed()
return False
def _load_user_from_remember_cookie(self, cookie):
user_id = decode_cookie(cookie)
if user_id is not None:
session["_user_id"] = user_id
session["_fresh"] = False
user = None
if self._user_callback:
user = self._user_callback(user_id)
if user is not None:
app = current_app._get_current_object()
user_loaded_from_cookie.send(app, user=user)
# if session was restored from remember me cookie make login valid
confirm_login()
return user
return None
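
For orientation, the subclass above is attached to the Flask application in the same way as a stock `LoginManager`. The condensed sketch below mirrors the wiring shown in the `cps/__init__.py` hunk further down in this diff:

```python
# Condensed wiring sketch, mirroring the cps/__init__.py hunk later in this diff.
from flask import Flask
from cps.MyLoginManager import MyLoginManager

app = Flask(__name__)
lm = MyLoginManager()
lm.login_view = 'web.login'        # unauthenticated requests are redirected here
lm.session_protection = 'strong'   # makes the _session_protection_failed() override above relevant
lm.init_app(app)
```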

cps/__init__.py

@ -25,34 +25,31 @@ import sys
import os
import mimetypes
from flask import Flask
from babel import Locale as LC
from babel import negotiate_locale
from babel.core import UnknownLocaleError
from flask import Flask, request, g
from .MyLoginManager import MyLoginManager
from flask_babel import Babel
from flask_principal import Principal
from . import logger
from .cli import CliParameter
from .constants import CONFIG_DIR
from . import config_sql, logger, cache_buster, cli, ub, db
from .reverseproxy import ReverseProxied
from .server import WebServer
from .dep_check import dependency_check
from .updater import Updater
from .babel import babel, get_locale
from . import config_sql
from . import cache_buster
from . import ub, db
try:
from flask_limiter import Limiter
limiter_present = True
import lxml
lxml_present = True
except ImportError:
limiter_present = False
lxml_present = False
try:
from flask_wtf.csrf import CSRFProtect
wtf_present = True
except ImportError:
wtf_present = False
mimetypes.init()
mimetypes.add_type('application/xhtml+xml', '.xhtml')
mimetypes.add_type('application/epub+zip', '.epub')
@ -64,8 +61,7 @@ mimetypes.add_type('application/x-mobi8-ebook', '.azw3')
mimetypes.add_type('application/x-cbr', '.cbr')
mimetypes.add_type('application/x-cbz', '.cbz')
mimetypes.add_type('application/x-cbt', '.cbt')
mimetypes.add_type('application/x-cb7', '.cb7')
mimetypes.add_type('image/vnd.djv', '.djv')
mimetypes.add_type('image/vnd.djvu', '.djvu')
mimetypes.add_type('application/mpeg', '.mpeg')
mimetypes.add_type('application/mpeg', '.mp3')
mimetypes.add_type('application/mp4', '.m4a')
@ -75,8 +71,6 @@ mimetypes.add_type('application/ogg', '.oga')
mimetypes.add_type('text/css', '.css')
mimetypes.add_type('text/javascript; charset=UTF-8', '.js')
log = logger.create()
app = Flask(__name__)
app.config.update(
SESSION_COOKIE_HTTPONLY=True,
@ -85,128 +79,116 @@ app.config.update(
WTF_CSRF_SSL_STRICT=False
)
lm = MyLoginManager()
cli_param = CliParameter()
config = config_sql.ConfigSQL()
lm = MyLoginManager()
lm.login_view = 'web.login'
lm.anonymous_user = ub.Anonymous
lm.session_protection = 'strong'
if wtf_present:
csrf = CSRFProtect()
csrf.init_app(app)
else:
csrf = None
calibre_db = db.CalibreDB()
ub.init_db(cli.settingspath)
# pylint: disable=no-member
config = config_sql.load_configuration(ub.session)
web_server = WebServer()
updater_thread = Updater()
if limiter_present:
limiter = Limiter(key_func=True, headers_enabled=True, auto_check=False, swallow_errors=False)
else:
limiter = None
babel = Babel()
_BABEL_TRANSLATIONS = set()
def create_app():
if csrf:
csrf.init_app(app)
log = logger.create()
cli_param.init()
ub.init_db(cli_param.settings_path)
# pylint: disable=no-member
encrypt_key, error = config_sql.get_encryption_key(os.path.dirname(cli_param.settings_path))
from . import services
config_sql.load_configuration(ub.session, encrypt_key)
config.init_config(ub.session, encrypt_key, cli_param)
db.CalibreDB.update_config(config)
db.CalibreDB.setup_db(config.config_calibre_dir, cli.settingspath)
if error:
log.error(error)
ub.password_change(cli_param.user_credentials)
calibre_db = db.CalibreDB()
def create_app():
if sys.version_info < (3, 0):
log.info(
'*** Python2 is EOL since end of 2019, this version of Calibre-Web is no longer supporting Python2, '
'please update your installation to Python3 ***')
'*** Python2 is EOL since end of 2019, this version of Calibre-Web is no longer supporting Python2, please update your installation to Python3 ***')
print(
'*** Python2 is EOL since end of 2019, this version of Calibre-Web is no longer supporting Python2, '
'please update your installation to Python3 ***')
'*** Python2 is EOL since end of 2019, this version of Calibre-Web is no longer supporting Python2, please update your installation to Python3 ***')
web_server.stop(True)
sys.exit(5)
lm.login_view = 'web.login'
lm.anonymous_user = ub.Anonymous
lm.session_protection = 'strong' if config.config_session == 1 else "basic"
db.CalibreDB.update_config(config)
db.CalibreDB.setup_db(config.config_calibre_dir, cli_param.settings_path)
calibre_db.init_db()
updater_thread.init_updater(config, web_server)
# Perform dry run of updater and exit afterward
if cli_param.dry_run:
updater_thread.dry_run()
sys.exit(0)
updater_thread.start()
requirements = dependency_check()
for res in requirements:
if res['found'] == "not installed":
message = ('Cannot import {name} module, it is needed to run calibre-web, '
'please install it using "pip install {name}"').format(name=res["name"])
log.info(message)
print("*** " + message + " ***")
web_server.stop(True)
sys.exit(8)
for res in requirements + dependency_check(True):
log.info('*** "{}" version does not meet the requirements. '
'Should: {}, Found: {}, please consider installing required version ***'
.format(res['name'],
res['target'],
res['found']))
if not lxml_present:
log.info('*** "lxml" is needed for calibre-web to run. Please install it using pip: "pip install lxml" ***')
print('*** "lxml" is needed for calibre-web to run. Please install it using pip: "pip install lxml" ***')
web_server.stop(True)
sys.exit(6)
if not wtf_present:
log.info('*** "flask-WTF" is needed for calibre-web to run. Please install it using pip: "pip install flask-WTF" ***')
print('*** "flask-WTF" is needed for calibre-web to run. Please install it using pip: "pip install flask-WTF" ***')
web_server.stop(True)
sys.exit(7)
for res in dependency_check() + dependency_check(True):
log.info('*** "{}" version does not fit the requirements. Should: {}, Found: {}, please consider installing required version ***'
.format(res['name'],
res['target'],
res['found']))
app.wsgi_app = ReverseProxied(app.wsgi_app)
if os.environ.get('FLASK_DEBUG'):
cache_buster.init_cache_busting(app)
log.info('Starting Calibre Web...')
Principal(app)
lm.init_app(app)
app.secret_key = os.getenv('SECRET_KEY', config_sql.get_flask_session_key(ub.session))
web_server.init_app(app, config)
if hasattr(babel, "localeselector"):
babel.init_app(app)
babel.localeselector(get_locale)
else:
babel.init_app(app, locale_selector=get_locale)
from . import services
babel.init_app(app)
_BABEL_TRANSLATIONS.update(str(item) for item in babel.list_translations())
_BABEL_TRANSLATIONS.add('en')
if services.ldap:
services.ldap.init_app(app, config)
if services.goodreads_support:
services.goodreads_support.connect(config.config_goodreads_api_key,
config.config_goodreads_api_secret,
config.config_use_goodreads)
config.store_calibre_uuid(calibre_db, db.Library_Id)
# Configure rate limiter
# https://limits.readthedocs.io/en/stable/storage.html
app.config.update(RATELIMIT_ENABLED=config.config_ratelimiter)
if config.config_limiter_uri != "" and not cli_param.memory_backend:
app.config.update(RATELIMIT_STORAGE_URI=config.config_limiter_uri)
if config.config_limiter_options != "":
app.config.update(RATELIMIT_STORAGE_OPTIONS=config.config_limiter_options)
try:
limiter.init_app(app)
except Exception as e:
log.error('Wrong Flask Limiter configuration, falling back to default: {}'.format(e))
app.config.update(RATELIMIT_STORAGE_URI=None)
limiter.init_app(app)
# Register scheduled tasks
from .schedule import register_scheduled_tasks, register_startup_tasks
register_scheduled_tasks(config.schedule_reconnect)
register_startup_tasks()
return app
@babel.localeselector
def get_locale():
# if a user is logged in, use the locale from the user settings
user = getattr(g, 'user', None)
if user is not None and hasattr(user, "locale"):
if user.name != 'Guest': # if the account is the guest account bypass the config lang settings
return user.locale
preferred = list()
if request.accept_languages:
for x in request.accept_languages.values():
try:
preferred.append(str(LC.parse(x.replace('-', '_'))))
except (UnknownLocaleError, ValueError) as e:
log.debug('Could not parse locale "%s": %s', x, e)
return negotiate_locale(preferred or ['en'], _BABEL_TRANSLATIONS)
@babel.timezoneselector
def get_timezone():
user = getattr(g, 'user', None)
return user.timezone if user else None
from .updater import Updater
updater_thread = Updater()
# Perform dry run of updater and exit afterwards
if cli.dry_run:
updater_thread.dry_run()
sys.exit(0)
updater_thread.start()

cps/about.py

@ -25,6 +25,7 @@ import platform
import sqlite3
from collections import OrderedDict
import werkzeug
import flask
import flask_login
import jinja2
@ -36,40 +37,41 @@ from .render_template import render_title_template
about = flask.Blueprint('about', __name__)
modules = dict()
req = dep_check.load_dependencies(False)
opt = dep_check.load_dependencies(True)
ret = dict()
req = dep_check.load_dependencys(False)
opt = dep_check.load_dependencys(True)
for i in (req + opt):
modules[i[1]] = i[0]
modules['Jinja2'] = jinja2.__version__
modules['pySqlite'] = sqlite3.version
modules['SQLite'] = sqlite3.sqlite_version
sorted_modules = OrderedDict((sorted(modules.items(), key=lambda x: x[0].casefold())))
ret[i[1]] = i[0]
if constants.NIGHTLY_VERSION[0] == "$Format:%H$":
calibre_web_version = constants.STABLE_VERSION['version']
else:
calibre_web_version = (constants.STABLE_VERSION['version'] + ' - '
+ constants.NIGHTLY_VERSION[0].replace('%', '%%') + ' - '
+ constants.NIGHTLY_VERSION[1].replace('%', '%%'))
def collect_stats():
if constants.NIGHTLY_VERSION[0] == "$Format:%H$":
calibre_web_version = constants.STABLE_VERSION['version'].replace("b", " Beta")
else:
calibre_web_version = (constants.STABLE_VERSION['version'].replace("b", " Beta") + ' - '
+ constants.NIGHTLY_VERSION[0].replace('%', '%%') + ' - '
+ constants.NIGHTLY_VERSION[1].replace('%', '%%'))
if getattr(sys, 'frozen', False):
calibre_web_version += " - Exe-Version"
elif constants.HOME_CONFIG:
calibre_web_version += " - pyPi"
_VERSIONS = OrderedDict(
Platform='{0[0]} {0[2]} {0[3]} {0[4]} {0[5]}'.format(platform.uname()),
Python=sys.version,
Calibre_Web=calibre_web_version,
Werkzeug=werkzeug.__version__,
Jinja2=jinja2.__version__,
pySqlite=sqlite3.version,
SQLite=sqlite3.sqlite_version,
)
_VERSIONS.update(ret)
_VERSIONS.update(uploader.get_versions(False))
if getattr(sys, 'frozen', False):
calibre_web_version += " - Exe-Version"
elif constants.HOME_CONFIG:
calibre_web_version += " - pyPi"
_VERSIONS = {'Calibre Web': calibre_web_version}
_VERSIONS.update(OrderedDict(
Python=sys.version,
Platform='{0[0]} {0[2]} {0[3]} {0[4]} {0[5]}'.format(platform.uname()),
))
_VERSIONS.update(uploader.get_magick_version())
_VERSIONS['Unrar'] = converter.get_unrar_version()
_VERSIONS['Ebook converter'] = converter.get_calibre_version()
_VERSIONS['Kepubify'] = converter.get_kepubify_version()
_VERSIONS.update(sorted_modules)
def collect_stats():
_VERSIONS['ebook converter'] = _(converter.get_calibre_version())
_VERSIONS['unrar'] = _(converter.get_unrar_version())
_VERSIONS['kepubify'] = _(converter.get_kepubify_version())
return _VERSIONS
@ -78,7 +80,7 @@ def collect_stats():
def stats():
counter = calibre_db.session.query(db.Books).count()
authors = calibre_db.session.query(db.Authors).count()
categories = calibre_db.session.query(db.Tags).count()
categorys = calibre_db.session.query(db.Tags).count()
series = calibre_db.session.query(db.Series).count()
return render_title_template('stats.html', bookcounter=counter, authorcounter=authors, versions=collect_stats(),
categorycounter=categories, seriecounter=series, title=_("Statistics"), page="stat")
categorycounter=categorys, seriecounter=series, title=_(u"Statistics"), page="stat")

File diff suppressed because it is too large.

cps/babel.py

@ -1,40 +0,0 @@
from babel import negotiate_locale
from flask_babel import Babel, Locale
from babel.core import UnknownLocaleError
from flask import request
from flask_login import current_user
from . import logger
log = logger.create()
babel = Babel()
def get_locale():
# if a user is logged in, use the locale from the user settings
if current_user is not None and hasattr(current_user, "locale"):
# if the account is the guest account bypass the config lang settings
if current_user.name != 'Guest':
return current_user.locale
preferred = list()
if request.accept_languages:
for x in request.accept_languages.values():
try:
preferred.append(str(Locale.parse(x.replace('-', '_'))))
except (UnknownLocaleError, ValueError) as e:
log.debug('Could not parse locale "%s": %s', x, e)
return negotiate_locale(preferred or ['en'], get_available_translations())
def get_user_locale_language(user_language):
return Locale.parse(user_language).get_language_name(get_locale())
def get_available_locale():
return [Locale('en')] + babel.list_translations()
def get_available_translations():
return set(str(item) for item in get_available_locale())

cps/cache_buster.py

@ -47,23 +47,20 @@ def init_cache_busting(app):
for filename in filenames:
# compute version component
rooted_filename = os.path.join(dirpath, filename)
try:
with open(rooted_filename, 'rb') as f:
file_hash = hashlib.md5(f.read()).hexdigest()[:7] # nosec
# save version to tables
file_path = rooted_filename.replace(static_folder, "")
file_path = file_path.replace("\\", "/") # Convert Windows path to web path
hash_table[file_path] = file_hash
except PermissionError:
log.error("No permission to access {} file.".format(rooted_filename))
with open(rooted_filename, 'rb') as f:
file_hash = hashlib.md5(f.read()).hexdigest()[:7] # nosec
# save version to tables
file_path = rooted_filename.replace(static_folder, "")
file_path = file_path.replace("\\", "/") # Convert Windows path to web path
hash_table[file_path] = file_hash
log.debug('Finished computing cache-busting values')
def bust_filename(file_name):
return hash_table.get(file_name, "")
def bust_filename(filename):
return hash_table.get(filename, "")
def unbust_filename(file_name):
return file_name.split("?", 1)[0]
def unbust_filename(filename):
return filename.split("?", 1)[0]
@app.url_defaults
# pylint: disable=unused-variable

cps/clean_html.py

@ -1,53 +0,0 @@
# -*- coding: utf-8 -*-
# This file is part of the Calibre-Web (https://github.com/janeczku/calibre-web)
# Copyright (C) 2018-2019 OzzieIsaacs
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
from . import logger
from lxml.etree import ParserError
try:
# at least bleach 6.0 is needed -> incomplatible change from list arguments to set arguments
from bleach import clean_text as clean_html
BLEACH = True
except ImportError:
try:
BLEACH = False
from nh3 import clean as clean_html
except ImportError:
try:
BLEACH = False
from lxml.html.clean import clean_html
except ImportError:
clean_html = None
log = logger.create()
def clean_string(unsafe_text, book_id=0):
try:
if BLEACH:
safe_text = clean_html(unsafe_text, tags=set(), attributes=set())
else:
safe_text = clean_html(unsafe_text)
except ParserError as e:
log.error("Comments of book {} are corrupted: {}".format(book_id, e))
safe_text = ""
except TypeError as e:
log.error("Comments can't be parsed, maybe 'lxml' is too new, try installing 'bleach': {}".format(e))
safe_text = ""
return safe_text
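
For context, here is a minimal exercise of the sanitizer above. This sketch assumes the side of this diff where `cps/clean_html.py` exists and that at least one of the fallback backends (bleach >= 6.0, nh3, or lxml) is installed:

```python
# Usage sketch for clean_string() from the module above. It needs one of the
# sanitizer backends (bleach >= 6.0, nh3, or lxml.html.clean) to be importable.
from cps.clean_html import clean_string

unsafe = '<p onclick="alert(1)">A fine book<script>document.cookie</script></p>'
print(clean_string(unsafe, book_id=42))  # sanitized comment text with scripts/handlers removed
```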

cps/cli.py

@ -24,112 +24,92 @@ import socket
from .constants import CONFIG_DIR as _CONFIG_DIR
from .constants import STABLE_VERSION as _STABLE_VERSION
from .constants import NIGHTLY_VERSION as _NIGHTLY_VERSION
from .constants import DEFAULT_SETTINGS_FILE, DEFAULT_GDRIVE_FILE
def version_info():
if _NIGHTLY_VERSION[1].startswith('$Format'):
return "Calibre-Web version: %s - unknown git-clone" % _STABLE_VERSION['version'].replace("b", " Beta")
return "Calibre-Web version: %s -%s" % (_STABLE_VERSION['version'].replace("b", " Beta"), _NIGHTLY_VERSION[1])
class CliParameter(object):
def init(self):
self.arg_parser()
def arg_parser(self):
parser = argparse.ArgumentParser(description='Calibre Web is a web app providing '
'a interface for browsing, reading and downloading eBooks\n',
prog='cps.py')
parser.add_argument('-p', metavar='path', help='path and name to settings db, e.g. /opt/cw.db')
parser.add_argument('-g', metavar='path', help='path and name to gdrive db, e.g. /opt/gd.db')
parser.add_argument('-c', metavar='path', help='path and name to SSL certfile, e.g. /opt/test.cert, '
'works only in combination with keyfile')
parser.add_argument('-k', metavar='path', help='path and name to SSL keyfile, e.g. /opt/test.key, '
'works only in combination with certfile')
parser.add_argument('-o', metavar='path', help='path and name Calibre-Web logfile')
parser.add_argument('-v', '--version', action='version', help='Shows version number and exits Calibre-Web',
version=version_info())
parser.add_argument('-i', metavar='ip-address', help='Server IP-Address to listen')
parser.add_argument('-m', action='store_true', help='Use Memory-backend as limiter backend, use this parameter in case of miss configured backend')
parser.add_argument('-s', metavar='user:pass',
help='Sets specific username to new password and exits Calibre-Web')
parser.add_argument('-f', action='store_true', help='Flag is depreciated and will be removed in next version')
parser.add_argument('-l', action='store_true', help='Allow loading covers from localhost')
parser.add_argument('-d', action='store_true', help='Dry run of updater to check file permissions in advance '
'and exits Calibre-Web')
parser.add_argument('-r', action='store_true', help='Enable public database reconnect route under /reconnect')
args = parser.parse_args()
self.logpath = args.o or ""
self.settings_path = args.p or os.path.join(_CONFIG_DIR, DEFAULT_SETTINGS_FILE)
self.gd_path = args.g or os.path.join(_CONFIG_DIR, DEFAULT_GDRIVE_FILE)
if os.path.isdir(self.settings_path):
self.settings_path = os.path.join(self.settings_path, DEFAULT_SETTINGS_FILE)
if os.path.isdir(self.gd_path):
self.gd_path = os.path.join(self.gd_path, DEFAULT_GDRIVE_FILE)
# handle and check parameter for ssl encryption
self.certfilepath = None
self.keyfilepath = None
if args.c:
if os.path.isfile(args.c):
self.certfilepath = args.c
return "Calibre-Web version: %s - unkown git-clone" % _STABLE_VERSION['version']
return "Calibre-Web version: %s -%s" % (_STABLE_VERSION['version'], _NIGHTLY_VERSION[1])
parser = argparse.ArgumentParser(description='Calibre Web is a web app'
' providing a interface for browsing, reading and downloading eBooks\n', prog='cps.py')
parser.add_argument('-p', metavar='path', help='path and name to settings db, e.g. /opt/cw.db')
parser.add_argument('-g', metavar='path', help='path and name to gdrive db, e.g. /opt/gd.db')
parser.add_argument('-c', metavar='path',
help='path and name to SSL certfile, e.g. /opt/test.cert, works only in combination with keyfile')
parser.add_argument('-k', metavar='path',
help='path and name to SSL keyfile, e.g. /opt/test.key, works only in combination with certfile')
parser.add_argument('-v', '--version', action='version', help='Shows version number and exits Calibre-Web',
version=version_info())
parser.add_argument('-i', metavar='ip-address', help='Server IP-Address to listen')
parser.add_argument('-s', metavar='user:pass', help='Sets specific username to new password and exits Calibre-Web')
parser.add_argument('-f', action='store_true', help='Flag is depreciated and will be removed in next version')
parser.add_argument('-l', action='store_true', help='Allow loading covers from localhost')
parser.add_argument('-d', action='store_true', help='Dry run of updater to check file permissions in advance '
'and exits Calibre-Web')
parser.add_argument('-r', action='store_true', help='Enable public database reconnect route under /reconnect')
args = parser.parse_args()
settingspath = args.p or os.path.join(_CONFIG_DIR, "app.db")
gdpath = args.g or os.path.join(_CONFIG_DIR, "gdrive.db")
# handle and check parameter for ssl encryption
certfilepath = None
keyfilepath = None
if args.c:
if os.path.isfile(args.c):
certfilepath = args.c
else:
print("Certfile path is invalid. Exiting...")
sys.exit(1)
if args.c == "":
certfilepath = ""
if args.k:
if os.path.isfile(args.k):
keyfilepath = args.k
else:
print("Keyfile path is invalid. Exiting...")
sys.exit(1)
if (args.k and not args.c) or (not args.k and args.c):
print("Certfile and Keyfile have to be used together. Exiting...")
sys.exit(1)
if args.k == "":
keyfilepath = ""
# dry run updater
dry_run = args.d or None
# load covers from localhost
allow_localhost = args.l or None
# handle and check ip address argument
ip_address = args.i or None
if ip_address:
try:
# try to parse the given ip address with socket
if hasattr(socket, 'inet_pton'):
if ':' in ip_address:
socket.inet_pton(socket.AF_INET6, ip_address)
else:
print("Certfile path is invalid. Exiting...")
sys.exit(1)
socket.inet_pton(socket.AF_INET, ip_address)
else:
# on windows python < 3.4, inet_pton is not available
# inet_atom only handles IPv4 addresses
socket.inet_aton(ip_address)
except socket.error as err:
print(ip_address, ':', err)
sys.exit(1)
# handle and check user password argument
user_credentials = args.s or None
if user_credentials and ":" not in user_credentials:
print("No valid 'username:password' format")
sys.exit(3)
if args.f:
print("Warning: -f flag is depreciated and will be removed in next version")
if args.c == "":
self.certfilepath = ""
if args.k:
if os.path.isfile(args.k):
self.keyfilepath = args.k
else:
print("Keyfile path is invalid. Exiting...")
sys.exit(1)
if (args.k and not args.c) or (not args.k and args.c):
print("Certfile and Keyfile have to be used together. Exiting...")
sys.exit(1)
if args.k == "":
self.keyfilepath = ""
# overwrite limiter backend
self.memory_backend = args.m or None
# dry run updater
self.dry_run = args.d or None
# enable reconnect endpoint for docker database reconnect
self.reconnect_enable = args.r or os.environ.get("CALIBRE_RECONNECT", None)
# load covers from localhost
self.allow_localhost = args.l or os.environ.get("CALIBRE_LOCALHOST", None)
# handle and check ip address argument
self.ip_address = args.i or None
if self.ip_address:
try:
# try to parse the given ip address with socket
if hasattr(socket, 'inet_pton'):
if ':' in self.ip_address:
socket.inet_pton(socket.AF_INET6, self.ip_address)
else:
socket.inet_pton(socket.AF_INET, self.ip_address)
else:
# on Windows python < 3.4, inet_pton is not available
# inet_atom only handles IPv4 addresses
socket.inet_aton(self.ip_address)
except socket.error as err:
print(self.ip_address, ':', err)
sys.exit(1)
# handle and check user password argument
self.user_credentials = args.s or None
if self.user_credentials and ":" not in self.user_credentials:
print("No valid 'username:password' format")
sys.exit(3)
if args.f:
print("Warning: -f flag is depreciated and will be removed in next version")

cps/comic.py

@ -1,7 +1,7 @@
# -*- coding: utf-8 -*-
# This file is part of the Calibre-Web (https://github.com/janeczku/calibre-web)
# Copyright (C) 2018-2022 OzzieIsaacs
# Copyright (C) 2018 OzzieIsaacs
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
@ -18,16 +18,19 @@
import os
from . import logger, isoLanguages, cover
from . import logger, isoLanguages
from .constants import BookMeta
log = logger.create()
try:
from wand.image import Image
use_IM = True
except (ImportError, RuntimeError) as e:
use_IM = False
log = logger.create()
try:
from comicapi.comicarchive import ComicArchive, MetaDataStyle
@ -36,12 +39,6 @@ try:
from comicapi import __version__ as comic_version
except ImportError:
comic_version = ''
try:
from comicapi.comicarchive import load_archive_plugins
import comicapi.utils
comicapi.utils.add_rar_paths()
except ImportError:
load_archive_plugins = None
except (ImportError, LookupError) as e:
log.debug('Cannot import comicapi, extracting comic metadata will not work: %s', e)
import zipfile
@ -52,16 +49,31 @@ except (ImportError, LookupError) as e:
except (ImportError, SyntaxError) as e:
log.debug('Cannot import rarfile, extracting cover files from rar files will not work: %s', e)
use_rarfile = False
try:
import py7zr
use_7zip = True
except (ImportError, SyntaxError) as e:
log.debug('Cannot import py7zr, extracting cover files from CB7 files will not work: %s', e)
use_7zip = False
use_comic_meta = False
NO_JPEG_EXTENSIONS = ['.png', '.webp', '.bmp']
COVER_EXTENSIONS = ['.png', '.webp', '.bmp', '.jpg', '.jpeg']
def _cover_processing(tmp_file_name, img, extension):
tmp_cover_name = os.path.join(os.path.dirname(tmp_file_name), 'cover.jpg')
if extension in NO_JPEG_EXTENSIONS:
if use_IM:
with Image(blob=img) as imgc:
imgc.format = 'jpeg'
imgc.transform_colorspace('rgb')
imgc.save(filename=tmp_cover_name)
return tmp_cover_name
else:
return None
if img:
with open(tmp_cover_name, 'wb') as f:
f.write(img)
return tmp_cover_name
else:
return None
def _extract_cover_from_archive(original_file_extension, tmp_file_name, rar_executable):
def _extract_Cover_from_archive(original_file_extension, tmp_file_name, rarExecutable):
cover_data = extension = None
if original_file_extension.upper() == '.CBZ':
cf = zipfile.ZipFile(tmp_file_name)
@ -69,7 +81,7 @@ def _extract_cover_from_archive(original_file_extension, tmp_file_name, rar_exec
ext = os.path.splitext(name)
if len(ext) > 1:
extension = ext[1].lower()
if extension in cover.COVER_EXTENSIONS:
if extension in COVER_EXTENSIONS:
cover_data = cf.read(name)
break
elif original_file_extension.upper() == '.CBT':
@ -78,111 +90,81 @@ def _extract_cover_from_archive(original_file_extension, tmp_file_name, rar_exec
ext = os.path.splitext(name)
if len(ext) > 1:
extension = ext[1].lower()
if extension in cover.COVER_EXTENSIONS:
if extension in COVER_EXTENSIONS:
cover_data = cf.extractfile(name).read()
break
elif original_file_extension.upper() == '.CBR' and use_rarfile:
try:
rarfile.UNRAR_TOOL = rar_executable
rarfile.UNRAR_TOOL = rarExecutable
cf = rarfile.RarFile(tmp_file_name)
for name in cf.namelist():
ext = os.path.splitext(name)
if len(ext) > 1:
extension = ext[1].lower()
if extension in cover.COVER_EXTENSIONS:
cover_data = cf.read([name])
if extension in COVER_EXTENSIONS:
cover_data = cf.read(name)
break
except Exception as ex:
log.error('Rarfile failed with error: {}'.format(ex))
elif original_file_extension.upper() == '.CB7' and use_7zip:
cf = py7zr.SevenZipFile(tmp_file_name)
for name in cf.getnames():
ext = os.path.splitext(name)
if len(ext) > 1:
extension = ext[1].lower()
if extension in cover.COVER_EXTENSIONS:
try:
cover_data = cf.read([name])[name].read()
except (py7zr.Bad7zFile, OSError) as ex:
log.error('7Zip file failed with error: {}'.format(ex))
break
log.debug('Rarfile failed with error: %s', ex)
return cover_data, extension
def _extract_cover(tmp_file_name, original_file_extension, rar_executable):
def _extractCover(tmp_file_name, original_file_extension, rarExecutable):
cover_data = extension = None
if use_comic_meta:
try:
archive = ComicArchive(tmp_file_name, rar_exe_path=rar_executable)
except TypeError:
archive = ComicArchive(tmp_file_name)
name_list = archive.getPageNameList if hasattr(archive, "getPageNameList") else archive.get_page_name_list
for index, name in enumerate(name_list()):
archive = ComicArchive(tmp_file_name, rar_exe_path=rarExecutable)
for index, name in enumerate(archive.getPageNameList()):
ext = os.path.splitext(name)
if len(ext) > 1:
extension = ext[1].lower()
if extension in cover.COVER_EXTENSIONS:
get_page = archive.getPage if hasattr(archive, "getPageNameList") else archive.get_page
cover_data = get_page(index)
if extension in COVER_EXTENSIONS:
cover_data = archive.getPage(index)
break
else:
cover_data, extension = _extract_cover_from_archive(original_file_extension, tmp_file_name, rar_executable)
return cover.cover_processing(tmp_file_name, cover_data, extension)
cover_data, extension = _extract_Cover_from_archive(original_file_extension, tmp_file_name, rarExecutable)
return _cover_processing(tmp_file_name, cover_data, extension)
def get_comic_info(tmp_file_path, original_file_name, original_file_extension, rar_executable):
def get_comic_info(tmp_file_path, original_file_name, original_file_extension, rarExecutable):
if use_comic_meta:
try:
archive = ComicArchive(tmp_file_path, rar_exe_path=rar_executable)
except TypeError:
load_archive_plugins(force=True, rar=rar_executable)
archive = ComicArchive(tmp_file_path)
if hasattr(archive, "seemsToBeAComicArchive"):
seems_archive = archive.seemsToBeAComicArchive
else:
seems_archive = archive.seems_to_be_a_comic_archive
if seems_archive():
has_metadata = archive.hasMetadata if hasattr(archive, "hasMetadata") else archive.has_metadata
if has_metadata(MetaDataStyle.CIX):
archive = ComicArchive(tmp_file_path, rar_exe_path=rarExecutable)
if archive.seemsToBeAComicArchive():
if archive.hasMetadata(MetaDataStyle.CIX):
style = MetaDataStyle.CIX
elif has_metadata(MetaDataStyle.CBI):
elif archive.hasMetadata(MetaDataStyle.CBI):
style = MetaDataStyle.CBI
else:
style = None
read_metadata = archive.readMetadata if hasattr(archive, "readMetadata") else archive.read_metadata
loaded_metadata = read_metadata(style)
# if style is not None:
loadedMetadata = archive.readMetadata(style)
lang = loaded_metadata.language or ""
loaded_metadata.language = isoLanguages.get_lang3(lang)
lang = loadedMetadata.language or ""
loadedMetadata.language = isoLanguages.get_lang3(lang)
return BookMeta(
file_path=tmp_file_path,
extension=original_file_extension,
title=loaded_metadata.title or original_file_name,
title=loadedMetadata.title or original_file_name,
author=" & ".join([credit["person"]
for credit in loaded_metadata.credits if credit["role"] == "Writer"]) or 'Unknown',
cover=_extract_cover(tmp_file_path, original_file_extension, rar_executable),
description=loaded_metadata.comments or "",
for credit in loadedMetadata.credits if credit["role"] == "Writer"]) or u'Unknown',
cover=_extractCover(tmp_file_path, original_file_extension, rarExecutable),
description=loadedMetadata.comments or "",
tags="",
series=loaded_metadata.series or "",
series_id=loaded_metadata.issue or "",
languages=loaded_metadata.language,
publisher="",
pubdate="",
identifiers=[])
series=loadedMetadata.series or "",
series_id=loadedMetadata.issue or "",
languages=loadedMetadata.language,
publisher="")
return BookMeta(
file_path=tmp_file_path,
extension=original_file_extension,
title=original_file_name,
author='Unknown',
cover=_extract_cover(tmp_file_path, original_file_extension, rar_executable),
author=u'Unknown',
cover=_extractCover(tmp_file_path, original_file_extension, rarExecutable),
description="",
tags="",
series="",
series_id="",
languages="",
publisher="",
pubdate="",
identifiers=[])
publisher="")

cps/config_sql.py

@ -23,24 +23,18 @@ import json
from sqlalchemy import Column, String, Integer, SmallInteger, Boolean, BLOB, JSON
from sqlalchemy.exc import OperationalError
from sqlalchemy.sql.expression import text
from sqlalchemy import exists
from cryptography.fernet import Fernet
import cryptography.exceptions
from base64 import urlsafe_b64decode
try:
# Compatibility with sqlalchemy 2.0
from sqlalchemy.orm import declarative_base
except ImportError:
from sqlalchemy.ext.declarative import declarative_base
from . import constants, logger
from .subproc_wrapper import process_wait
from . import constants, cli, logger
log = logger.create()
_Base = declarative_base()
class _Flask_Settings(_Base):
__tablename__ = 'flask_settings'
@ -61,8 +55,7 @@ class _Settings(_Base):
mail_port = Column(Integer, default=25)
mail_use_ssl = Column(SmallInteger, default=0)
mail_login = Column(String, default='mail@example.com')
mail_password_e = Column(String)
mail_password = Column(String)
mail_password = Column(String, default='mypassword')
mail_from = Column(String, default='automailer <mail@example.com>')
mail_size = Column(Integer, default=25*1024*1024)
mail_server_type = Column(SmallInteger, default=0)
@ -70,25 +63,24 @@ class _Settings(_Base):
config_calibre_dir = Column(String)
config_calibre_uuid = Column(String)
config_calibre_split = Column(Boolean, default=False)
config_calibre_split_dir = Column(String)
config_port = Column(Integer, default=constants.DEFAULT_PORT)
config_external_port = Column(Integer, default=constants.DEFAULT_PORT)
config_certfile = Column(String)
config_keyfile = Column(String)
config_trustedhosts = Column(String, default='')
config_calibre_web_title = Column(String, default='Calibre-Web')
config_trustedhosts = Column(String,default='')
config_calibre_web_title = Column(String, default=u'Calibre-Web')
config_books_per_page = Column(Integer, default=60)
config_random_books = Column(Integer, default=4)
config_authors_max = Column(Integer, default=0)
config_read_column = Column(Integer, default=0)
config_title_regex = Column(String, default=r'^(A|The|An|Der|Die|Das|Den|Ein|Eine|Einen|Dem|Des|Einem|Eines|Le|La|Les|L\'|Un|Une)\s+')
config_title_regex = Column(String, default=r'^(A|The|An|Der|Die|Das|Den|Ein|Eine|Einen|Dem|Des|Einem|Eines)\s+')
config_mature_content_tags = Column(String, default='')
config_theme = Column(Integer, default=0)
config_log_level = Column(SmallInteger, default=logger.DEFAULT_LOG_LEVEL)
config_logfile = Column(String, default=logger.DEFAULT_LOG_FILE)
config_logfile = Column(String)
config_access_log = Column(SmallInteger, default=0)
config_access_logfile = Column(String, default=logger.DEFAULT_ACCESS_LOG)
config_access_logfile = Column(String)
config_uploading = Column(SmallInteger, default=0)
config_anonbrowse = Column(SmallInteger, default=0)
@ -114,6 +106,7 @@ class _Settings(_Base):
config_use_goodreads = Column(Boolean, default=False)
config_goodreads_api_key = Column(String)
config_goodreads_api_secret = Column(String)
config_register_email = Column(Boolean, default=False)
config_login_type = Column(Integer, default=0)
@ -123,15 +116,14 @@ class _Settings(_Base):
config_ldap_port = Column(SmallInteger, default=389)
config_ldap_authentication = Column(SmallInteger, default=constants.LDAP_AUTH_SIMPLE)
config_ldap_serv_username = Column(String, default='cn=admin,dc=example,dc=org')
config_ldap_serv_password_e = Column(String)
config_ldap_serv_password = Column(String)
config_ldap_serv_password = Column(String, default="")
config_ldap_encryption = Column(SmallInteger, default=0)
config_ldap_cacert_path = Column(String, default="")
config_ldap_cert_path = Column(String, default="")
config_ldap_key_path = Column(String, default="")
config_ldap_dn = Column(String, default='dc=example,dc=org')
config_ldap_user_object = Column(String, default='uid=%s')
config_ldap_member_user_object = Column(String, default='')
config_ldap_member_user_object = Column(String, default='') #
config_ldap_openldap = Column(Boolean, default=True)
config_ldap_group_object_filter = Column(String, default='(&(objectclass=posixGroup)(cn=%s))')
config_ldap_group_members_field = Column(String, default='memberUid')
@ -139,64 +131,37 @@ class _Settings(_Base):
config_kepubifypath = Column(String, default=None)
config_converterpath = Column(String, default=None)
config_binariesdir = Column(String, default=None)
config_calibre = Column(String)
config_rarfile_location = Column(String, default=None)
config_upload_formats = Column(String, default=','.join(constants.EXTENSIONS_UPLOAD))
config_unicode_filename = Column(Boolean, default=False)
config_embed_metadata = Column(Boolean, default=True)
config_unicode_filename =Column(Boolean, default=False)
config_updatechannel = Column(Integer, default=constants.UPDATE_STABLE)
config_reverse_proxy_login_header_name = Column(String)
config_allow_reverse_proxy_header_login = Column(Boolean, default=False)
schedule_start_time = Column(Integer, default=4)
schedule_duration = Column(Integer, default=10)
schedule_generate_book_covers = Column(Boolean, default=False)
schedule_generate_series_covers = Column(Boolean, default=False)
schedule_reconnect = Column(Boolean, default=False)
schedule_metadata_backup = Column(Boolean, default=False)
config_password_policy = Column(Boolean, default=True)
config_password_min_length = Column(Integer, default=8)
config_password_number = Column(Boolean, default=True)
config_password_lower = Column(Boolean, default=True)
config_password_upper = Column(Boolean, default=True)
config_password_character = Column(Boolean, default=True)
config_password_special = Column(Boolean, default=True)
config_session = Column(Integer, default=1)
config_ratelimiter = Column(Boolean, default=True)
config_limiter_uri = Column(String, default="")
config_limiter_options = Column(String, default="")
def __repr__(self):
return self.__class__.__name__
# Class holds all application specific settings in calibre-web
class ConfigSQL(object):
class _ConfigSQL(object):
# pylint: disable=no-member
def __init__(self):
self.__dict__["dirty"] = list()
def init_config(self, session, secret_key, cli):
def __init__(self, session):
self._session = session
self._settings = None
self.db_configured = None
self.config_calibre_dir = None
self._fernet = Fernet(secret_key)
self.cli = cli
self.load()
change = False
if self.config_binariesdir == None: # pylint: disable=access-member-before-definition
if self.config_converterpath == None: # pylint: disable=access-member-before-definition
change = True
self.config_binariesdir = autodetect_calibre_binaries()
self.config_converterpath = autodetect_converter_binary(self.config_binariesdir)
self.config_converterpath = autodetect_calibre_binary()
if self.config_kepubifypath == None: # pylint: disable=access-member-before-definition
change = True
self.config_kepubifypath = autodetect_kepubify_binary()
@ -206,6 +171,7 @@ class ConfigSQL(object):
if change:
self.save()
def _read_from_storage(self):
if self._settings is None:
log.debug("_ConfigSQL._read_from_storage")
@ -213,21 +179,22 @@ class ConfigSQL(object):
return self._settings
def get_config_certfile(self):
if self.cli.certfilepath:
return self.cli.certfilepath
if self.cli.certfilepath == "":
if cli.certfilepath:
return cli.certfilepath
if cli.certfilepath == "":
return None
return self.config_certfile
def get_config_keyfile(self):
if self.cli.keyfilepath:
return self.cli.keyfilepath
if self.cli.certfilepath == "":
if cli.keyfilepath:
return cli.keyfilepath
if cli.certfilepath == "":
return None
return self.config_keyfile
def get_config_ipaddress(self):
return self.cli.ip_address or ""
@staticmethod
def get_config_ipaddress():
return cli.ip_address or ""
def _has_role(self, role_flag):
return constants.has_flag(self.config_default_role, role_flag)
@ -282,14 +249,12 @@ class ConfigSQL(object):
return logger.get_level_name(self.config_log_level)
def get_mail_settings(self):
return {k: v for k, v in self.__dict__.items() if k.startswith('mail_')}
return {k:v for k, v in self.__dict__.items() if k.startswith('mail_')}
def get_mail_server_configured(self):
return bool((self.mail_server != constants.DEFAULT_MAIL_SERVER and self.mail_server_type == 0)
or (self.mail_gmail_token != {} and self.mail_server_type == 1))
def get_scheduled_task_settings(self):
return {k: v for k, v in self.__dict__.items() if k.startswith('schedule_')}
def set_from_dictionary(self, dictionary, field, convertor=None, default=None, encode=None):
"""Possibly updates a field of this object.
@ -318,15 +283,16 @@ class ConfigSQL(object):
setattr(self, field, new_value)
return True
def to_dict(self):
def toDict(self):
storage = {}
for k, v in self.__dict__.items():
if k[0] != '_' and not k.endswith("_e") and not k == "cli":
if k[0] != '_' and not k.endswith("password") and not k.endswith("secret"):
storage[k] = v
return storage
def load(self):
"""Load all configuration values from the underlying storage."""
'''Load all configuration values from the underlying storage.'''
s = self._read_from_storage() # type: _Settings
for k, v in s.__dict__.items():
if k[0] != '_':
@ -335,13 +301,7 @@ class ConfigSQL(object):
column = s.__class__.__dict__.get(k)
if column.default is not None:
v = column.default.arg
if k.endswith("_e") and v is not None:
try:
setattr(self, k, self._fernet.decrypt(v).decode())
except cryptography.fernet.InvalidToken:
setattr(self, k, "")
else:
setattr(self, k, v)
setattr(self, k, v)
have_metadata_db = bool(self.config_calibre_dir)
if have_metadata_db:
@ -349,37 +309,30 @@ class ConfigSQL(object):
have_metadata_db = os.path.isfile(db_file)
self.db_configured = have_metadata_db
constants.EXTENSIONS_UPLOAD = [x.lstrip().rstrip().lower() for x in self.config_upload_formats.split(',')]
from . import cli_param
if os.environ.get('FLASK_DEBUG'):
logfile = logger.setup(logger.LOG_TO_STDOUT, logger.logging.DEBUG)
else:
# pylint: disable=access-member-before-definition
logfile = logger.setup(cli_param.logpath or self.config_logfile, self.config_log_level)
if logfile != os.path.abspath(self.config_logfile):
if logfile != os.path.abspath(cli_param.logpath):
log.warning("Log path %s not valid, falling back to default", self.config_logfile)
logfile = logger.setup(self.config_logfile, self.config_log_level)
if logfile != self.config_logfile:
log.warning("Log path %s not valid, falling back to default", self.config_logfile)
self.config_logfile = logfile
s.config_logfile = logfile
self._session.merge(s)
try:
self._session.commit()
except OperationalError as e:
log.error('Database error: %s', e)
self._session.rollback()
self.__dict__["dirty"] = list()
def save(self):
"""Apply all configuration values to the underlying storage."""
'''Apply all configuration values to the underlying storage.'''
s = self._read_from_storage() # type: _Settings
for k in self.dirty:
for k, v in self.__dict__.items():
if k[0] == '_':
continue
if hasattr(s, k):
if k.endswith("_e"):
setattr(s, k, self._fernet.encrypt(self.__dict__[k].encode()))
else:
setattr(s, k, self.__dict__[k])
setattr(s, k, v)
log.debug("_ConfigSQL updating storage")
self._session.merge(s)
@ -395,11 +348,9 @@ class ConfigSQL(object):
log.error(error)
log.warning("invalidating configuration")
self.db_configured = False
# self.config_calibre_dir = None
self.save()
def get_book_path(self):
return self.config_calibre_split_dir if self.config_calibre_split_dir else self.config_calibre_dir
def store_calibre_uuid(self, calibre_db, Library_table):
try:
calibre_uuid = calibre_db.session.query(Library_table).one_or_none()
@ -409,34 +360,7 @@ class ConfigSQL(object):
except AttributeError:
pass
def __setattr__(self, attr_name, attr_value):
super().__setattr__(attr_name, attr_value)
self.__dict__["dirty"].append(attr_name)
def _encrypt_fields(session, secret_key):
try:
session.query(exists().where(_Settings.mail_password_e)).scalar()
except OperationalError:
with session.bind.connect() as conn:
conn.execute(text("ALTER TABLE settings ADD column 'mail_password_e' String"))
conn.execute(text("ALTER TABLE settings ADD column 'config_ldap_serv_password_e' String"))
session.commit()
crypter = Fernet(secret_key)
settings = session.query(_Settings.mail_password, _Settings.config_ldap_serv_password).first()
if settings.mail_password:
session.query(_Settings).update(
{_Settings.mail_password_e: crypter.encrypt(settings.mail_password.encode())})
if settings.config_ldap_serv_password:
session.query(_Settings).update(
{_Settings.config_ldap_serv_password_e:
crypter.encrypt(settings.config_ldap_serv_password.encode())})
session.commit()
def _migrate_table(session, orm_class, secret_key=None):
if secret_key:
_encrypt_fields(session, secret_key)
def _migrate_table(session, orm_class):
changed = False
for column_name, column in orm_class.__dict__.items():
@ -457,9 +381,9 @@ def _migrate_table(session, orm_class, secret_key=None):
else:
column_type = column.type
alter_table = text("ALTER TABLE %s ADD COLUMN `%s` %s %s" % (orm_class.__tablename__,
column_name,
column_type,
column_default))
column_name,
column_type,
column_default))
log.debug(alter_table)
session.execute(alter_table)
changed = True
@ -474,38 +398,19 @@ def _migrate_table(session, orm_class, secret_key=None):
session.rollback()
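_migrate_table() brings an older settings database up to date by issuing a raw ALTER TABLE for every ORM-declared column the table is missing. A minimal sketch of the same idea against a throwaway in-memory SQLite database; the table and column names are illustrative, not taken from app.db:

# Sketch: add a missing column to an existing SQLite table with ALTER TABLE via SQLAlchemy.
from sqlalchemy import create_engine, text

engine = create_engine("sqlite:///:memory:")
with engine.begin() as conn:
    conn.execute(text("CREATE TABLE settings (id INTEGER PRIMARY KEY)"))
    # Simulate the migration step: the ORM model declares a column the table lacks
    conn.execute(text("ALTER TABLE settings ADD COLUMN `config_example` VARCHAR DEFAULT ''"))
    cols = conn.execute(text("PRAGMA table_info(settings)")).fetchall()
    print([c[1] for c in cols])   # -> ['id', 'config_example']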
def autodetect_calibre_binaries():
def autodetect_calibre_binary():
if sys.platform == "win32":
calibre_path = ["C:\\program files\\calibre\\",
"C:\\program files(x86)\\calibre\\",
"C:\\program files(x86)\\calibre2\\",
"C:\\program files\\calibre2\\"]
calibre_path = ["C:\\program files\\calibre\\ebook-convert.exe",
"C:\\program files(x86)\\calibre\\ebook-convert.exe",
"C:\\program files(x86)\\calibre2\\ebook-convert.exe",
"C:\\program files\\calibre2\\ebook-convert.exe"]
else:
calibre_path = ["/opt/calibre/"]
calibre_path = ["/opt/calibre/ebook-convert"]
for element in calibre_path:
supported_binary_paths = [os.path.join(element, binary)
for binary in constants.SUPPORTED_CALIBRE_BINARIES.values()]
if all(os.path.isfile(binary_path) and os.access(binary_path, os.X_OK)
for binary_path in supported_binary_paths):
values = [process_wait([binary_path, "--version"],
pattern=r'\(calibre (.*)\)') for binary_path in supported_binary_paths]
if all(values):
version = values[0].group(1)
log.debug("calibre version %s", version)
return element
return ""
def autodetect_converter_binary(calibre_path):
if sys.platform == "win32":
converter_path = os.path.join(calibre_path, "ebook-convert.exe")
else:
converter_path = os.path.join(calibre_path, "ebook-convert")
if calibre_path and os.path.isfile(converter_path) and os.access(converter_path, os.X_OK):
return converter_path
if os.path.isfile(element) and os.access(element, os.X_OK):
return element
return ""
def autodetect_unrar_binary():
if sys.platform == "win32":
calibre_path = ["C:\\program files\\WinRar\\unRAR.exe",
@ -517,7 +422,6 @@ def autodetect_unrar_binary():
return element
return ""
def autodetect_kepubify_binary():
if sys.platform == "win32":
calibre_path = ["C:\\program files\\kepubify\\kepubify-windows-64Bit.exe",
@ -529,20 +433,21 @@ def autodetect_kepubify_binary():
return element
return ""
def _migrate_database(session, secret_key):
def _migrate_database(session):
# make sure the table is created, if it does not exist
_Base.metadata.create_all(session.bind)
_migrate_table(session, _Settings, secret_key)
_migrate_table(session, _Settings)
_migrate_table(session, _Flask_Settings)
def load_configuration(session, secret_key):
_migrate_database(session, secret_key)
def load_configuration(session):
_migrate_database(session)
if not session.query(_Settings).count():
session.add(_Settings())
session.commit()
conf = _ConfigSQL(session)
return conf
def get_flask_session_key(_session):
flask_settings = _session.query(_Flask_Settings).one_or_none()
@ -551,25 +456,3 @@ def get_flask_session_key(_session):
_session.add(flask_settings)
_session.commit()
return flask_settings.flask_session_key
def get_encryption_key(key_path):
key_file = os.path.join(key_path, ".key")
generate = True
error = ""
if os.path.exists(key_file) and os.path.getsize(key_file) > 32:
with open(key_file, "rb") as f:
key = f.read()
try:
urlsafe_b64decode(key)
generate = False
except ValueError:
pass
if generate:
key = Fernet.generate_key()
try:
with open(key_file, "wb") as f:
f.write(key)
except PermissionError as e:
error = e
return key, error
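The master branch stores the mail and LDAP passwords in the new *_e columns encrypted with a Fernet key, and get_encryption_key() persists that key in a .key file next to the settings database. A minimal round-trip sketch with the cryptography package; the key file path is illustrative:

# Sketch: generate/load a Fernet key and round-trip a secret, as the *_e columns do.
import os
from cryptography.fernet import Fernet, InvalidToken

key_file = "demo.key"                        # illustrative path, not Calibre-Web's .key file
if os.path.exists(key_file):
    with open(key_file, "rb") as f:
        key = f.read()
else:
    key = Fernet.generate_key()              # urlsafe base64-encoded key
    with open(key_file, "wb") as f:
        f.write(key)

crypter = Fernet(key)
token = crypter.encrypt("my mail password".encode())    # what would be stored in mail_password_e
try:
    print(crypter.decrypt(token).decode())               # -> "my mail password"
except InvalidToken:
    print("")                                             # wrong key: fall back to an empty string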

@ -23,9 +23,6 @@ from sqlalchemy import __version__ as sql_version
sqlalchemy_version2 = ([int(x) for x in sql_version.split('.')] >= [2, 0, 0])
# APP_MODE - production, development, or test
APP_MODE = os.environ.get('APP_MODE', 'production')
# if installed via pip this variable is set to true (empty file with name .HOMEDIR present)
HOME_CONFIG = os.path.isfile(os.path.join(os.path.dirname(os.path.abspath(__file__)), '.HOMEDIR'))
@ -34,16 +31,10 @@ UPDATER_AVAILABLE = True
# Base dir is parent of current file, necessary if called from different folder
BASE_DIR = os.path.abspath(os.path.join(os.path.dirname(os.path.abspath(__file__)), os.pardir))
# if executable file the files should be placed in the parent dir (parallel to the exe file)
STATIC_DIR = os.path.join(BASE_DIR, 'cps', 'static')
TEMPLATES_DIR = os.path.join(BASE_DIR, 'cps', 'templates')
TRANSLATIONS_DIR = os.path.join(BASE_DIR, 'cps', 'translations')
# Cache dir - use CACHE_DIR environment variable, otherwise use the default directory: cps/cache
DEFAULT_CACHE_DIR = os.path.join(BASE_DIR, 'cps', 'cache')
CACHE_DIR = os.environ.get('CACHE_DIR', DEFAULT_CACHE_DIR)
if HOME_CONFIG:
home_dir = os.path.join(os.path.expanduser("~"), ".calibre-web")
if not os.path.exists(home_dir):
@ -51,12 +42,7 @@ if HOME_CONFIG:
CONFIG_DIR = os.environ.get('CALIBRE_DBPATH', home_dir)
else:
CONFIG_DIR = os.environ.get('CALIBRE_DBPATH', BASE_DIR)
if getattr(sys, 'frozen', False):
CONFIG_DIR = os.path.abspath(os.path.join(CONFIG_DIR, os.pardir))
DEFAULT_SETTINGS_FILE = "app.db"
DEFAULT_GDRIVE_FILE = "gdrive.db"
ROLE_USER = 0 << 0
ROLE_ADMIN = 1 << 0
@ -149,18 +135,13 @@ del env_CALIBRE_PORT
EXTENSIONS_AUDIO = {'mp3', 'mp4', 'ogg', 'opus', 'wav', 'flac', 'm4a', 'm4b'}
EXTENSIONS_CONVERT_FROM = ['pdf', 'epub', 'mobi', 'azw3', 'docx', 'rtf', 'fb2', 'lit', 'lrf',
'txt', 'htmlz', 'rtf', 'odt', 'cbz', 'cbr', 'prc']
'txt', 'htmlz', 'rtf', 'odt', 'cbz', 'cbr']
EXTENSIONS_CONVERT_TO = ['pdf', 'epub', 'mobi', 'azw3', 'docx', 'rtf', 'fb2',
'lit', 'lrf', 'txt', 'htmlz', 'rtf', 'odt']
EXTENSIONS_UPLOAD = {'txt', 'pdf', 'epub', 'kepub', 'mobi', 'azw', 'azw3', 'cbr', 'cbz', 'cbt', 'cb7', 'djvu', 'djv',
EXTENSIONS_UPLOAD = {'txt', 'pdf', 'epub', 'kepub', 'mobi', 'azw', 'azw3', 'cbr', 'cbz', 'cbt', 'djvu',
'prc', 'doc', 'docx', 'fb2', 'html', 'rtf', 'lit', 'odt', 'mp3', 'mp4', 'ogg',
'opus', 'wav', 'flac', 'm4a', 'm4b'}
_extension = ""
if sys.platform == "win32":
_extension = ".exe"
SUPPORTED_CALIBRE_BINARIES = {binary:binary + _extension for binary in ["ebook-convert", "calibredb"]}
def has_flag(value, bit_flag):
return bit_flag == (bit_flag & (value or 0))
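has_flag() tests a single bit in the packed role and sidebar integers. A tiny usage example; ROLE_ADMIN matches the constant above, while ROLE_DOWNLOAD is shown here only as an illustrative second flag:

# Sketch: bit-flag test, mirroring has_flag() above.
ROLE_ADMIN = 1 << 0
ROLE_DOWNLOAD = 1 << 1          # illustrative flag value

def has_flag(value, bit_flag):
    return bit_flag == (bit_flag & (value or 0))

user_roles = ROLE_ADMIN | ROLE_DOWNLOAD
print(has_flag(user_roles, ROLE_ADMIN))      # True
print(has_flag(None, ROLE_DOWNLOAD))         # False ("value or 0" guards against None)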
@ -171,28 +152,16 @@ def selected_roles(dictionary):
# :rtype: BookMeta
BookMeta = namedtuple('BookMeta', 'file_path, extension, title, author, cover, description, tags, series, '
'series_id, languages, publisher, pubdate, identifiers')
'series_id, languages, publisher')
# python build process likes to have x.y.zbw -> b for beta and w a counting number
STABLE_VERSION = {'version': '0.6.22b'}
STABLE_VERSION = {'version': '0.6.17'}
NIGHTLY_VERSION = dict()
NIGHTLY_VERSION[0] = '$Format:%H$'
NIGHTLY_VERSION[1] = '$Format:%cI$'
# NIGHTLY_VERSION[0] = 'bb7d2c6273ae4560e83950d36d64533343623a57'
# NIGHTLY_VERSION[1] = '2018-09-09T10:13:08+02:00'
# CACHE
CACHE_TYPE_THUMBNAILS = 'thumbnails'
# Thumbnail Types
THUMBNAIL_TYPE_COVER = 1
THUMBNAIL_TYPE_SERIES = 2
THUMBNAIL_TYPE_AUTHOR = 3
# Thumbnails Sizes
COVER_THUMBNAIL_ORIGINAL = 0
COVER_THUMBNAIL_SMALL = 1
COVER_THUMBNAIL_MEDIUM = 2
COVER_THUMBNAIL_LARGE = 3
# clean-up the module namespace
del sys, os, namedtuple

@ -18,8 +18,7 @@
import os
import re
from flask_babel import lazy_gettext as N_
from flask_babel import gettext as _
from . import config, logger
from .subproc_wrapper import process_wait
@ -27,9 +26,9 @@ from .subproc_wrapper import process_wait
log = logger.create()
# strings getting translated when used
_NOT_INSTALLED = N_('not installed')
_EXECUTION_ERROR = N_('Execution permissions missing')
# _() necessary to make babel aware of string for translation
_NOT_INSTALLED = _('not installed')
_EXECUTION_ERROR = _('Execution permissions missing')
def _get_command_version(path, pattern, argument=None):
@ -54,9 +53,10 @@ def get_calibre_version():
def get_unrar_version():
unrar_version = _get_command_version(config.config_rarfile_location, r'UNRAR.*\d')
if unrar_version == "not installed":
unrar_version = _get_command_version(config.config_rarfile_location, r'unrar.*\d', '-V')
unrar_version = _get_command_version(config.config_rarfile_location, r'unrar.*\d','-V')
return unrar_version
def get_kepubify_version():
return _get_command_version(config.config_kepubifypath, r'kepubify\s', '--version')
return _get_command_version(config.config_kepubifypath, r'kepubify\s','--version')

@ -1,48 +0,0 @@
# -*- coding: utf-8 -*-
# This file is part of the Calibre-Web (https://github.com/janeczku/calibre-web)
# Copyright (C) 2022 OzzieIsaacs
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
import os
try:
from wand.image import Image
use_IM = True
except (ImportError, RuntimeError) as e:
use_IM = False
NO_JPEG_EXTENSIONS = ['.png', '.webp', '.bmp']
COVER_EXTENSIONS = ['.png', '.webp', '.bmp', '.jpg', '.jpeg']
def cover_processing(tmp_file_name, img, extension):
tmp_cover_name = os.path.join(os.path.dirname(tmp_file_name), 'cover.jpg')
if extension in NO_JPEG_EXTENSIONS:
if use_IM:
with Image(blob=img) as imgc:
imgc.format = 'jpeg'
imgc.transform_colorspace('rgb')
imgc.save(filename=tmp_cover_name)
return tmp_cover_name
else:
return None
if img:
with open(tmp_cover_name, 'wb') as f:
f.write(img)
return tmp_cover_name
else:
return None
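cover_processing() converts non-JPEG covers (png/webp/bmp) to cover.jpg with Wand when the ImageMagick bindings are available, and otherwise writes JPEG bytes through unchanged. A hedged sketch of the conversion path; it requires the wand package, and the output path is an example:

# Sketch: convert an arbitrary cover blob to JPEG with Wand, as cover_processing() does.
from wand.image import Image

def blob_to_jpeg(blob, out_path="cover.jpg"):
    with Image(blob=blob) as img:
        img.format = 'jpeg'
        img.transform_colorspace('rgb')    # normalise the colorspace before saving
        img.save(filename=out_path)
    return out_path

# Usage (assumes some_png_bytes holds image data read elsewhere):
# blob_to_jpeg(some_png_bytes, "cover.jpg")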

@ -17,14 +17,14 @@
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
import sys
import os
import re
import ast
import json
from datetime import datetime
from urllib.parse import quote
import unidecode
from sqlite3 import OperationalError as sqliteOperationalError
from sqlalchemy import create_engine
from sqlalchemy import Table, Column, ForeignKey, CheckConstraint
from sqlalchemy import String, Integer, Boolean, TIMESTAMP, Float
@ -42,7 +42,6 @@ from sqlalchemy.sql.expression import and_, true, false, text, func, or_
from sqlalchemy.ext.associationproxy import association_proxy
from flask_login import current_user
from flask_babel import gettext as _
from flask_babel import get_locale
from flask import flash
from . import logger, ub, isoLanguages
@ -50,6 +49,11 @@ from .pagination import Pagination
from weakref import WeakSet
try:
import unidecode
use_unidecode = True
except ImportError:
use_unidecode = False
log = logger.create()
@ -108,80 +112,69 @@ class Identifiers(Base):
self.type = id_type
self.book = book
def format_type(self):
def formatType(self):
format_type = self.type.lower()
if format_type == 'amazon':
return "Amazon"
return u"Amazon"
elif format_type.startswith("amazon_"):
return "Amazon.{0}".format(format_type[7:])
return u"Amazon.{0}".format(format_type[7:])
elif format_type == "isbn":
return "ISBN"
return u"ISBN"
elif format_type == "doi":
return "DOI"
return u"DOI"
elif format_type == "douban":
return "Douban"
return u"Douban"
elif format_type == "goodreads":
return "Goodreads"
return u"Goodreads"
elif format_type == "babelio":
return "Babelio"
return u"Babelio"
elif format_type == "google":
return "Google Books"
return u"Google Books"
elif format_type == "kobo":
return "Kobo"
elif format_type == "barnesnoble":
return "Barnes & Noble"
return u"Kobo"
elif format_type == "litres":
return "ЛитРес"
return u"ЛитРес"
elif format_type == "issn":
return "ISSN"
return u"ISSN"
elif format_type == "isfdb":
return "ISFDB"
return u"ISFDB"
if format_type == "lubimyczytac":
return "Lubimyczytac"
if format_type == "databazeknih":
return "Databáze knih"
return u"Lubimyczytac"
else:
return self.type
def __repr__(self):
format_type = self.type.lower()
if format_type == "amazon" or format_type == "asin":
return "https://amazon.com/dp/{0}".format(self.val)
return u"https://amazon.com/dp/{0}".format(self.val)
elif format_type.startswith('amazon_'):
return "https://amazon.{0}/dp/{1}".format(format_type[7:], self.val)
return u"https://amazon.{0}/dp/{1}".format(format_type[7:], self.val)
elif format_type == "isbn":
return "https://www.worldcat.org/isbn/{0}".format(self.val)
return u"https://www.worldcat.org/isbn/{0}".format(self.val)
elif format_type == "doi":
return "https://dx.doi.org/{0}".format(self.val)
return u"https://dx.doi.org/{0}".format(self.val)
elif format_type == "goodreads":
return "https://www.goodreads.com/book/show/{0}".format(self.val)
return u"https://www.goodreads.com/book/show/{0}".format(self.val)
elif format_type == "babelio":
return "https://www.babelio.com/livres/titre/{0}".format(self.val)
return u"https://www.babelio.com/livres/titre/{0}".format(self.val)
elif format_type == "douban":
return "https://book.douban.com/subject/{0}".format(self.val)
return u"https://book.douban.com/subject/{0}".format(self.val)
elif format_type == "google":
return "https://books.google.com/books?id={0}".format(self.val)
return u"https://books.google.com/books?id={0}".format(self.val)
elif format_type == "kobo":
return "https://www.kobo.com/ebook/{0}".format(self.val)
elif format_type == "barnesnoble":
return "https://www.barnesandnoble.com/w/{0}".format(self.val)
return u"https://www.kobo.com/ebook/{0}".format(self.val)
elif format_type == "lubimyczytac":
return "https://lubimyczytac.pl/ksiazka/{0}/ksiazka".format(self.val)
return u"https://lubimyczytac.pl/ksiazka/{0}/ksiazka".format(self.val)
elif format_type == "litres":
return "https://www.litres.ru/{0}".format(self.val)
return u"https://www.litres.ru/{0}".format(self.val)
elif format_type == "issn":
return "https://portal.issn.org/resource/ISSN/{0}".format(self.val)
return u"https://portal.issn.org/resource/ISSN/{0}".format(self.val)
elif format_type == "isfdb":
return "http://www.isfdb.org/cgi-bin/pl.cgi?{0}".format(self.val)
elif format_type == "databazeknih":
return "https://www.databazeknih.cz/knihy/{0}".format(self.val)
return u"http://www.isfdb.org/cgi-bin/pl.cgi?{0}".format(self.val)
elif self.val.lower().startswith("javascript:"):
return quote(self.val)
elif self.val.lower().startswith("data:"):
link , __, __ = str.partition(self.val, ",")
return link
else:
return "{0}".format(self.val)
return u"{0}".format(self.val)
class Comments(Base):
@ -191,15 +184,15 @@ class Comments(Base):
book = Column(Integer, ForeignKey('books.id'), nullable=False, unique=True)
text = Column(String(collation='NOCASE'), nullable=False)
def __init__(self, comment, book):
self.text = comment
def __init__(self, text, book):
self.text = text
self.book = book
def get(self):
return self.text
def __repr__(self):
return "<Comments({0})>".format(self.text)
return u"<Comments({0})>".format(self.text)
class Tags(Base):
@ -214,11 +207,8 @@ class Tags(Base):
def get(self):
return self.name
def __eq__(self, other):
return self.name == other
def __repr__(self):
return "<Tags('{0})>".format(self.name)
return u"<Tags('{0})>".format(self.name)
class Authors(Base):
@ -229,7 +219,7 @@ class Authors(Base):
sort = Column(String(collation='NOCASE'))
link = Column(String, nullable=False, default="")
def __init__(self, name, sort, link=""):
def __init__(self, name, sort, link):
self.name = name
self.sort = sort
self.link = link
@ -237,11 +227,8 @@ class Authors(Base):
def get(self):
return self.name
def __eq__(self, other):
return self.name == other
def __repr__(self):
return "<Authors('{0},{1}{2}')>".format(self.name, self.sort, self.link)
return u"<Authors('{0},{1}{2}')>".format(self.name, self.sort, self.link)
class Series(Base):
@ -258,11 +245,8 @@ class Series(Base):
def get(self):
return self.name
def __eq__(self, other):
return self.name == other
def __repr__(self):
return "<Series('{0},{1}')>".format(self.name, self.sort)
return u"<Series('{0},{1}')>".format(self.name, self.sort)
class Ratings(Base):
@ -277,11 +261,8 @@ class Ratings(Base):
def get(self):
return self.rating
def __eq__(self, other):
return self.rating == other
def __repr__(self):
return "<Ratings('{0}')>".format(self.rating)
return u"<Ratings('{0}')>".format(self.rating)
class Languages(Base):
@ -294,16 +275,13 @@ class Languages(Base):
self.lang_code = lang_code
def get(self):
if hasattr(self, "language_name"):
if self.language_name:
return self.language_name
else:
return self.lang_code
def __eq__(self, other):
return self.lang_code == other
def __repr__(self):
return "<Languages('{0}')>".format(self.lang_code)
return u"<Languages('{0}')>".format(self.lang_code)
class Publishers(Base):
@ -320,11 +298,8 @@ class Publishers(Base):
def get(self):
return self.name
def __eq__(self, other):
return self.name == other
def __repr__(self):
return "<Publishers('{0},{1}')>".format(self.name, self.sort)
return u"<Publishers('{0},{1}')>".format(self.name, self.sort)
class Data(Base):
@ -348,16 +323,7 @@ class Data(Base):
return self.name
def __repr__(self):
return "<Data('{0},{1}{2}{3}')>".format(self.book, self.format, self.uncompressed_size, self.name)
class Metadata_Dirtied(Base):
__tablename__ = 'metadata_dirtied'
id = Column(Integer, primary_key=True, autoincrement=True)
book = Column(Integer, ForeignKey('books.id'), nullable=False, unique=True)
def __init__(self, book):
self.book = book
return u"<Data('{0},{1}{2}{3}')>".format(self.book, self.format, self.uncompressed_size, self.name)
class Books(Base):
@ -401,17 +367,18 @@ class Books(Base):
self.path = path
self.has_cover = (has_cover != None)
def __repr__(self):
return "<Books('{0},{1}{2}{3}{4}{5}{6}{7}{8}')>".format(self.title, self.sort, self.author_sort,
return u"<Books('{0},{1}{2}{3}{4}{5}{6}{7}{8}')>".format(self.title, self.sort, self.author_sort,
self.timestamp, self.pubdate, self.series_index,
self.last_modified, self.path, self.has_cover)
@property
def atom_timestamp(self):
return self.timestamp.strftime('%Y-%m-%dT%H:%M:%S+00:00') or ''
return (self.timestamp.strftime('%Y-%m-%dT%H:%M:%S+00:00') or '')
class CustomColumns(Base):
class Custom_Columns(Base):
__tablename__ = 'custom_columns'
id = Column(Integer, primary_key=True)
@ -425,36 +392,9 @@ class CustomColumns(Base):
normalized = Column(Boolean)
def get_display_dict(self):
display_dict = json.loads(self.display)
display_dict = ast.literal_eval(self.display)
return display_dict
def to_json(self, value, extra, sequence):
content = dict()
content['table'] = "custom_column_" + str(self.id)
content['column'] = "value"
content['datatype'] = self.datatype
content['is_multiple'] = None if not self.is_multiple else "|"
content['kind'] = "field"
content['name'] = self.name
content['search_terms'] = ['#' + self.label]
content['label'] = self.label
content['colnum'] = self.id
content['display'] = self.get_display_dict()
content['is_custom'] = True
content['is_category'] = self.datatype in ['text', 'rating', 'enumeration', 'series']
content['link_column'] = "value"
content['category_sort'] = "value"
content['is_csp'] = False
content['is_editable'] = self.editable
content['rec_index'] = sequence + 22 # toDo why ??
if isinstance(value, datetime):
content['#value#'] = {"__class__": "datetime.datetime", "__value__": value.strftime("%Y-%m-%dT%H:%M:%S+00:00")}
else:
content['#value#'] = value
content['#extra#'] = extra
content['is_multiple2'] = {} if not self.is_multiple else {"cache_to_list": "|", "ui_to_list": ",", "list_to_ui": ", "}
return json.dumps(content, ensure_ascii=False)
class AlchemyEncoder(json.JSONEncoder):
@ -496,7 +436,7 @@ class AlchemyEncoder(json.JSONEncoder):
return json.JSONEncoder.default(self, o)
class CalibreDB:
class CalibreDB():
_init = False
engine = None
config = None
@ -505,27 +445,22 @@ class CalibreDB:
# instances alive once they reach the end of their respective scopes
instances = WeakSet()
def __init__(self, expire_on_commit=True, init=False):
def __init__(self, expire_on_commit=True):
""" Initialize a new CalibreDB session
"""
self.session = None
if init:
self.init_db(expire_on_commit)
def init_db(self, expire_on_commit=True):
if self._init:
self.init_session(expire_on_commit)
self.initSession(expire_on_commit)
self.instances.add(self)
def init_session(self, expire_on_commit=True):
def initSession(self, expire_on_commit=True):
self.session = self.session_factory()
self.session.expire_on_commit = expire_on_commit
self.update_title_sort(self.config)
@classmethod
def setup_db_cc_classes(cls, cc):
def setup_db_cc_classes(self, cc):
cc_ids = []
books_custom_column_links = {}
for row in cc:
@ -604,10 +539,10 @@ class CalibreDB:
return False, False
try:
check_engine = create_engine('sqlite://',
echo=False,
isolation_level="SERIALIZABLE",
connect_args={'check_same_thread': False},
poolclass=StaticPool)
echo=False,
isolation_level="SERIALIZABLE",
connect_args={'check_same_thread': False},
poolclass=StaticPool)
with check_engine.begin() as connection:
connection.execute(text("attach database '{}' as calibre;".format(dbpath)))
connection.execute(text("attach database '{}' as app_settings;".format(app_db_path)))
@ -632,12 +567,12 @@ class CalibreDB:
if not config_calibre_dir:
cls.config.invalidate()
return None
return False
dbpath = os.path.join(config_calibre_dir, "metadata.db")
if not os.path.exists(dbpath):
cls.config.invalidate()
return None
return False
try:
cls.engine = create_engine('sqlite://',
@ -653,7 +588,7 @@ class CalibreDB:
# conn.text_factory = lambda b: b.decode(errors = 'ignore') possible fix for #1302
except Exception as ex:
cls.config.invalidate(ex)
return None
return False
cls.config.db_configured = True
@ -662,16 +597,16 @@ class CalibreDB:
cc = conn.execute(text("SELECT id, datatype FROM custom_columns"))
cls.setup_db_cc_classes(cc)
except OperationalError as e:
log.error_or_exception(e)
return None
log.debug_or_exception(e)
cls.session_factory = scoped_session(sessionmaker(autocommit=False,
autoflush=True,
bind=cls.engine, future=True))
bind=cls.engine))
for inst in cls.instances:
inst.init_session()
inst.initSession()
cls._init = True
return True
def get_book(self, book_id):
return self.session.query(Books).filter(Books.id == book_id).first()
@ -691,8 +626,8 @@ class CalibreDB:
bd = (self.session.query(Books, read_column.value, ub.ArchivedBook.is_archived).select_from(Books)
.join(read_column, read_column.book == book_id,
isouter=True))
except (KeyError, AttributeError, IndexError):
log.error("Custom Column No.{} does not exist in calibre database".format(read_column))
except (KeyError, AttributeError):
log.error("Custom Column No.%d is not existing in calibre database", read_column)
# Skip linking read column and return None instead of read status
bd = self.session.query(Books, None, ub.ArchivedBook.is_archived)
return (bd.filter(Books.id == book_id)
@ -706,25 +641,15 @@ class CalibreDB:
def get_book_format(self, book_id, file_format):
return self.session.query(Data).filter(Data.book == book_id).filter(Data.format == file_format).first()
def set_metadata_dirty(self, book_id):
if not self.session.query(Metadata_Dirtied).filter(Metadata_Dirtied.book == book_id).one_or_none():
self.session.add(Metadata_Dirtied(book_id))
def delete_dirty_metadata(self, book_id):
try:
self.session.query(Metadata_Dirtied).filter(Metadata_Dirtied.book == book_id).delete()
self.session.commit()
except (OperationalError) as e:
self.session.rollback()
log.error("Database error: {}".format(e))
# Language and content filters for displaying in the UI
def common_filters(self, allow_show_archived=False, return_all_languages=False):
if not allow_show_archived:
archived_books = (ub.session.query(ub.ArchivedBook)
.filter(ub.ArchivedBook.user_id == int(current_user.id))
.filter(ub.ArchivedBook.is_archived == True)
.all())
archived_books = (
ub.session.query(ub.ArchivedBook)
.filter(ub.ArchivedBook.user_id == int(current_user.id))
.filter(ub.ArchivedBook.is_archived == True)
.all()
)
archived_book_ids = [archived_book.book_id for archived_book in archived_books]
archived_filter = Books.id.notin_(archived_book_ids)
else:
@ -743,17 +668,17 @@ class CalibreDB:
pos_cc_list = current_user.allowed_column_value.split(',')
pos_content_cc_filter = true() if pos_cc_list == [''] else \
getattr(Books, 'custom_column_' + str(self.config.config_restricted_column)). \
any(cc_classes[self.config.config_restricted_column].value.in_(pos_cc_list))
any(cc_classes[self.config.config_restricted_column].value.in_(pos_cc_list))
neg_cc_list = current_user.denied_column_value.split(',')
neg_content_cc_filter = false() if neg_cc_list == [''] else \
getattr(Books, 'custom_column_' + str(self.config.config_restricted_column)). \
any(cc_classes[self.config.config_restricted_column].value.in_(neg_cc_list))
except (KeyError, AttributeError, IndexError):
any(cc_classes[self.config.config_restricted_column].value.in_(neg_cc_list))
except (KeyError, AttributeError):
pos_content_cc_filter = false()
neg_content_cc_filter = true()
log.error("Custom Column No.{} does not exist in calibre database".format(
self.config.config_restricted_column))
flash(_("Custom Column No.%(column)d does not exist in calibre database",
log.error(u"Custom Column No.%d is not existing in calibre database",
self.config.config_restricted_column)
flash(_("Custom Column No.%(column)d is not existing in calibre database",
column=self.config.config_restricted_column),
category="error")
@ -763,25 +688,6 @@ class CalibreDB:
return and_(lang_filter, pos_content_tags_filter, ~neg_content_tags_filter,
pos_content_cc_filter, ~neg_content_cc_filter, archived_filter)
def generate_linked_query(self, config_read_column, database):
if not config_read_column:
query = (self.session.query(database, ub.ArchivedBook.is_archived, ub.ReadBook.read_status)
.select_from(Books)
.outerjoin(ub.ReadBook,
and_(ub.ReadBook.user_id == int(current_user.id), ub.ReadBook.book_id == Books.id)))
else:
try:
read_column = cc_classes[config_read_column]
query = (self.session.query(database, ub.ArchivedBook.is_archived, read_column.value)
.select_from(Books)
.outerjoin(read_column, read_column.book == Books.id))
except (KeyError, AttributeError, IndexError):
log.error("Custom Column No.{} does not exist in calibre database".format(config_read_column))
# Skip linking read column and return None instead of read status
query = self.session.query(database, None, ub.ArchivedBook.is_archived)
return query.outerjoin(ub.ArchivedBook, and_(Books.id == ub.ArchivedBook.book_id,
int(current_user.id) == ub.ArchivedBook.user_id))
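generate_linked_query() centralises the outer joins that attach per-user archive and read-status information to any Books query; the 0.6.17 side repeats that join logic inline in the index and search paths further down. A small, generic SQLAlchemy sketch of the same outer-join shape, using illustrative models rather than the real Calibre-Web tables:

# Sketch: outer-join a per-user status table onto a main table, as generate_linked_query() does.
from sqlalchemy import create_engine, Column, Integer, String, ForeignKey, and_
from sqlalchemy.orm import declarative_base, Session

Base = declarative_base()

class Book(Base):
    __tablename__ = "books"
    id = Column(Integer, primary_key=True)
    title = Column(String)

class ReadBook(Base):
    __tablename__ = "read_books"
    id = Column(Integer, primary_key=True)
    book_id = Column(Integer, ForeignKey("books.id"))
    user_id = Column(Integer)
    read_status = Column(Integer, default=0)

engine = create_engine("sqlite:///:memory:")
Base.metadata.create_all(engine)

with Session(engine) as session:
    session.add_all([Book(id=1, title="Example"),
                     ReadBook(book_id=1, user_id=7, read_status=1)])
    session.commit()
    current_user_id = 7
    query = (session.query(Book, ReadBook.read_status)
             .outerjoin(ReadBook, and_(ReadBook.user_id == current_user_id,
                                       ReadBook.book_id == Book.id)))
    print(query.all())   # books paired with this user's read status (None if no row exists)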
@staticmethod
def get_checkbox_sorted(inputlist, state, offset, limit, order, combo=False):
outcome = list()
@ -811,14 +717,30 @@ class CalibreDB:
join_archive_read, config_read_column, *join):
pagesize = pagesize or self.config.config_books_per_page
if current_user.show_detail_random():
random_query = self.generate_linked_query(config_read_column, database)
randm = (random_query.filter(self.common_filters(allow_show_archived))
.order_by(func.random())
.limit(self.config.config_random_books).all())
randm = self.session.query(Books) \
.filter(self.common_filters(allow_show_archived)) \
.order_by(func.random()) \
.limit(self.config.config_random_books).all()
else:
randm = false()
if join_archive_read:
query = self.generate_linked_query(config_read_column, database)
if not config_read_column:
query = (self.session.query(database, ub.ReadBook.read_status, ub.ArchivedBook.is_archived)
.select_from(Books)
.outerjoin(ub.ReadBook,
and_(ub.ReadBook.user_id == int(current_user.id), ub.ReadBook.book_id == Books.id)))
else:
try:
read_column = cc_classes[config_read_column]
query = (self.session.query(database, read_column.value, ub.ArchivedBook.is_archived)
.select_from(Books)
.outerjoin(read_column, read_column.book == Books.id))
except (KeyError, AttributeError):
log.error("Custom Column No.%d is not existing in calibre database", read_column)
# Skip linking read column and return None instead of read status
query =self.session.query(database, None, ub.ArchivedBook.is_archived)
query = query.outerjoin(ub.ArchivedBook, and_(Books.id == ub.ArchivedBook.book_id,
int(current_user.id) == ub.ArchivedBook.user_id))
else:
query = self.session.query(database)
off = int(int(pagesize) * (page - 1))
@ -843,10 +765,11 @@ class CalibreDB:
entries = list()
pagination = list()
try:
pagination = Pagination(page, pagesize, query.count())
pagination = Pagination(page, pagesize,
len(query.all()))
entries = query.order_by(*order).offset(off).limit(pagesize).all()
except Exception as ex:
log.error_or_exception(ex)
log.debug_or_exception(ex)
# display authors in right order
entries = self.order_authors(entries, True, join_archive_read)
return entries, randm, pagination
@ -862,30 +785,25 @@ class CalibreDB:
sort_authors = entry.author_sort.split('&')
ids = [a.id for a in entry.authors]
authors_ordered = list()
# error = False
error = False
for auth in sort_authors:
results = self.session.query(Authors).filter(Authors.sort == auth.lstrip().strip()).all()
# ToDo: How to handle not found author name
# ToDo: How to handle not found authorname
if not len(results):
log.error("Author {} not found to display name in right order".format(auth.strip()))
# error = True
error = True
break
for r in results:
if r.id in ids:
authors_ordered.append(r)
ids.remove(r.id)
for author_id in ids:
result = self.session.query(Authors).filter(Authors.id == author_id).first()
authors_ordered.append(result)
if list_return:
if not error:
if combined:
entry.Books.authors = authors_ordered
else:
entry.ordered_authors = authors_ordered
else:
return authors_ordered
return entries
entry.authors = authors_ordered
if list_return:
return entries
else:
return authors_ordered
def get_typeahead(self, database, query, replace=('', ''), tag_filter=true()):
query = query or ''
@ -899,21 +817,36 @@ class CalibreDB:
def check_exists_book(self, authr, title):
self.session.connection().connection.connection.create_function("lower", 1, lcase)
q = list()
author_terms = re.split(r'\s*&\s*', authr)
for author_term in author_terms:
q.append(Books.authors.any(func.lower(Authors.name).ilike("%" + author_term + "%")))
authorterms = re.split(r'\s*&\s*', authr)
for authorterm in authorterms:
q.append(Books.authors.any(func.lower(Authors.name).ilike("%" + authorterm + "%")))
return self.session.query(Books) \
.filter(and_(Books.authors.any(and_(*q)), func.lower(Books.title).ilike("%" + title + "%"))).first()
def search_query(self, term, config, *join):
def search_query(self, term, config_read_column, *join):
term.strip().lower()
self.session.connection().connection.connection.create_function("lower", 1, lcase)
q = list()
author_terms = re.split("[, ]+", term)
for author_term in author_terms:
q.append(Books.authors.any(func.lower(Authors.name).ilike("%" + author_term + "%")))
query = self.generate_linked_query(config.config_read_column, Books)
authorterms = re.split("[, ]+", term)
for authorterm in authorterms:
q.append(Books.authors.any(func.lower(Authors.name).ilike("%" + authorterm + "%")))
if not config_read_column:
query = (self.session.query(Books, ub.ArchivedBook.is_archived, ub.ReadBook).select_from(Books)
.outerjoin(ub.ReadBook, and_(Books.id == ub.ReadBook.book_id,
int(current_user.id) == ub.ReadBook.user_id)))
else:
try:
read_column = cc_classes[config_read_column]
query = (self.session.query(Books, ub.ArchivedBook.is_archived, read_column.value).select_from(Books)
.outerjoin(read_column, read_column.book == Books.id))
except (KeyError, AttributeError):
log.error("Custom Column No.%d is not existing in calibre database", config_read_column)
# Skip linking read column
query = self.session.query(Books, ub.ArchivedBook.is_archived, None)
query = query.outerjoin(ub.ArchivedBook, and_(Books.id == ub.ArchivedBook.book_id,
int(current_user.id) == ub.ArchivedBook.user_id))
if len(join) == 6:
query = query.outerjoin(join[0], join[1]).outerjoin(join[2]).outerjoin(join[3], join[4]).outerjoin(join[5])
if len(join) == 3:
@ -922,42 +855,20 @@ class CalibreDB:
query = query.outerjoin(join[0], join[1])
elif len(join) == 1:
query = query.outerjoin(join[0])
cc = self.get_cc_columns(config, filter_config_custom_read=True)
filter_expression = [Books.tags.any(func.lower(Tags.name).ilike("%" + term + "%")),
Books.series.any(func.lower(Series.name).ilike("%" + term + "%")),
Books.authors.any(and_(*q)),
Books.publishers.any(func.lower(Publishers.name).ilike("%" + term + "%")),
func.lower(Books.title).ilike("%" + term + "%")]
for c in cc:
if c.datatype not in ["datetime", "rating", "bool", "int", "float"]:
filter_expression.append(
getattr(Books,
'custom_column_' + str(c.id)).any(
func.lower(cc_classes[c.id].value).ilike("%" + term + "%")))
return query.filter(self.common_filters(True)).filter(or_(*filter_expression))
def get_cc_columns(self, config, filter_config_custom_read=False):
tmp_cc = self.session.query(CustomColumns).filter(CustomColumns.datatype.notin_(cc_exceptions)).all()
cc = []
r = None
if config.config_columns_to_ignore:
r = re.compile(config.config_columns_to_ignore)
for col in tmp_cc:
if filter_config_custom_read and config.config_read_column and config.config_read_column == col.id:
continue
if r and r.match(col.name):
continue
cc.append(col)
return cc
return query.filter(self.common_filters(True)).filter(
or_(Books.tags.any(func.lower(Tags.name).ilike("%" + term + "%")),
Books.series.any(func.lower(Series.name).ilike("%" + term + "%")),
Books.authors.any(and_(*q)),
Books.publishers.any(func.lower(Publishers.name).ilike("%" + term + "%")),
func.lower(Books.title).ilike("%" + term + "%")
))
# Read search results from the calibre database and return them (used for feed and simple search)
def get_search_results(self, term, config, offset=None, order=None, limit=None, *join):
def get_search_results(self, term, offset=None, order=None, limit=None, allow_show_archived=False,
config_read_column=False, *join):
order = order[0] if order else [Books.sort]
pagination = None
result = self.search_query(term, config, *join).order_by(*order).all()
result = self.search_query(term, config_read_column, *join).order_by(*order).all()
result_count = len(result)
if offset != None and limit != None:
offset = int(offset)
@ -974,6 +885,7 @@ class CalibreDB:
# Creates for all stored languages a translated speaking name in the array for the UI
def speaking_language(self, languages=None, return_all_languages=False, with_count=False, reverse_order=False):
from . import get_locale
if with_count:
if not languages:
@ -981,20 +893,9 @@ class CalibreDB:
.join(books_languages_link).join(Books)\
.filter(self.common_filters(return_all_languages=return_all_languages)) \
.group_by(text('books_languages_link.lang_code')).all()
tags = list()
for lang in languages:
tag = Category(isoLanguages.get_language_name(get_locale(), lang[0].lang_code), lang[0].lang_code)
tags.append([tag, lang[1]])
# Append all books without language to list
if not return_all_languages:
no_lang_count = (self.session.query(Books)
.outerjoin(books_languages_link).outerjoin(Languages)
.filter(Languages.lang_code == None)
.filter(self.common_filters())
.count())
if no_lang_count:
tags.append([Category(_("None"), "none"), no_lang_count])
return sorted(tags, key=lambda x: x[0].name.lower(), reverse=reverse_order)
lang[0].name = isoLanguages.get_language_name(get_locale(), lang[0].lang_code)
return sorted(languages, key=lambda x: x[0].name, reverse=reverse_order)
else:
if not languages:
languages = self.session.query(Languages) \
@ -1006,6 +907,7 @@ class CalibreDB:
lang.name = isoLanguages.get_language_name(get_locale(), lang.lang_code)
return sorted(languages, key=lambda x: x.name, reverse=reverse_order)
def update_title_sort(self, config, conn=None):
# user defined sort function for calibre databases (Series, etc.)
def _title_sort(title):
@ -1017,16 +919,8 @@ class CalibreDB:
title = title[len(prep):] + ', ' + prep
return title.strip()
try:
# sqlalchemy <1.4.24
conn = conn or self.session.connection().connection.driver_connection
except AttributeError:
# sqlalchemy >1.4.24 and sqlalchemy 2.0
conn = conn or self.session.connection().connection.connection
try:
conn.create_function("title_sort", 1, _title_sort)
except sqliteOperationalError:
pass
conn = conn or self.session.connection().connection.connection
conn.create_function("title_sort", 1, _title_sort)
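update_title_sort() registers a Python implementation of title_sort() directly on the SQLite connection so that expressions and triggers in the Calibre database that call it keep working. A standalone sketch with the plain sqlite3 module; the prefix regex is a shortened, illustrative version of config_title_regex:

# Sketch: register a Python UDF named title_sort on a SQLite connection.
import re
import sqlite3

def _title_sort(title):
    # Illustrative prefix list; Calibre-Web builds this from config_title_regex.
    match = re.match(r"^(A|The|An)\s+", title)
    if match:
        prefix = match.group(1)
        title = title[len(prefix):].strip() + ", " + prefix
    return title.strip()

conn = sqlite3.connect(":memory:")
conn.create_function("title_sort", 1, _title_sort)
print(conn.execute("SELECT title_sort('The Hobbit')").fetchone()[0])   # -> "Hobbit, The"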
@classmethod
def dispose(cls):
@ -1071,25 +965,6 @@ def lcase(s):
try:
return unidecode.unidecode(s.lower())
except Exception as ex:
_log = logger.create()
_log.error_or_exception(ex)
log = logger.create()
log.debug_or_exception(ex)
return s.lower()
class Category:
name = None
id = None
count = None
rating = None
def __init__(self, name, cat_id, rating=None):
self.name = name
self.id = cat_id
self.rating = rating
self.count = 1
'''class Count:
count = None
def __init__(self, count):
self.count = count'''

@ -22,7 +22,6 @@ import glob
import zipfile
import json
from io import BytesIO
from flask_babel.speaklater import LazyString
import os
@ -33,13 +32,6 @@ from .about import collect_stats
log = logger.create()
class lazyEncoder(json.JSONEncoder):
def default(self, obj):
if isinstance(obj, LazyString):
return str(obj)
# Let the base class default method raise the TypeError
return json.JSONEncoder.default(self, obj)
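lazyEncoder exists because flask-babel's LazyString instances are not JSON-serialisable; the encoder coerces them to str and defers every other type to the base class. The same pattern, sketched with a stand-in lazy type so it runs without Flask installed:

# Sketch: a JSONEncoder subclass that stringifies one non-serialisable type.
# "Lazy" stands in for flask_babel.speaklater.LazyString here.
import json

class Lazy:
    def __init__(self, func):
        self._func = func
    def __str__(self):
        return self._func()

class LazyEncoder(json.JSONEncoder):
    def default(self, obj):
        if isinstance(obj, Lazy):
            return str(obj)                        # resolve the lazy value
        return json.JSONEncoder.default(self, obj) # let the base class raise TypeError

print(json.dumps({"status": Lazy(lambda: "not installed")}, cls=LazyEncoder))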
def assemble_logfiles(file_name):
log_list = sorted(glob.glob(file_name + '*'), reverse=True)
wfd = BytesIO()
@ -65,8 +57,8 @@ def send_debug():
file_list.remove(element)
memory_zip = BytesIO()
with zipfile.ZipFile(memory_zip, 'w', compression=zipfile.ZIP_DEFLATED) as zf:
zf.writestr('settings.txt', json.dumps(config.to_dict(), sort_keys=True, indent=2))
zf.writestr('libs.txt', json.dumps(collect_stats(), sort_keys=True, indent=2, cls=lazyEncoder))
zf.writestr('settings.txt', json.dumps(config.toDict()))
zf.writestr('libs.txt', json.dumps(collect_stats()))
for fp in file_list:
zf.write(fp, os.path.basename(fp))
memory_zip.seek(0)
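send_debug() assembles the settings dump, library statistics and log files into a zip held entirely in memory before handing it to Flask. A minimal sketch of the BytesIO + zipfile pattern; the file names and payloads are illustrative:

# Sketch: build an in-memory zip of debug artefacts, as send_debug() does.
import json
import zipfile
from io import BytesIO

settings = {"config_port": 8083, "config_calibre_web_title": "Calibre-Web"}  # example payload

memory_zip = BytesIO()
with zipfile.ZipFile(memory_zip, 'w', compression=zipfile.ZIP_DEFLATED) as zf:
    zf.writestr('settings.txt', json.dumps(settings, sort_keys=True, indent=2))
    zf.writestr('calibre-web.log', 'example log line\n')
memory_zip.seek(0)                       # rewind before streaming the buffer out
print(len(memory_zip.getvalue()), "bytes zipped")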

@ -5,7 +5,7 @@ import json
from .constants import BASE_DIR
try:
from importlib.metadata import version
from importlib_metadata import version
importlib = True
ImportNotFound = BaseException
except ImportError:
@ -20,8 +20,7 @@ if not importlib:
except ImportError as e:
pkgresources = False
def load_dependencies(optional=False):
def load_dependencys(optional=False):
deps = list()
if getattr(sys, 'frozen', False):
pip_installed = os.path.join(BASE_DIR, ".pip_installed")
@ -42,7 +41,7 @@ def load_dependencies(optional=False):
res = re.match(r'(.*?)([<=>\s]+)([\d\.]+),?\s?([<=>\s]+)?([\d\.]+)?', line.strip())
try:
if getattr(sys, 'frozen', False):
dep_version = exe_deps[res.group(1).lower().replace('_', '-')]
dep_version = exe_deps[res.group(1).lower().replace('_','-')]
else:
if importlib:
dep_version = version(res.group(1))
@ -58,38 +57,38 @@ def load_dependencies(optional=False):
def dependency_check(optional=False):
d = list()
deps = load_dependencies(optional)
deps = load_dependencys(optional)
for dep in deps:
try:
dep_version_int = [int(x) if x.isnumeric() else 0 for x in dep[0].split('.')]
dep_version_int = [int(x) for x in dep[0].split('.')]
low_check = [int(x) for x in dep[3].split('.')]
high_check = [int(x) for x in dep[5].split('.')]
except AttributeError:
high_check = []
high_check = None
except ValueError:
d.append({'name': dep[1],
'target': "available",
'found': "Not available"
})
'target': "available",
'found': "Not available"
})
continue
if dep[2].strip() == "==":
if dep_version_int != low_check:
d.append({'name': dep[1],
'found': dep[0],
"target": dep[2] + dep[3]})
'found': dep[0],
"target": dep[2] + dep[3]})
continue
elif dep[2].strip() == ">=":
if dep_version_int < low_check:
d.append({'name': dep[1],
'found': dep[0],
"target": dep[2] + dep[3]})
'found': dep[0],
"target": dep[2] + dep[3]})
continue
elif dep[2].strip() == ">":
if dep_version_int <= low_check:
d.append({'name': dep[1],
'found': dep[0],
"target": dep[2] + dep[3]})
'found': dep[0],
"target": dep[2] + dep[3]})
continue
if dep[4] and dep[5]:
if dep[4].strip() == "<":
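dependency_check() parses each requirement line, looks the installed version up (via importlib.metadata on the master side), and compares numeric version tuples against the pinned bounds. A compact sketch of that comparison, using a made-up requirement string:

# Sketch: compare an installed package version against a ">=low,<high" requirement.
import re
from importlib.metadata import version, PackageNotFoundError

requirement = "Flask>=1.0.2,<3.1.0"        # illustrative requirement line
name, low, high = re.match(r"(.+?)>=([\d.]+),<([\d.]+)", requirement).groups()

def as_tuple(ver):
    # Non-numeric parts (e.g. "2.3.3.post1") count as 0, mirroring the isnumeric() guard above.
    return [int(x) if x.isnumeric() else 0 for x in ver.split('.')]

try:
    installed = version(name)
    ok = as_tuple(low) <= as_tuple(installed) < as_tuple(high)
    print(name, installed, "ok" if ok else "outside pinned range")
except PackageNotFoundError:
    print(name, "not installed")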

File diff suppressed because it is too large

@ -1,63 +0,0 @@
# -*- coding: utf-8 -*-
# This file is part of the Calibre-Web (https://github.com/janeczku/calibre-web)
# Copyright (C) 2024 OzzieIsaacs
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
from uuid import uuid4
import os
from .file_helper import get_temp_dir
from .subproc_wrapper import process_open
from . import logger, config
from .constants import SUPPORTED_CALIBRE_BINARIES
log = logger.create()
def do_calibre_export(book_id, book_format):
try:
quotes = [3, 5, 7, 9]
tmp_dir = get_temp_dir()
calibredb_binarypath = get_calibre_binarypath("calibredb")
temp_file_name = str(uuid4())
my_env = os.environ.copy()
if config.config_calibre_split:
my_env['CALIBRE_OVERRIDE_DATABASE_PATH'] = os.path.join(config.config_calibre_dir, "metadata.db")
library_path = config.config_calibre_split_dir
else:
library_path = config.config_calibre_dir
opf_command = [calibredb_binarypath, 'export', '--dont-write-opf', '--with-library', library_path,
'--to-dir', tmp_dir, '--formats', book_format, "--template", "{}".format(temp_file_name),
str(book_id)]
p = process_open(opf_command, quotes, my_env)
_, err = p.communicate()
if err:
log.error('Metadata embedder encountered an error: %s', err)
return tmp_dir, temp_file_name
except OSError as ex:
# ToDo real error handling
log.error_or_exception(ex)
return None, None
def get_calibre_binarypath(binary):
binariesdir = config.config_binariesdir
if binariesdir:
try:
return os.path.join(binariesdir, SUPPORTED_CALIBRE_BINARIES[binary])
except KeyError as ex:
log.error("Binary not supported by Calibre-Web: %s", SUPPORTED_CALIBRE_BINARIES[binary])
pass
return ""

@ -20,48 +20,23 @@ import os
import zipfile
from lxml import etree
from . import isoLanguages, cover
from . import config, logger
from . import isoLanguages
from .helper import split_authors
from .epub_helper import get_content_opf, default_ns
from .constants import BookMeta
log = logger.create()
def _extract_cover(zip_file, cover_file, cover_path, tmp_file_name):
def extract_cover(zip_file, cover_file, cover_path, tmp_file_name):
if cover_file is None:
return None
cf = extension = None
zip_cover_path = os.path.join(cover_path, cover_file).replace('\\', '/')
prefix = os.path.splitext(tmp_file_name)[0]
tmp_cover_name = prefix + '.' + os.path.basename(zip_cover_path)
ext = os.path.splitext(tmp_cover_name)
if len(ext) > 1:
extension = ext[1].lower()
if extension in cover.COVER_EXTENSIONS:
cf = zip_file.read(zip_cover_path)
return cover.cover_processing(tmp_file_name, cf, extension)
def get_epub_layout(book, book_data):
file_path = os.path.normpath(os.path.join(config.get_book_path(),
book.path, book_data.name + "." + book_data.format.lower()))
try:
tree, __ = get_content_opf(file_path, default_ns)
p = tree.xpath('/pkg:package/pkg:metadata', namespaces=default_ns)[0]
layout = p.xpath('pkg:meta[@property="rendition:layout"]/text()', namespaces=default_ns)
except (etree.XMLSyntaxError, KeyError, IndexError, OSError) as e:
log.error("Could not parse epub metadata of book {} during kobo sync: {}".format(book.id, e))
layout = []
if len(layout) == 0:
return None
else:
return layout[0]
zip_cover_path = os.path.join(cover_path, cover_file).replace('\\', '/')
cf = zip_file.read(zip_cover_path)
prefix = os.path.splitext(tmp_file_name)[0]
tmp_cover_name = prefix + '.' + os.path.basename(zip_cover_path)
image = open(tmp_cover_name, 'wb')
image.write(cf)
image.close()
return tmp_cover_name
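Both sides of the epub diff start by reading META-INF/container.xml from the zip and resolving the rootfile (the OPF package document) with an xpath query; the master branch wraps this step in get_content_opf(). A standalone sketch of that lookup with zipfile and lxml; the epub path in the usage note is an example:

# Sketch: locate and parse the OPF package document inside an epub.
import os
import zipfile
from lxml import etree

ns = {'n': 'urn:oasis:names:tc:opendocument:xmlns:container',
      'pkg': 'http://www.idpf.org/2007/opf'}

def read_content_opf(epub_path):
    epub_zip = zipfile.ZipFile(epub_path)
    container = etree.fromstring(epub_zip.read('META-INF/container.xml'))
    cf_name = container.xpath('n:rootfiles/n:rootfile/@full-path', namespaces=ns)[0]
    tree = etree.fromstring(epub_zip.read(cf_name))
    return tree, os.path.dirname(cf_name)     # OPF tree plus the base path used for covers

# Usage: tree, cover_path = read_content_opf("example.epub")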
def get_epub_info(tmp_file_path, original_file_name, original_file_extension):
@ -71,38 +46,36 @@ def get_epub_info(tmp_file_path, original_file_name, original_file_extension):
'dc': 'http://purl.org/dc/elements/1.1/'
}
tree, cf_name = get_content_opf(tmp_file_path, ns)
epub_zip = zipfile.ZipFile(tmp_file_path)
cover_path = os.path.dirname(cf_name)
txt = epub_zip.read('META-INF/container.xml')
tree = etree.fromstring(txt)
cfname = tree.xpath('n:rootfiles/n:rootfile/@full-path', namespaces=ns)[0]
cf = epub_zip.read(cfname)
tree = etree.fromstring(cf)
coverpath = os.path.dirname(cfname)
p = tree.xpath('/pkg:package/pkg:metadata', namespaces=ns)[0]
epub_metadata = {}
for s in ['title', 'description', 'creator', 'language', 'subject', 'publisher', 'date']:
for s in ['title', 'description', 'creator', 'language', 'subject']:
tmp = p.xpath('dc:%s/text()' % s, namespaces=ns)
if len(tmp) > 0:
if s == 'creator':
epub_metadata[s] = ' & '.join(split_authors(tmp))
elif s == 'subject':
epub_metadata[s] = ', '.join(tmp)
elif s == 'date':
epub_metadata[s] = tmp[0][:10]
else:
epub_metadata[s] = tmp[0].strip()
epub_metadata[s] = tmp[0]
else:
epub_metadata[s] = 'Unknown'
epub_metadata[s] = u'Unknown'
if epub_metadata['subject'] == 'Unknown':
if epub_metadata['subject'] == u'Unknown':
epub_metadata['subject'] = ''
if epub_metadata['publisher'] == 'Unknown':
epub_metadata['publisher'] = ''
if epub_metadata['date'] == 'Unknown':
epub_metadata['date'] = ''
if epub_metadata['description'] == 'Unknown':
if epub_metadata['description'] == u'Unknown':
description = tree.xpath("//*[local-name() = 'description']/text()")
if len(description) > 0:
epub_metadata['description'] = description
@ -114,19 +87,7 @@ def get_epub_info(tmp_file_path, original_file_name, original_file_extension):
epub_metadata = parse_epub_series(ns, tree, epub_metadata)
epub_zip = zipfile.ZipFile(tmp_file_path)
cover_file = parse_epub_cover(ns, tree, epub_zip, cover_path, tmp_file_path)
identifiers = []
for node in p.xpath('dc:identifier', namespaces=ns):
try:
identifier_name = node.attrib.values()[-1]
except IndexError:
continue
identifier_value = node.text
if identifier_name in ('uuid', 'calibre') or identifier_value is None:
continue
identifiers.append([identifier_name, identifier_value])
cover_file = parse_epub_cover(ns, tree, epub_zip, coverpath, tmp_file_path)
if not epub_metadata['title']:
title = original_file_name
@ -144,47 +105,41 @@ def get_epub_info(tmp_file_path, original_file_name, original_file_extension):
series=epub_metadata['series'].encode('utf-8').decode('utf-8'),
series_id=epub_metadata['series_id'].encode('utf-8').decode('utf-8'),
languages=epub_metadata['language'],
publisher=epub_metadata['publisher'].encode('utf-8').decode('utf-8'),
pubdate=epub_metadata['date'],
identifiers=identifiers)
publisher="")
def parse_epub_cover(ns, tree, epub_zip, cover_path, tmp_file_path):
cover_section = tree.xpath("/pkg:package/pkg:manifest/pkg:item[@id='cover-image']/@href", namespaces=ns)
for cs in cover_section:
cover_file = _extract_cover(epub_zip, cs, cover_path, tmp_file_path)
if cover_file:
return cover_file
meta_cover = tree.xpath("/pkg:package/pkg:metadata/pkg:meta[@name='cover']/@content", namespaces=ns)
if len(meta_cover) > 0:
cover_section = tree.xpath(
"/pkg:package/pkg:manifest/pkg:item[@id='"+meta_cover[0]+"']/@href", namespaces=ns)
if not cover_section:
cover_section = tree.xpath(
"/pkg:package/pkg:manifest/pkg:item[@properties='" + meta_cover[0] + "']/@href", namespaces=ns)
else:
cover_section = tree.xpath("/pkg:package/pkg:guide/pkg:reference/@href", namespaces=ns)
cover_file = None
for cs in cover_section:
if cs.endswith('.xhtml') or cs.endswith('.html'):
markup = epub_zip.read(os.path.join(cover_path, cs))
markup_tree = etree.fromstring(markup)
# matches <img> whether the markup is xhtml or plain html without a namespace
img_src = markup_tree.xpath("//*[local-name() = 'img']/@src")
# Alternative image source
if not len(img_src):
img_src = markup_tree.xpath("//attribute::*[contains(local-name(), 'href')]")
if len(img_src):
# img_src may start with "../", so join to a full path and then take the path relative to cwd
filename = os.path.relpath(os.path.join(os.path.dirname(os.path.join(cover_path, cover_section[0])),
img_src[0]))
cover_file = _extract_cover(epub_zip, filename, "", tmp_file_path)
if len(cover_section) > 0:
cover_file = extract_cover(epub_zip, cover_section[0], cover_path, tmp_file_path)
else:
meta_cover = tree.xpath("/pkg:package/pkg:metadata/pkg:meta[@name='cover']/@content", namespaces=ns)
if len(meta_cover) > 0:
cover_section = tree.xpath(
"/pkg:package/pkg:manifest/pkg:item[@id='"+meta_cover[0]+"']/@href", namespaces=ns)
if not cover_section:
cover_section = tree.xpath(
"/pkg:package/pkg:manifest/pkg:item[@properties='" + meta_cover[0] + "']/@href", namespaces=ns)
else:
cover_file = _extract_cover(epub_zip, cs, cover_path, tmp_file_path)
if cover_file:
break
cover_section = tree.xpath("/pkg:package/pkg:guide/pkg:reference/@href", namespaces=ns)
if len(cover_section) > 0:
filetype = cover_section[0].rsplit('.', 1)[-1]
if filetype == "xhtml" or filetype == "html": # if cover is (x)html format
markup = epub_zip.read(os.path.join(cover_path, cover_section[0]))
markup_tree = etree.fromstring(markup)
# matches <img> whether the markup is xhtml or plain html without a namespace
img_src = markup_tree.xpath("//*[local-name() = 'img']/@src")
# Alternative image source
if not len(img_src):
img_src = markup_tree.xpath("//attribute::*[contains(local-name(), 'href')]")
if len(img_src):
# img_src may start with "../", so join to a full path and then take the path relative to cwd
filename = os.path.relpath(os.path.join(os.path.dirname(os.path.join(cover_path, cover_section[0])),
img_src[0]))
cover_file = extract_cover(epub_zip, filename, "", tmp_file_path)
else:
cover_file = extract_cover(epub_zip, cover_section[0], cover_path, tmp_file_path)
return cover_file

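A minimal, self-contained sketch of the container.xml/OPF lookup that the epub parsing code above is built on; "book.epub" is a placeholder path.

import zipfile
from lxml import etree

ns = {
    'n': 'urn:oasis:names:tc:opendocument:xmlns:container',
    'pkg': 'http://www.idpf.org/2007/opf',
    'dc': 'http://purl.org/dc/elements/1.1/',
}

with zipfile.ZipFile("book.epub") as epub_zip:
    # container.xml names the content OPF file
    container = etree.fromstring(epub_zip.read('META-INF/container.xml'))
    cf_name = container.xpath('n:rootfiles/n:rootfile/@full-path', namespaces=ns)[0]
    tree = etree.fromstring(epub_zip.read(cf_name))

# dc: elements live below /package/metadata
metadata = tree.xpath('/pkg:package/pkg:metadata', namespaces=ns)[0]
title = metadata.xpath('dc:title/text()', namespaces=ns)
print(title[0].strip() if title else 'Unknown')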
@ -1,166 +0,0 @@
# -*- coding: utf-8 -*-
# This file is part of the Calibre-Web (https://github.com/janeczku/calibre-web)
# Copyright (C) 2018 lemmsh, Kennyl, Kyosfonica, matthazinski
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
import zipfile
from lxml import etree
from . import isoLanguages
default_ns = {
'n': 'urn:oasis:names:tc:opendocument:xmlns:container',
'pkg': 'http://www.idpf.org/2007/opf',
}
OPF_NAMESPACE = "http://www.idpf.org/2007/opf"
PURL_NAMESPACE = "http://purl.org/dc/elements/1.1/"
OPF = "{%s}" % OPF_NAMESPACE
PURL = "{%s}" % PURL_NAMESPACE
etree.register_namespace("opf", OPF_NAMESPACE)
etree.register_namespace("dc", PURL_NAMESPACE)
OPF_NS = {None: OPF_NAMESPACE} # the default namespace (no prefix)
NSMAP = {'dc': PURL_NAMESPACE, 'opf': OPF_NAMESPACE}
def updateEpub(src, dest, filename, data, ):
# create a temp copy of the archive without filename
with zipfile.ZipFile(src, 'r') as zin:
with zipfile.ZipFile(dest, 'w') as zout:
zout.comment = zin.comment # preserve the comment
for item in zin.infolist():
if item.filename != filename:
zout.writestr(item, zin.read(item.filename))
# now add filename with its new data
with zipfile.ZipFile(dest, mode='a', compression=zipfile.ZIP_DEFLATED) as zf:
zf.writestr(filename, data)
def get_content_opf(file_path, ns=default_ns):
epubZip = zipfile.ZipFile(file_path)
txt = epubZip.read('META-INF/container.xml')
tree = etree.fromstring(txt)
cf_name = tree.xpath('n:rootfiles/n:rootfile/@full-path', namespaces=ns)[0]
cf = epubZip.read(cf_name)
return etree.fromstring(cf), cf_name
def create_new_metadata_backup(book, custom_columns, export_language, translated_cover_name, lang_type=3):
# generate root package element
package = etree.Element(OPF + "package", nsmap=OPF_NS)
package.set("unique-identifier", "uuid_id")
package.set("version", "2.0")
# generate metadata element and all sub elements of it
metadata = etree.SubElement(package, "metadata", nsmap=NSMAP)
identifier = etree.SubElement(metadata, PURL + "identifier", id="calibre_id", nsmap=NSMAP)
identifier.set(OPF + "scheme", "calibre")
identifier.text = str(book.id)
identifier2 = etree.SubElement(metadata, PURL + "identifier", id="uuid_id", nsmap=NSMAP)
identifier2.set(OPF + "scheme", "uuid")
identifier2.text = book.uuid
for i in book.identifiers:
identifier = etree.SubElement(metadata, PURL + "identifier", nsmap=NSMAP)
identifier.set(OPF + "scheme", i.format_type())
identifier.text = str(i.val)
title = etree.SubElement(metadata, PURL + "title", nsmap=NSMAP)
title.text = book.title
for author in book.authors:
creator = etree.SubElement(metadata, PURL + "creator", nsmap=NSMAP)
creator.text = str(author.name)
creator.set(OPF + "file-as", book.author_sort) # ToDo Check
creator.set(OPF + "role", "aut")
contributor = etree.SubElement(metadata, PURL + "contributor", nsmap=NSMAP)
contributor.text = "calibre (5.7.2) [https://calibre-ebook.com]"
contributor.set(OPF + "file-as", "calibre") # ToDo Check
contributor.set(OPF + "role", "bkp")
date = etree.SubElement(metadata, PURL + "date", nsmap=NSMAP)
date.text = '{d.year:04}-{d.month:02}-{d.day:02}T{d.hour:02}:{d.minute:02}:{d.second:02}'.format(d=book.pubdate)
if book.comments and book.comments[0].text:
for b in book.comments:
description = etree.SubElement(metadata, PURL + "description", nsmap=NSMAP)
description.text = b.text
for b in book.publishers:
publisher = etree.SubElement(metadata, PURL + "publisher", nsmap=NSMAP)
publisher.text = str(b.name)
if not book.languages:
language = etree.SubElement(metadata, PURL + "language", nsmap=NSMAP)
language.text = export_language
else:
for b in book.languages:
language = etree.SubElement(metadata, PURL + "language", nsmap=NSMAP)
language.text = str(b.lang_code) if lang_type == 3 else isoLanguages.get(part3=b.lang_code).part1
for b in book.tags:
subject = etree.SubElement(metadata, PURL + "subject", nsmap=NSMAP)
subject.text = str(b.name)
etree.SubElement(metadata, "meta", name="calibre:author_link_map",
content="{" + ", ".join(['"' + str(a.name) + '": ""' for a in book.authors]) + "}",
nsmap=NSMAP)
for b in book.series:
etree.SubElement(metadata, "meta", name="calibre:series",
content=str(str(b.name)),
nsmap=NSMAP)
if book.series:
etree.SubElement(metadata, "meta", name="calibre:series_index",
content=str(book.series_index),
nsmap=NSMAP)
if len(book.ratings) and book.ratings[0].rating > 0:
etree.SubElement(metadata, "meta", name="calibre:rating",
content=str(book.ratings[0].rating),
nsmap=NSMAP)
etree.SubElement(metadata, "meta", name="calibre:timestamp",
content='{d.year:04}-{d.month:02}-{d.day:02}T{d.hour:02}:{d.minute:02}:{d.second:02}'.format(
d=book.timestamp),
nsmap=NSMAP)
etree.SubElement(metadata, "meta", name="calibre:title_sort",
content=book.sort,
nsmap=NSMAP)
sequence = 0
for cc in custom_columns:
value = None
extra = None
cc_entry = getattr(book, "custom_column_" + str(cc.id))
if cc_entry.__len__():
value = [c.value for c in cc_entry] if cc.is_multiple else cc_entry[0].value
extra = cc_entry[0].extra if hasattr(cc_entry[0], "extra") else None
etree.SubElement(metadata, "meta", name="calibre:user_metadata:#{}".format(cc.label),
content=cc.to_json(value, extra, sequence),
nsmap=NSMAP)
sequence += 1
# generate guide element and all sub elements of it
# Title is translated from default export language
guide = etree.SubElement(package, "guide")
etree.SubElement(guide, "reference", type="cover", title=translated_cover_name, href="cover.jpg")
return package
def replace_metadata(tree, package):
rep_element = tree.xpath('/pkg:package/pkg:metadata', namespaces=default_ns)[0]
new_element = package.xpath('//metadata', namespaces=default_ns)[0]
tree.replace(rep_element, new_element)
return etree.tostring(tree,
xml_declaration=True,
encoding='utf-8',
pretty_print=True).decode('utf-8')

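A hedged sketch of the pattern updateEpub() above uses to swap one member of an epub archive: copy every other entry to a new zip, then append the replacement data. File names here are placeholders.

import zipfile

def replace_zip_member(src, dest, member, data):
    # copy everything except `member`, preserving the archive comment
    with zipfile.ZipFile(src, 'r') as zin, zipfile.ZipFile(dest, 'w') as zout:
        zout.comment = zin.comment
        for item in zin.infolist():
            if item.filename != member:
                zout.writestr(item, zin.read(item.filename))
    # append the new content for `member`
    with zipfile.ZipFile(dest, 'a', compression=zipfile.ZIP_DEFLATED) as zf:
        zf.writestr(member, data)

replace_zip_member("book.epub", "book.updated.epub", "content.opf", b"<package/>")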
@ -17,7 +17,6 @@
# along with this program. If not, see <http://www.gnu.org/licenses/>.
import traceback
from flask import render_template
from werkzeug.exceptions import default_exceptions
try:
@ -43,9 +42,8 @@ def error_http(error):
def internal_error(error):
return render_template('http_error.html',
error_code="500 Internal Server Error",
error_name='The server encountered an internal error and was unable to complete your '
'request. There is an error in the application.',
error_code="Internal Server Error",
error_name=str(error),
issue=True,
unconfigured=False,
error_stack=traceback.format_exc().split("\n"),

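Illustration only (not taken from the diff): one common way handlers like internal_error() above get wired into a Flask app, using werkzeug's default_exceptions map; the handler body here is a placeholder.

from flask import Flask
from werkzeug.exceptions import default_exceptions

app = Flask(__name__)

def error_http(error):
    # placeholder body; the real handler renders http_error.html
    return "{} {}".format(error.code, error.name), error.code

for code in default_exceptions:
    app.register_error_handler(code, error_http)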
@ -38,19 +38,19 @@ def get_fb2_info(tmp_file_path, original_file_extension):
if len(last_name):
last_name = last_name[0]
else:
last_name = ''
last_name = u''
middle_name = element.xpath('fb:middle-name/text()', namespaces=ns)
if len(middle_name):
middle_name = middle_name[0]
else:
middle_name = ''
middle_name = u''
first_name = element.xpath('fb:first-name/text()', namespaces=ns)
if len(first_name):
first_name = first_name[0]
else:
first_name = ''
return (first_name + ' '
+ middle_name + ' '
first_name = u''
return (first_name + u' '
+ middle_name + u' '
+ last_name)
author = str(", ".join(map(get_author, authors)))
@ -59,12 +59,12 @@ def get_fb2_info(tmp_file_path, original_file_extension):
if len(title):
title = str(title[0])
else:
title = ''
title = u''
description = tree.xpath('/fb:FictionBook/fb:description/fb:publish-info/fb:book-name/text()', namespaces=ns)
if len(description):
description = str(description[0])
else:
description = ''
description = u''
return BookMeta(
file_path=tmp_file_path,
@ -77,6 +77,4 @@ def get_fb2_info(tmp_file_path, original_file_extension):
series="",
series_id="",
languages="",
publisher="",
pubdate="",
identifiers=[])
publisher="")

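A small standalone sketch of the namespaced xpath lookups get_fb2_info() relies on; the FictionBook namespace URI below is a placeholder, since the real mapping is defined elsewhere in the module.

from lxml import etree

ns = {'fb': 'http://example.org/fictionbook'}   # placeholder namespace URI
tree = etree.parse("book.fb2")                  # placeholder file name
book_name = tree.xpath('/fb:FictionBook/fb:description/fb:publish-info/fb:book-name/text()',
                       namespaces=ns)
print(str(book_name[0]) if book_name else '')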
@ -1,32 +0,0 @@
# -*- coding: utf-8 -*-
# This file is part of the Calibre-Web (https://github.com/janeczku/calibre-web)
# Copyright (C) 2023 OzzieIsaacs
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
from tempfile import gettempdir
import os
import shutil
def get_temp_dir():
tmp_dir = os.path.join(gettempdir(), 'calibre_web')
if not os.path.isdir(tmp_dir):
os.mkdir(tmp_dir)
return tmp_dir
def del_temp_dir():
tmp_dir = os.path.join(gettempdir(), 'calibre_web')
shutil.rmtree(tmp_dir)

@ -1,95 +0,0 @@
# -*- coding: utf-8 -*-
# This file is part of the Calibre-Web (https://github.com/janeczku/calibre-web)
# Copyright (C) 2020 mmonkey
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
from . import logger
from .constants import CACHE_DIR
from os import makedirs, remove
from os.path import isdir, isfile, join
from shutil import rmtree
class FileSystem:
_instance = None
_cache_dir = CACHE_DIR
def __new__(cls):
if cls._instance is None:
cls._instance = super(FileSystem, cls).__new__(cls)
cls.log = logger.create()
return cls._instance
def get_cache_dir(self, cache_type=None):
if not isdir(self._cache_dir):
try:
makedirs(self._cache_dir)
except OSError:
self.log.info(f'Failed to create path {self._cache_dir} (Permission denied).')
raise
path = join(self._cache_dir, cache_type)
if cache_type and not isdir(path):
try:
makedirs(path)
except OSError:
self.log.info(f'Failed to create path {path} (Permission denied).')
raise
return path if cache_type else self._cache_dir
def get_cache_file_dir(self, filename, cache_type=None):
path = join(self.get_cache_dir(cache_type), filename[:2])
if not isdir(path):
try:
makedirs(path)
except OSError:
self.log.info(f'Failed to create path {path} (Permission denied).')
raise
return path
def get_cache_file_path(self, filename, cache_type=None):
return join(self.get_cache_file_dir(filename, cache_type), filename) if filename else None
def get_cache_file_exists(self, filename, cache_type=None):
path = self.get_cache_file_path(filename, cache_type)
return isfile(path)
def delete_cache_dir(self, cache_type=None):
if not cache_type and isdir(self._cache_dir):
try:
rmtree(self._cache_dir)
except OSError:
self.log.info(f'Failed to delete path {self._cache_dir} (Permission denied).')
raise
path = join(self._cache_dir, cache_type)
if cache_type and isdir(path):
try:
rmtree(path)
except OSError:
self.log.info(f'Failed to delete path {path} (Permission denied).')
raise
def delete_cache_file(self, filename, cache_type=None):
path = self.get_cache_file_path(filename, cache_type)
if isfile(path):
try:
remove(path)
except OSError:
self.log.info(f'Failed to delete path {path} (Permission denied).')
raise

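Hypothetical usage of the FileSystem cache helper above; the import path, cache type and file name are assumptions, not taken from the diff.

# from cps.file_system import FileSystem      # assumed import path
fs = FileSystem()
name = 'ab12cd34.jpg'                          # placeholder cache file name
if not fs.get_cache_file_exists(name, cache_type='thumbnails'):
    path = fs.get_cache_file_path(name, cache_type='thumbnails')
    with open(path, 'wb') as handle:           # write the generated thumbnail here
        handle.write(b'')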
@ -23,6 +23,7 @@
import os
import hashlib
import json
import tempfile
from uuid import uuid4
from time import time
from shutil import move, copyfile
@ -33,7 +34,6 @@ from flask_login import login_required
from . import logger, gdriveutils, config, ub, calibre_db, csrf
from .admin import admin_required
from .file_helper import get_temp_dir
gdrive = Blueprint('gdrive', __name__, url_prefix='/gdrive')
log = logger.create()
@ -55,7 +55,7 @@ def authenticate_google_drive():
try:
authUrl = gdriveutils.Gauth.Instance().auth.GetAuthUrl()
except gdriveutils.InvalidConfigError:
flash(_('Google Drive setup not completed, try to deactivate and activate Google Drive again'),
flash(_(u'Google Drive setup not completed, try to deactivate and activate Google Drive again'),
category="error")
return redirect(url_for('web.index'))
return redirect(authUrl)
@ -91,9 +91,9 @@ def watch_gdrive():
config.save()
except HttpError as e:
reason=json.loads(e.content)['error']['errors'][0]
if reason['reason'] == 'push.webhookUrlUnauthorized':
flash(_('Callback domain is not verified, '
'please follow steps to verify domain in google developer console'), category="error")
if reason['reason'] == u'push.webhookUrlUnauthorized':
flash(_(u'Callback domain is not verified, '
u'please follow steps to verify domain in google developer console'), category="error")
else:
flash(reason['message'], category="error")
@ -139,7 +139,9 @@ try:
dbpath = os.path.join(config.config_calibre_dir, "metadata.db").encode()
if not response['deleted'] and response['file']['title'] == 'metadata.db' \
and response['file']['md5Checksum'] != hashlib.md5(dbpath): # nosec
tmp_dir = get_temp_dir()
tmp_dir = os.path.join(tempfile.gettempdir(), 'calibre_web')
if not os.path.isdir(tmp_dir):
os.mkdir(tmp_dir)
log.info('Database file updated')
copyfile(dbpath, os.path.join(tmp_dir, "metadata.db_" + str(current_milli_time())))
@ -150,7 +152,7 @@ try:
move(os.path.join(tmp_dir, "tmp_metadata.db"), dbpath)
calibre_db.reconnect_db(config, ub.app_DB_path)
except Exception as ex:
log.error_or_exception(ex)
log.debug_or_exception(ex)
return ''
except AttributeError:
pass

@ -32,9 +32,13 @@ try:
from sqlalchemy.orm import declarative_base
except ImportError:
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.exc import OperationalError, InvalidRequestError, IntegrityError
from sqlalchemy.orm.exc import StaleDataError
from sqlalchemy.exc import OperationalError, InvalidRequestError
from sqlalchemy.sql.expression import text
#try:
# from six import __version__ as six_version
#except ImportError:
# six_version = "not installed"
try:
from httplib2 import __version__ as httplib2_version
except ImportError:
@ -63,7 +67,7 @@ except ImportError as err:
importError = err
gdrive_support = False
from . import logger, cli_param, config
from . import logger, cli, config
from .constants import CONFIG_DIR as _CONFIG_DIR
@ -77,7 +81,7 @@ if gdrive_support:
if not logger.is_debug_enabled():
logger.get('googleapiclient.discovery').setLevel(logger.logging.ERROR)
else:
log.debug("Cannot import pydrive, httplib2, using gdrive will not work: {}".format(importError))
log.debug("Cannot import pydrive,httplib2, using gdrive will not work: %s", importError)
class Singleton:
@ -137,16 +141,15 @@ class Gdrive:
def __init__(self):
self.drive = getDrive(gauth=Gauth.Instance().auth)
def is_gdrive_ready():
return os.path.exists(SETTINGS_YAML) and os.path.exists(CREDENTIALS)
engine = create_engine('sqlite:///{0}'.format(cli_param.gd_path), echo=False)
engine = create_engine('sqlite:///{0}'.format(cli.gdpath), echo=False)
Base = declarative_base()
# Open session for database connection
Session = sessionmaker(autoflush=False)
Session = sessionmaker()
Session.configure(bind=engine)
session = scoped_session(Session)
@ -173,12 +176,30 @@ class PermissionAdded(Base):
return str(self.gdrive_id)
if not os.path.exists(cli_param.gd_path):
def migrate():
if not engine.dialect.has_table(engine.connect(), "permissions_added"):
PermissionAdded.__table__.create(bind = engine)
for sql in session.execute(text("select sql from sqlite_master where type='table'")):
if 'CREATE TABLE gdrive_ids' in sql[0]:
currUniqueConstraint = 'UNIQUE (gdrive_id)'
if currUniqueConstraint in sql[0]:
sql=sql[0].replace(currUniqueConstraint, 'UNIQUE (gdrive_id, path)')
sql=sql.replace(GdriveId.__tablename__, GdriveId.__tablename__ + '2')
session.execute(sql)
session.execute("INSERT INTO gdrive_ids2 (id, gdrive_id, path) SELECT id, "
"gdrive_id, path FROM gdrive_ids;")
session.commit()
session.execute('DROP TABLE %s' % 'gdrive_ids')
session.execute('ALTER TABLE gdrive_ids2 RENAME to gdrive_ids')
break
if not os.path.exists(cli.gdpath):
try:
Base.metadata.create_all(engine)
except Exception as ex:
log.error("Error connect to database: {} - {}".format(cli_param.gd_path, ex))
log.error("Error connect to database: {} - {}".format(cli.gdpath, ex))
raise
migrate()
def getDrive(drive=None, gauth=None):
@ -192,9 +213,9 @@ def getDrive(drive=None, gauth=None):
try:
gauth.Refresh()
except RefreshError as e:
log.error("Google Drive error: {}".format(e))
log.error("Google Drive error: %s", e)
except Exception as ex:
log.error_or_exception(ex)
log.debug_or_exception(ex)
else:
# Initialize the saved creds
gauth.Authorize()
@ -204,7 +225,7 @@ def getDrive(drive=None, gauth=None):
try:
drive.auth.Refresh()
except RefreshError as e:
log.error("Google Drive error: {}".format(e))
log.error("Google Drive error: %s", e)
return drive
def listRootFolders():
@ -213,7 +234,7 @@ def listRootFolders():
folder = "'root' in parents and mimeType = 'application/vnd.google-apps.folder' and trashed = false"
fileList = drive.ListFile({'q': folder}).GetList()
except (ServerNotFoundError, ssl.SSLError, RefreshError) as e:
log.info("GDrive Error {}".format(e))
log.info("GDrive Error %s" % e)
fileList = []
return fileList
@ -251,7 +272,8 @@ def getEbooksFolderId(drive=None):
try:
session.commit()
except OperationalError as ex:
log.error_or_exception('Database error: {}'.format(ex))
log.error("gdrive.db DB is not Writeable")
log.debug('Database error: %s', ex)
session.rollback()
return gDriveId.gdrive_id
@ -267,7 +289,6 @@ def getFile(pathId, fileName, drive):
def getFolderId(path, drive):
# drive = getDrive(drive)
currentFolderId = None
try:
currentFolderId = getEbooksFolderId(drive)
sqlCheckPath = path if path[-1] == '/' else path + '/'
@ -300,8 +321,9 @@ def getFolderId(path, drive):
session.commit()
else:
currentFolderId = storedPathName.gdrive_id
except (OperationalError, IntegrityError, StaleDataError) as ex:
log.error_or_exception('Database error: {}'.format(ex))
except OperationalError as ex:
log.error("gdrive.db DB is not Writeable")
log.debug('Database error: %s', ex)
session.rollback()
except ApiRequestError as ex:
log.error('{} {}'.format(ex.error['message'], path))
@ -325,7 +347,7 @@ def getFileFromEbooksFolder(path, fileName):
def moveGdriveFileRemote(origin_file_id, new_title):
origin_file_id['title'] = new_title
origin_file_id['title']= new_title
origin_file_id.Upload()
@ -403,7 +425,7 @@ def copyToDrive(drive, uploadFile, createRoot, replaceFiles,
driveFile.Upload()
def uploadFileToEbooksFolder(destFile, f, string=False):
def uploadFileToEbooksFolder(destFile, f):
drive = getDrive(Gdrive.Instance().drive)
parent = getEbooksFolder(drive)
splitDir = destFile.split('/')
@ -416,10 +438,7 @@ def uploadFileToEbooksFolder(destFile, f, string=False):
else:
driveFile = drive.CreateFile({'title': x,
'parents': [{"kind": "drive#fileLink", 'id': parent['id']}], })
if not string:
driveFile.SetContentFile(f)
else:
driveFile.SetContentString(f)
driveFile.SetContentFile(f)
driveFile.Upload()
else:
existing_Folder = drive.ListFile({'q': "title = '%s' and '%s' in parents and trashed = false" %
@ -528,8 +547,8 @@ def deleteDatabaseOnChange():
session.commit()
except (OperationalError, InvalidRequestError) as ex:
session.rollback()
log.error_or_exception('Database error: {}'.format(ex))
session.rollback()
log.debug('Database error: %s', ex)
log.error(u"GDrive DB is not Writeable")
def updateGdriveCalibreFromLocal():
@ -540,14 +559,15 @@ def updateGdriveCalibreFromLocal():
# update gdrive.db on edit of books title
def updateDatabaseOnEdit(ID,newPath):
sqlCheckPath = newPath if newPath[-1] == '/' else newPath + '/'
sqlCheckPath = newPath if newPath[-1] == '/' else newPath + u'/'
storedPathName = session.query(GdriveId).filter(GdriveId.gdrive_id == ID).first()
if storedPathName:
storedPathName.path = sqlCheckPath
try:
session.commit()
except OperationalError as ex:
log.error_or_exception('Database error: {}'.format(ex))
log.error("gdrive.db DB is not Writeable")
log.debug('Database error: %s', ex)
session.rollback()
@ -557,12 +577,12 @@ def deleteDatabaseEntry(ID):
try:
session.commit()
except OperationalError as ex:
log.error_or_exception('Database error: {}'.format(ex))
log.error("gdrive.db DB is not Writeable")
log.debug('Database error: %s', ex)
session.rollback()
# Gets cover file from gdrive
# ToDo: Check whether it is right that everyone gets read permission on cover files
def get_cover_via_gdrive(cover_path):
df = getFileFromEbooksFolder(cover_path, 'cover.jpg')
if df:
@ -579,30 +599,8 @@ def get_cover_via_gdrive(cover_path):
try:
session.commit()
except OperationalError as ex:
log.error_or_exception('Database error: {}'.format(ex))
session.rollback()
return df.metadata.get('webContentLink')
else:
return None
# Gets metadata backup file from gdrive
def get_metadata_backup_via_gdrive(metadata_path):
df = getFileFromEbooksFolder(metadata_path, 'metadata.opf')
if df:
if not session.query(PermissionAdded).filter(PermissionAdded.gdrive_id == df['id']).first():
df.GetPermissions()
df.InsertPermission({
'type': 'anyone',
'value': 'anyone',
'role': 'writer', # ToDo needs write access
'withLink': True})
permissionAdded = PermissionAdded()
permissionAdded.gdrive_id = df['id']
session.add(permissionAdded)
try:
session.commit()
except OperationalError as ex:
log.error_or_exception('Database error: {}'.format(ex))
log.error("gdrive.db DB is not Writeable")
log.debug('Database error: %s', ex)
session.rollback()
return df.metadata.get('webContentLink')
else:
@ -624,7 +622,7 @@ def do_gdrive_download(df, headers, convert_encoding=False):
def stream(convert_encoding):
for byte in s:
headers = {"Range": 'bytes={}-{}'.format(byte[0], byte[1])}
headers = {"Range": 'bytes=%s-%s' % (byte[0], byte[1])}
resp, content = df.auth.Get_Http_Object().request(download_url, headers=headers)
if resp.status == 206:
if convert_encoding:
@ -632,7 +630,7 @@ def do_gdrive_download(df, headers, convert_encoding=False):
content = content.decode(result['encoding']).encode('utf-8')
yield content
else:
log.warning('An error occurred: {}'.format(resp))
log.warning('An error occurred: %s', resp)
return
return Response(stream_with_context(stream(convert_encoding)), headers=headers)
@ -689,3 +687,8 @@ def get_error_text(client_secrets=None):
return 'Callback url (redirect url) is missing in client_secrets.json'
if client_secrets:
client_secrets.update(filedata['web'])
def get_versions():
return { # 'six': six_version,
'httplib2': httplib2_version}

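A self-contained sketch of the SQLAlchemy setup pattern used for gdrive.db above (engine, declarative base, scoped session); the database path is a placeholder.

from sqlalchemy import create_engine
from sqlalchemy.orm import scoped_session, sessionmaker
try:
    from sqlalchemy.orm import declarative_base
except ImportError:
    from sqlalchemy.ext.declarative import declarative_base

engine = create_engine('sqlite:///gdrive.db', echo=False)   # placeholder path
Base = declarative_base()
Session = sessionmaker(autoflush=False)
Session.configure(bind=engine)
session = scoped_session(Session)
Base.metadata.create_all(engine)                            # create tables on first run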
@ -1,29 +0,0 @@
# -*- coding: utf-8 -*-
# This file is part of the Calibre-Web (https://github.com/janeczku/calibre-web)
# Copyright (C) 2022 OzzieIsaacs
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
from gevent.pywsgi import WSGIHandler
class MyWSGIHandler(WSGIHandler):
def get_environ(self):
env = super().get_environ()
path, __ = self.path.split('?', 1) if '?' in self.path else (self.path, '')
env['RAW_URI'] = path
return env

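Illustration of how a handler subclass like MyWSGIHandler above can be passed to gevent's WSGIServer; the toy WSGI app and port are placeholders, and the import path for MyWSGIHandler is an assumption.

from gevent.pywsgi import WSGIServer
# from cps.gevent_wsgi import MyWSGIHandler   # assumed import path

def app(environ, start_response):
    # echo back the raw request path preserved by the custom handler
    start_response('200 OK', [('Content-Type', 'text/plain')])
    return [environ.get('RAW_URI', '').encode()]

# server = WSGIServer(('127.0.0.1', 8083), app, handler_class=MyWSGIHandler)
# server.serve_forever()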
File diff suppressed because it is too large

@ -49,7 +49,7 @@ except ImportError:
def get_language_names(locale):
return _LANGUAGE_NAMES.get(str(locale))
return _LANGUAGE_NAMES.get(locale)
def get_language_name(locale, lang_code):

File diff suppressed because it is too large

@ -22,17 +22,17 @@
# custom jinja filters
from markupsafe import escape
import datetime
import mimetypes
from uuid import uuid4
# from babel.dates import format_date
from babel.dates import format_date
from flask import Blueprint, request, url_for
from flask_babel import format_date
from flask_babel import get_locale
from flask_login import current_user
from markupsafe import escape
from . import logger
from . import constants, logger
jinjia = Blueprint('jinjia', __name__)
log = logger.create()
@ -77,7 +77,7 @@ def mimetype_filter(val):
@jinjia.app_template_filter('formatdate')
def formatdate_filter(val):
try:
return format_date(val, format='medium')
return format_date(val, format='medium', locale=get_locale())
except AttributeError as e:
log.error('Babel error: %s, Current user locale: %s, Current User: %s', e,
current_user.locale,
@ -124,59 +124,16 @@ def formatseriesindex_filter(series_index):
return int(series_index)
else:
return series_index
except (ValueError, TypeError):
except ValueError:
return series_index
return 0
@jinjia.app_template_filter('escapedlink')
def escapedlink_filter(url, text):
return "<a href='{}'>{}</a>".format(url, escape(text))
@jinjia.app_template_filter('uuidfilter')
def uuidfilter(var):
return uuid4()
@jinjia.app_template_filter('cache_timestamp')
def cache_timestamp(rolling_period='month'):
if rolling_period == 'day':
return str(int(datetime.datetime.today().replace(hour=1, minute=1).timestamp()))
elif rolling_period == 'year':
return str(int(datetime.datetime.today().replace(day=1).timestamp()))
else:
return str(int(datetime.datetime.today().replace(month=1, day=1).timestamp()))
@jinjia.app_template_filter('last_modified')
def book_last_modified(book):
return str(int(book.last_modified.timestamp()))
@jinjia.app_template_filter('get_cover_srcset')
def get_cover_srcset(book):
srcset = list()
resolutions = {
constants.COVER_THUMBNAIL_SMALL: 'sm',
constants.COVER_THUMBNAIL_MEDIUM: 'md',
constants.COVER_THUMBNAIL_LARGE: 'lg'
}
for resolution, shortname in resolutions.items():
url = url_for('web.get_cover', book_id=book.id, resolution=shortname, c=book_last_modified(book))
srcset.append(f'{url} {resolution}x')
return ', '.join(srcset)
@jinjia.app_template_filter('get_series_srcset')
def get_cover_srcset(series):
srcset = list()
resolutions = {
constants.COVER_THUMBNAIL_SMALL: 'sm',
constants.COVER_THUMBNAIL_MEDIUM: 'md',
constants.COVER_THUMBNAIL_LARGE: 'lg'
}
for resolution, shortname in resolutions.items():
url = url_for('web.get_series_cover', series_id=series.id, resolution=shortname, c=cache_timestamp())
srcset.append(f'{url} {resolution}x')
return ', '.join(srcset)

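A simplified sketch of the srcset string that get_cover_srcset() above assembles; the resolution constants and URL scheme are stand-ins for the real COVER_THUMBNAIL_* values and url_for() call.

RESOLUTIONS = {1: 'sm', 2: 'md', 3: 'lg'}       # stand-ins for COVER_THUMBNAIL_*

def build_cover_srcset(book_id, cache_buster):
    parts = []
    for resolution, shortname in RESOLUTIONS.items():
        url = "/cover/{}?resolution={}&c={}".format(book_id, shortname, cache_buster)  # placeholder URL scheme
        parts.append("{} {}x".format(url, resolution))
    return ', '.join(parts)

print(build_cover_srcset(42, "1704067200"))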
@ -21,7 +21,6 @@ import base64
import datetime
import os
import uuid
import zipfile
from time import gmtime, strftime
import json
from urllib.parse import unquote
@ -46,9 +45,7 @@ import requests
from . import config, logger, kobo_auth, db, calibre_db, helper, shelf as shelf_lib, ub, csrf, kobo_sync_status
from . import isoLanguages
from .epub import get_epub_layout
from .constants import COVER_THUMBNAIL_SMALL #, sqlalchemy_version2
from .constants import sqlalchemy_version2
from .helper import get_download_link
from .services import SyncToken as SyncToken
from .web import download_required
@ -56,7 +53,7 @@ from .kobo_auth import requires_kobo_auth, get_auth_token
KOBO_FORMATS = {"KEPUB": ["KEPUB"], "EPUB": ["EPUB3", "EPUB"]}
KOBO_STOREAPI_URL = "https://storeapi.kobo.com"
KOBO_IMAGEHOST_URL = "https://cdn.kobo.com/book-images"
KOBO_IMAGEHOST_URL = "https://kbimages1-a.akamaihd.net"
SYNC_ITEM_LIMIT = 100
@ -137,15 +134,11 @@ def convert_to_kobo_timestamp_string(timestamp):
@kobo.route("/v1/library/sync")
@requires_kobo_auth
# @download_required
@download_required
def HandleSyncRequest():
if not current_user.role_download():
log.info("Users need download permissions for syncing library to Kobo reader")
return abort(403)
sync_token = SyncToken.SyncToken.from_headers(request.headers)
log.info("Kobo library sync request received")
log.info("Kobo library sync request received.")
log.debug("SyncToken: {}".format(sync_token))
log.debug("Download link format {}".format(get_download_url_for_book('[bookid]','[bookformat]')))
if not current_app.wsgi_app.is_proxied:
log.debug('Kobo: Received unproxied request, changed request port to external server port')
@ -155,60 +148,75 @@ def HandleSyncRequest():
sync_token.books_last_created = datetime.datetime.min
sync_token.reading_state_last_modified = datetime.datetime.min
new_books_last_modified = sync_token.books_last_modified # needed for sync selected shelfs only
new_books_last_created = sync_token.books_last_created # needed to distinguish between new and changed entitlement
new_books_last_modified = sync_token.books_last_modified # needed for sync selected shelfs only
new_books_last_created = sync_token.books_last_created # needed to distinguish between new and changed entitlement
new_reading_state_last_modified = sync_token.reading_state_last_modified
new_archived_last_modified = datetime.datetime.min
sync_results = []
# We reload the book database so that the user gets a fresh view of the library
# We reload the book database so that the user get's a fresh view of the library
# in case of external changes (e.g: adding a book through Calibre).
calibre_db.reconnect_db(config, ub.app_DB_path)
only_kobo_shelves = current_user.kobo_only_shelves_sync
if only_kobo_shelves:
changed_entries = calibre_db.session.query(db.Books,
ub.ArchivedBook.last_modified,
ub.BookShelf.date_added,
ub.ArchivedBook.is_archived)
if sqlalchemy_version2:
changed_entries = select(db.Books,
ub.ArchivedBook.last_modified,
ub.BookShelf.date_added,
ub.ArchivedBook.is_archived)
else:
changed_entries = calibre_db.session.query(db.Books,
ub.ArchivedBook.last_modified,
ub.BookShelf.date_added,
ub.ArchivedBook.is_archived)
changed_entries = (changed_entries
.join(db.Data).outerjoin(ub.ArchivedBook, and_(db.Books.id == ub.ArchivedBook.book_id,
ub.ArchivedBook.user_id == current_user.id))
.filter(db.Books.id.notin_(calibre_db.session.query(ub.KoboSyncedBooks.book_id)
.filter(ub.KoboSyncedBooks.user_id == current_user.id)))
.filter(ub.BookShelf.date_added > sync_token.books_last_modified)
.filter(db.Data.format.in_(KOBO_FORMATS))
.filter(calibre_db.common_filters(allow_show_archived=True))
.order_by(db.Books.id)
.order_by(ub.ArchivedBook.last_modified)
.join(ub.BookShelf, db.Books.id == ub.BookShelf.book_id)
.join(ub.Shelf)
.filter(ub.Shelf.user_id == current_user.id)
.filter(ub.Shelf.kobo_sync)
.distinct())
.filter(ub.KoboSyncedBooks.user_id == current_user.id)))
.filter(ub.BookShelf.date_added > sync_token.books_last_modified)
.filter(db.Data.format.in_(KOBO_FORMATS))
.filter(calibre_db.common_filters(allow_show_archived=True))
.order_by(db.Books.id)
.order_by(ub.ArchivedBook.last_modified)
.join(ub.BookShelf, db.Books.id == ub.BookShelf.book_id)
.join(ub.Shelf)
.filter(ub.Shelf.user_id == current_user.id)
.filter(ub.Shelf.kobo_sync)
.distinct()
)
else:
changed_entries = calibre_db.session.query(db.Books,
ub.ArchivedBook.last_modified,
ub.ArchivedBook.is_archived)
if sqlalchemy_version2:
changed_entries = select(db.Books, ub.ArchivedBook.last_modified, ub.ArchivedBook.is_archived)
else:
changed_entries = calibre_db.session.query(db.Books,
ub.ArchivedBook.last_modified,
ub.ArchivedBook.is_archived)
changed_entries = (changed_entries
.join(db.Data).outerjoin(ub.ArchivedBook, and_(db.Books.id == ub.ArchivedBook.book_id,
ub.ArchivedBook.user_id == current_user.id))
.filter(db.Books.id.notin_(calibre_db.session.query(ub.KoboSyncedBooks.book_id)
.filter(ub.KoboSyncedBooks.user_id == current_user.id)))
.filter(calibre_db.common_filters(allow_show_archived=True))
.filter(db.Data.format.in_(KOBO_FORMATS))
.order_by(db.Books.last_modified)
.order_by(db.Books.id))
.join(db.Data).outerjoin(ub.ArchivedBook, and_(db.Books.id == ub.ArchivedBook.book_id,
ub.ArchivedBook.user_id == current_user.id))
.filter(db.Books.id.notin_(calibre_db.session.query(ub.KoboSyncedBooks.book_id)
.filter(ub.KoboSyncedBooks.user_id == current_user.id)))
.filter(calibre_db.common_filters(allow_show_archived=True))
.filter(db.Data.format.in_(KOBO_FORMATS))
.order_by(db.Books.last_modified)
.order_by(db.Books.id)
)
reading_states_in_new_entitlements = []
books = changed_entries.limit(SYNC_ITEM_LIMIT)
if sqlalchemy_version2:
books = calibre_db.session.execute(changed_entries.limit(SYNC_ITEM_LIMIT))
else:
books = changed_entries.limit(SYNC_ITEM_LIMIT)
log.debug("Books to Sync: {}".format(len(books.all())))
for book in books:
formats = [data.format for data in book.Books.data]
if 'KEPUB' not in formats and config.config_kepubifypath and 'EPUB' in formats:
helper.convert_book_format(book.Books.id, config.get_book_path(), 'EPUB', 'KEPUB', current_user.name)
if not 'KEPUB' in formats and config.config_kepubifypath and 'EPUB' in formats:
helper.convert_book_format(book.Books.id, config.config_calibre_dir, 'EPUB', 'KEPUB', current_user.name)
kobo_reading_state = get_or_create_reading_state(book.Books.id)
entitlement = {
@ -221,7 +229,7 @@ def HandleSyncRequest():
new_reading_state_last_modified = max(new_reading_state_last_modified, kobo_reading_state.last_modified)
reading_states_in_new_entitlements.append(book.Books.id)
ts_created = book.Books.timestamp.replace(tzinfo=None)
ts_created = book.Books.timestamp
try:
ts_created = max(ts_created, book.date_added)
@ -234,7 +242,7 @@ def HandleSyncRequest():
sync_results.append({"ChangedEntitlement": entitlement})
new_books_last_modified = max(
book.Books.last_modified.replace(tzinfo=None), new_books_last_modified
book.Books.last_modified, new_books_last_modified
)
try:
new_books_last_modified = max(
@ -246,16 +254,27 @@ def HandleSyncRequest():
new_books_last_created = max(ts_created, new_books_last_created)
kobo_sync_status.add_synced_books(book.Books.id)
max_change = changed_entries.filter(ub.ArchivedBook.is_archived)\
.filter(ub.ArchivedBook.user_id == current_user.id) \
.order_by(func.datetime(ub.ArchivedBook.last_modified).desc()).first()
if sqlalchemy_version2:
max_change = calibre_db.session.execute(changed_entries
.filter(ub.ArchivedBook.is_archived)
.filter(ub.ArchivedBook.user_id == current_user.id)
.order_by(func.datetime(ub.ArchivedBook.last_modified).desc()))\
.columns(db.Books).first()
else:
max_change = changed_entries.from_self().filter(ub.ArchivedBook.is_archived)\
.filter(ub.ArchivedBook.user_id==current_user.id) \
.order_by(func.datetime(ub.ArchivedBook.last_modified).desc()).first()
max_change = max_change.last_modified if max_change else new_archived_last_modified
new_archived_last_modified = max(new_archived_last_modified, max_change)
# no. of books returned
book_count = changed_entries.count()
if sqlalchemy_version2:
entries = calibre_db.session.execute(changed_entries).all()
book_count = len(entries)
else:
book_count = changed_entries.count()
# last entry:
cont_sync = bool(book_count)
log.debug("Remaining books to Sync: {}".format(book_count))
@ -318,7 +337,7 @@ def generate_sync_response(sync_token, sync_results, set_cont=False):
extra_headers["x-kobo-recent-reads"] = store_response.headers.get("x-kobo-recent-reads")
except Exception as ex:
log.error_or_exception("Failed to receive or parse response from Kobo's sync endpoint: {}".format(ex))
log.error("Failed to receive or parse response from Kobo's sync endpoint: {}".format(ex))
if set_cont:
extra_headers["x-kobo-sync"] = "continue"
sync_token.to_headers(extra_headers)
@ -339,7 +358,7 @@ def HandleMetadataRequest(book_uuid):
log.info("Kobo library metadata request received for book %s" % book_uuid)
book = calibre_db.get_book_by_uuid(book_uuid)
if not book or not book.data:
log.info("Book %s not found in database", book_uuid)
log.info(u"Book %s not found in database", book_uuid)
return redirect_or_proxy_request()
metadata = get_metadata(book)
@ -348,7 +367,7 @@ def HandleMetadataRequest(book_uuid):
return response
def get_download_url_for_book(book_id, book_format):
def get_download_url_for_book(book, book_format):
if not current_app.wsgi_app.is_proxied:
if ':' in request.host and not request.host.endswith(']'):
host = "".join(request.host.split(':')[:-1])
@ -360,13 +379,13 @@ def get_download_url_for_book(book_id, book_format):
url_base=host,
url_port=config.config_external_port,
auth_token=get_auth_token(),
book_id=book_id,
book_id=book.id,
book_format=book_format.lower()
)
return url_for(
"kobo.download_book",
auth_token=kobo_auth.get_auth_token(),
book_id=book_id,
book_id=book.id,
book_format=book_format.lower(),
_external=True,
)
@ -406,9 +425,9 @@ def get_author(book):
author_list = []
autor_roles = []
for author in book.authors:
autor_roles.append({"Name": author.name})
autor_roles.append({"Name":author.name}) #.encode('unicode-escape').decode('latin-1')
author_list.append(author.name)
return {"ContributorRoles": autor_roles, "Contributors": author_list}
return {"ContributorRoles": autor_roles, "Contributors":author_list}
def get_publisher(book):
@ -422,17 +441,10 @@ def get_series(book):
return None
return book.series[0].name
def get_seriesindex(book):
return book.series_index or 1
def get_language(book):
if not book.languages:
return 'en'
return isoLanguages.get(part3=book.languages[0].lang_code).part1
def get_metadata(book):
download_urls = []
kepub = [data for data in book.data if data.format == 'KEPUB']
@ -442,21 +454,16 @@ def get_metadata(book):
continue
for kobo_format in KOBO_FORMATS[book_data.format]:
# log.debug('Id: %s, Format: %s' % (book.id, kobo_format))
try:
if get_epub_layout(book, book_data) == 'pre-paginated':
kobo_format = 'EPUB3FL'
download_urls.append(
{
"Format": kobo_format,
"Size": book_data.uncompressed_size,
"Url": get_download_url_for_book(book.id, book_data.format),
# The Kobo format accepts platforms: (Generic, Android)
"Platform": "Generic",
# "DrmType": "None", # Not required
}
)
except (zipfile.BadZipfile, FileNotFoundError) as e:
log.error(e)
download_urls.append(
{
"Format": kobo_format,
"Size": book_data.uncompressed_size,
"Url": get_download_url_for_book(book, book_data.format),
# The Kobo format accepts platforms: (Generic, Android)
"Platform": "Generic",
# "DrmType": "None", # Not required
}
)
book_uuid = book.uuid
metadata = {
@ -475,10 +482,10 @@ def get_metadata(book):
"IsInternetArchive": False,
"IsPreOrder": False,
"IsSocialEnabled": True,
"Language": get_language(book),
"Language": "en",
"PhoneticPronunciations": {},
"PublicationDate": convert_to_kobo_timestamp_string(book.pubdate),
"Publisher": {"Imprint": "", "Name": get_publisher(book), },
"Publisher": {"Imprint": "", "Name": get_publisher(book),},
"RevisionId": book_uuid,
"Title": book.title,
"WorkId": book_uuid,
@ -497,13 +504,12 @@ def get_metadata(book):
return metadata
@csrf.exempt
@kobo.route("/v1/library/tags", methods=["POST", "DELETE"])
@requires_kobo_auth
# Creates a Shelf with the given items, and returns the shelf's uuid.
def HandleTagCreate():
# catch delete requests, otherwise they are handled in the book delete handler
# catch delete requests, otherwise the are handeld in the book delete handler
if request.method == "DELETE":
abort(405)
name, items = None, None
@ -697,12 +703,21 @@ def sync_shelves(sync_token, sync_results, only_kobo_shelves=False):
})
extra_filters.append(ub.Shelf.kobo_sync)
shelflist = ub.session.query(ub.Shelf).outerjoin(ub.BookShelf).filter(
or_(func.datetime(ub.Shelf.last_modified) > sync_token.tags_last_modified,
func.datetime(ub.BookShelf.date_added) > sync_token.tags_last_modified),
ub.Shelf.user_id == current_user.id,
*extra_filters
).distinct().order_by(func.datetime(ub.Shelf.last_modified).asc())
if sqlalchemy_version2:
shelflist = ub.session.execute(select(ub.Shelf).outerjoin(ub.BookShelf).filter(
or_(func.datetime(ub.Shelf.last_modified) > sync_token.tags_last_modified,
func.datetime(ub.BookShelf.date_added) > sync_token.tags_last_modified),
ub.Shelf.user_id == current_user.id,
*extra_filters
).distinct().order_by(func.datetime(ub.Shelf.last_modified).asc())).columns(ub.Shelf)
else:
shelflist = ub.session.query(ub.Shelf).outerjoin(ub.BookShelf).filter(
or_(func.datetime(ub.Shelf.last_modified) > sync_token.tags_last_modified,
func.datetime(ub.BookShelf.date_added) > sync_token.tags_last_modified),
ub.Shelf.user_id == current_user.id,
*extra_filters
).distinct().order_by(func.datetime(ub.Shelf.last_modified).asc())
for shelf in shelflist:
if not shelf_lib.check_shelf_view_permissions(shelf):
@ -739,7 +754,7 @@ def create_kobo_tag(shelf):
for book_shelf in shelf.books:
book = calibre_db.get_book(book_shelf.book_id)
if not book:
log.info("Book (id: %s) in BookShelf (id: %s) not found in book database", book_shelf.book_id, shelf.id)
log.info(u"Book (id: %s) in BookShelf (id: %s) not found in book database", book_shelf.book_id, shelf.id)
continue
tag["Items"].append(
{
@ -749,14 +764,13 @@ def create_kobo_tag(shelf):
)
return {"Tag": tag}
@csrf.exempt
@kobo.route("/v1/library/<book_uuid>/state", methods=["GET", "PUT"])
@requires_kobo_auth
def HandleStateRequest(book_uuid):
book = calibre_db.get_book_by_uuid(book_uuid)
if not book or not book.data:
log.info("Book %s not found in database", book_uuid)
log.info(u"Book %s not found in database", book_uuid)
return redirect_or_proxy_request()
kobo_reading_state = get_or_create_reading_state(book.id)
@ -794,7 +808,7 @@ def HandleStateRequest(book_uuid):
book_read = kobo_reading_state.book_read_link
new_book_read_status = get_ub_read_status(request_status_info["Status"])
if new_book_read_status == ub.ReadBook.STATUS_IN_PROGRESS \
and new_book_read_status != book_read.read_status:
and new_book_read_status != book_read.read_status:
book_read.times_started_reading += 1
book_read.last_time_started_reading = datetime.datetime.utcnow()
book_read.read_status = new_book_read_status
@ -834,7 +848,7 @@ def get_ub_read_status(kobo_read_status):
def get_or_create_reading_state(book_id):
book_read = ub.session.query(ub.ReadBook).filter(ub.ReadBook.book_id == book_id,
ub.ReadBook.user_id == int(current_user.id)).one_or_none()
ub.ReadBook.user_id == int(current_user.id)).one_or_none()
if not book_read:
book_read = ub.ReadBook(user_id=current_user.id, book_id=book_id)
if not book_read.kobo_reading_state:
@ -898,31 +912,26 @@ def get_current_bookmark_response(current_bookmark):
}
return resp
@kobo.route("/<book_uuid>/<width>/<height>/<isGreyscale>/image.jpg", defaults={'Quality': ""})
@kobo.route("/<book_uuid>/<width>/<height>/<Quality>/<isGreyscale>/image.jpg")
@requires_kobo_auth
def HandleCoverImageRequest(book_uuid, width, height, Quality, isGreyscale):
try:
resolution = None if int(height) > 1000 else COVER_THUMBNAIL_SMALL
except ValueError:
log.error("Requested height %s of book %s is invalid" % (book_uuid, height))
resolution = COVER_THUMBNAIL_SMALL
book_cover = helper.get_book_cover_with_uuid(book_uuid, resolution=resolution)
if book_cover:
log.debug("Serving local cover image of book %s" % book_uuid)
return book_cover
if not config.config_kobo_proxy:
log.debug("Returning 404 for cover image of unknown book %s" % book_uuid)
# an additional proxy request makes no sense, so return directly
return abort(404)
log.debug("Redirecting request for cover image of unknown book %s to Kobo" % book_uuid)
return redirect(KOBO_IMAGEHOST_URL +
"/{book_uuid}/{width}/{height}/false/image.jpg".format(book_uuid=book_uuid,
width=width,
height=height), 307)
def HandleCoverImageRequest(book_uuid, width, height,Quality, isGreyscale):
book_cover = helper.get_book_cover_with_uuid(
book_uuid, use_generic_cover_on_failure=False
)
if not book_cover:
if config.config_kobo_proxy:
log.debug("Cover for unknown book: %s proxied to kobo" % book_uuid)
return redirect(KOBO_IMAGEHOST_URL +
"/{book_uuid}/{width}/{height}/false/image.jpg".format(book_uuid=book_uuid,
width=width,
height=height), 307)
else:
log.debug("Cover for unknown book: %s requested" % book_uuid)
# an additional proxy request makes no sense, so return directly
return make_response(jsonify({}))
log.debug("Cover request received for book %s" % book_uuid)
return book_cover
@kobo.route("")
@ -937,7 +946,7 @@ def HandleBookDeletionRequest(book_uuid):
log.info("Kobo book delete request received for book %s" % book_uuid)
book = calibre_db.get_book_by_uuid(book_uuid)
if not book:
log.info("Book %s not found in database", book_uuid)
log.info(u"Book %s not found in database", book_uuid)
return redirect_or_proxy_request()
book_id = book.id
@ -951,7 +960,7 @@ def HandleBookDeletionRequest(book_uuid):
@csrf.exempt
@kobo.route("/v1/library/<dummy>", methods=["DELETE", "GET"])
def HandleUnimplementedRequest(dummy=None):
log.debug("Unimplemented Library Request received: %s (request is forwarded to kobo if configured)", request.base_url)
log.debug("Unimplemented Library Request received: %s", request.base_url)
return redirect_or_proxy_request()
@ -962,9 +971,8 @@ def HandleUnimplementedRequest(dummy=None):
@kobo.route("/v1/user/wishlist", methods=["GET", "POST"])
@kobo.route("/v1/user/recommendations", methods=["GET", "POST"])
@kobo.route("/v1/analytics/<dummy>", methods=["GET", "POST"])
@kobo.route("/v1/assets", methods=["GET"])
def HandleUserRequest(dummy=None):
log.debug("Unimplemented User Request received: %s (request is forwarded to kobo if configured)", request.base_url)
log.debug("Unimplemented User Request received: %s", request.base_url)
return redirect_or_proxy_request()
@ -983,8 +991,8 @@ def handle_getests():
if config.config_kobo_proxy:
return redirect_or_proxy_request()
else:
testkey = request.headers.get("X-Kobo-userkey", "")
return make_response(jsonify({"Result": "Success", "TestKey": testkey, "Tests": {}}))
testkey = request.headers.get("X-Kobo-userkey","")
return make_response(jsonify({"Result": "Success", "TestKey":testkey, "Tests": {}}))
@csrf.exempt
@ -1004,7 +1012,7 @@ def handle_getests():
@kobo.route("/v1/affiliate", methods=["GET", "POST"])
@kobo.route("/v1/deals", methods=["GET", "POST"])
def HandleProductsRequest(dummy=None):
log.debug("Unimplemented Products Request received: %s (request is forwarded to kobo if configured)", request.base_url)
log.debug("Unimplemented Products Request received: %s", request.base_url)
return redirect_or_proxy_request()
@ -1014,14 +1022,14 @@ def make_calibre_web_auth_response():
content = request.get_json()
AccessToken = base64.b64encode(os.urandom(24)).decode('utf-8')
RefreshToken = base64.b64encode(os.urandom(24)).decode('utf-8')
return make_response(
return make_response(
jsonify(
{
"AccessToken": AccessToken,
"RefreshToken": RefreshToken,
"TokenType": "Bearer",
"TrackingId": str(uuid.uuid4()),
"UserKey": content.get('UserKey',""),
"UserKey": content['UserKey'],
}
)
)
@ -1152,16 +1160,14 @@ def NATIVE_KOBO_RESOURCES():
"eula_page": "https://www.kobo.com/termsofuse?style=onestore",
"exchange_auth": "https://storeapi.kobo.com/v1/auth/exchange",
"external_book": "https://storeapi.kobo.com/v1/products/books/external/{Ids}",
"facebook_sso_page":
"https://authorize.kobo.com/signin/provider/Facebook/login?returnUrl=http://store.kobobooks.com/",
"facebook_sso_page": "https://authorize.kobo.com/signin/provider/Facebook/login?returnUrl=http://store.kobobooks.com/",
"featured_list": "https://storeapi.kobo.com/v1/products/featured/{FeaturedListId}",
"featured_lists": "https://storeapi.kobo.com/v1/products/featured",
"free_books_page": {
"EN": "https://www.kobo.com/{region}/{language}/p/free-ebooks",
"FR": "https://www.kobo.com/{region}/{language}/p/livres-gratuits",
"IT": "https://www.kobo.com/{region}/{language}/p/libri-gratuiti",
"NL": "https://www.kobo.com/{region}/{language}/"
"List/bekijk-het-overzicht-van-gratis-ebooks/QpkkVWnUw8sxmgjSlCbJRg",
"NL": "https://www.kobo.com/{region}/{language}/List/bekijk-het-overzicht-van-gratis-ebooks/QpkkVWnUw8sxmgjSlCbJRg",
"PT": "https://www.kobo.com/{region}/{language}/p/livros-gratis",
},
"fte_feedback": "https://storeapi.kobo.com/v1/products/ftefeedback",
@ -1186,8 +1192,7 @@ def NATIVE_KOBO_RESOURCES():
"library_stack": "https://storeapi.kobo.com/v1/user/library/stacks/{LibraryItemId}",
"library_sync": "https://storeapi.kobo.com/v1/library/sync",
"love_dashboard_page": "https://store.kobobooks.com/{culture}/kobosuperpoints",
"love_points_redemption_page":
"https://store.kobobooks.com/{culture}/KoboSuperPointsRedemption?productId={ProductId}",
"love_points_redemption_page": "https://store.kobobooks.com/{culture}/KoboSuperPointsRedemption?productId={ProductId}",
"magazine_landing_page": "https://store.kobobooks.com/emagazines",
"notifications_registration_issue": "https://storeapi.kobo.com/v1/notifications/registration",
"oauth_host": "https://oauth.kobo.com",
@ -1203,8 +1208,7 @@ def NATIVE_KOBO_RESOURCES():
"product_recommendations": "https://storeapi.kobo.com/v1/products/{ProductId}/recommendations",
"product_reviews": "https://storeapi.kobo.com/v1/products/{ProductIds}/reviews",
"products": "https://storeapi.kobo.com/v1/products",
"provider_external_sign_in_page":
"https://authorize.kobo.com/ExternalSignIn/{providerName}?returnUrl=http://store.kobobooks.com/",
"provider_external_sign_in_page": "https://authorize.kobo.com/ExternalSignIn/{providerName}?returnUrl=http://store.kobobooks.com/",
"purchase_buy": "https://www.kobo.com/checkout/createpurchase/",
"purchase_buy_templated": "https://www.kobo.com/{culture}/checkout/createpurchase/{ProductId}",
"quickbuy_checkout": "https://storeapi.kobo.com/v1/store/quickbuy/{PurchaseId}/checkout",

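A much-simplified sketch of the sync pagination idea in HandleSyncRequest() above: hand back at most SYNC_ITEM_LIMIT changed books and signal the device to continue when more remain; the list of ids stands in for the real query.

SYNC_ITEM_LIMIT = 100

def paginate_sync(changed_book_ids):
    batch = changed_book_ids[:SYNC_ITEM_LIMIT]
    remaining = len(changed_book_ids) - len(batch)
    # "x-kobo-sync: continue" tells the reader to issue another sync request
    extra_headers = {"x-kobo-sync": "continue"} if remaining else {}
    return batch, extra_headers

batch, headers = paginate_sync(list(range(250)))   # placeholder book ids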
@ -64,16 +64,54 @@ from datetime import datetime
from os import urandom
from functools import wraps
from flask import g, Blueprint, abort, request
from flask import g, Blueprint, url_for, abort, request
from flask_login import login_user, current_user, login_required
from flask_babel import gettext as _
from flask_limiter import RateLimitExceeded
from . import logger, config, calibre_db, db, helper, ub, lm, limiter
from . import logger, config, calibre_db, db, helper, ub, lm
from .render_template import render_title_template
log = logger.create()
def register_url_value_preprocessor(kobo):
@kobo.url_value_preprocessor
# pylint: disable=unused-variable
def pop_auth_token(__, values):
g.auth_token = values.pop("auth_token")
def disable_failed_auth_redirect_for_blueprint(bp):
lm.blueprint_login_views[bp.name] = None
def get_auth_token():
if "auth_token" in g:
return g.get("auth_token")
else:
return None
def requires_kobo_auth(f):
@wraps(f)
def inner(*args, **kwargs):
auth_token = get_auth_token()
if auth_token is not None:
user = (
ub.session.query(ub.User)
.join(ub.RemoteAuthToken)
.filter(ub.RemoteAuthToken.auth_token == auth_token).filter(ub.RemoteAuthToken.token_type==1)
.first()
)
if user is not None:
login_user(user)
return f(*args, **kwargs)
log.debug("Received Kobo request without a recognizable auth token.")
return abort(401)
return inner
kobo_auth = Blueprint("kobo_auth", __name__, url_prefix="/kobo_auth")
@ -108,12 +146,12 @@ def generate_auth_token(user_id):
for book in books:
formats = [data.format for data in book.data]
if 'KEPUB' not in formats and config.config_kepubifypath and 'EPUB' in formats:
if not 'KEPUB' in formats and config.config_kepubifypath and 'EPUB' in formats:
helper.convert_book_format(book.id, config.config_calibre_dir, 'EPUB', 'KEPUB', current_user.name)
return render_title_template(
"generate_kobo_auth_url.html",
title=_("Kobo Setup"),
title=_(u"Kobo Setup"),
auth_token=auth_token.auth_token,
warning = warning
)
@ -127,48 +165,3 @@ def delete_auth_token(user_id):
.filter(ub.RemoteAuthToken.token_type==1).delete()
return ub.session_commit()
def disable_failed_auth_redirect_for_blueprint(bp):
lm.blueprint_login_views[bp.name] = None
def get_auth_token():
if "auth_token" in g:
return g.get("auth_token")
else:
return None
def register_url_value_preprocessor(kobo):
@kobo.url_value_preprocessor
# pylint: disable=unused-variable
def pop_auth_token(__, values):
g.auth_token = values.pop("auth_token")
def requires_kobo_auth(f):
@wraps(f)
def inner(*args, **kwargs):
auth_token = get_auth_token()
if auth_token is not None:
try:
limiter.check()
except RateLimitExceeded:
return abort(429)
except (ConnectionError, Exception) as e:
log.error("Connection error to limiter backend: %s", e)
return abort(429)
user = (
ub.session.query(ub.User)
.join(ub.RemoteAuthToken)
.filter(ub.RemoteAuthToken.auth_token == auth_token).filter(ub.RemoteAuthToken.token_type==1)
.first()
)
if user is not None:
login_user(user)
[limiter.limiter.storage.clear(k.key) for k in limiter.current_limits]
return f(*args, **kwargs)
log.debug("Received Kobo request without a recognizable auth token.")
return abort(401)
return inner

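For reference, the master-side requires_kobo_auth above folds a flask-limiter check into the token lookup: failed lookups count against a rate limit, and a successful login clears the counters. Below is a minimal, self-contained sketch of that shape; the limiter object and the lookup_user callable are placeholders, not the real Calibre-Web objects, and the storage-clearing calls simply mirror the ones in the diff.

from functools import wraps
from flask import g, abort
from flask_limiter import RateLimitExceeded
from flask_login import login_user

def requires_token_auth(limiter, lookup_user):
    # decorator factory mirroring the pattern above; limiter/lookup_user are assumed
    def decorator(f):
        @wraps(f)
        def inner(*args, **kwargs):
            auth_token = g.get("auth_token") if "auth_token" in g else None
            if auth_token is not None:
                try:
                    limiter.check()  # raises once the caller exceeds the configured limit
                except RateLimitExceeded:
                    return abort(429)
                user = lookup_user(auth_token)
                if user is not None:
                    login_user(user)
                    # a valid token resets the rate-limit counters, as in the diff above
                    for limit in limiter.current_limits:
                        limiter.limiter.storage.clear(limit.key)
                    return f(*args, **kwargs)
            return abort(401)
        return inner
    return decorator
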
@ -42,7 +42,7 @@ logging.addLevelName(logging.CRITICAL, "CRIT")
class _Logger(logging.Logger):
def error_or_exception(self, message, stacklevel=2, *args, **kwargs):
def debug_or_exception(self, message, stacklevel=2, *args, **kwargs):
if sys.version_info > (3, 7):
if is_debug_enabled():
self.exception(message, stacklevel=stacklevel, *args, **kwargs)
@ -54,6 +54,7 @@ class _Logger(logging.Logger):
else:
self.error(message, *args, **kwargs)
def debug_no_auth(self, message, *args, **kwargs):
message = message.strip("\r\n")
if message.startswith("send: AUTH"):
@ -65,7 +66,6 @@ class _Logger(logging.Logger):
def get(name=None):
return logging.getLogger(name)
def create():
parent_frame = inspect.stack(0)[1]
if hasattr(parent_frame, 'frame'):
@ -75,11 +75,9 @@ def create():
parent_module = inspect.getmodule(parent_frame)
return get(parent_module.__name__)
def is_debug_enabled():
return logging.root.level <= logging.DEBUG
def is_info_enabled(logger):
return logging.getLogger(logger).level <= logging.INFO
@ -116,10 +114,10 @@ def get_accesslogfile(log_file):
def setup(log_file, log_level=None):
"""
'''
Configure the logging output.
May be called multiple times.
"""
'''
log_level = log_level or DEFAULT_LOG_LEVEL
logging.setLoggerClass(_Logger)
logging.getLogger(__package__).setLevel(log_level)
@ -129,7 +127,7 @@ def setup(log_file, log_level=None):
# avoid spamming the log with debug messages from libraries
r.setLevel(log_level)
# Otherwise, name gets destroyed on Windows
# Otherwise name get's destroyed on windows
if log_file != LOG_TO_STDERR and log_file != LOG_TO_STDOUT:
log_file = _absolute_log_file(log_file, DEFAULT_LOG_FILE)
@ -150,7 +148,7 @@ def setup(log_file, log_level=None):
else:
try:
file_handler = RotatingFileHandler(log_file, maxBytes=100000, backupCount=2, encoding='utf-8')
except (IOError, PermissionError):
except IOError:
if log_file == DEFAULT_LOG_FILE:
raise
file_handler = RotatingFileHandler(DEFAULT_LOG_FILE, maxBytes=100000, backupCount=2, encoding='utf-8')
@ -161,14 +159,13 @@ def setup(log_file, log_level=None):
r.removeHandler(h)
h.close()
r.addHandler(file_handler)
logging.captureWarnings(True)
return "" if log_file == DEFAULT_LOG_FILE else log_file
def create_access_log(log_file, log_name, formatter):
"""
'''
One-time configuration for the web server's access log.
"""
'''
log_file = _absolute_log_file(log_file, DEFAULT_ACCESS_LOG)
logging.debug("access log: %s", log_file)
@ -177,7 +174,7 @@ def create_access_log(log_file, log_name, formatter):
access_log.setLevel(logging.INFO)
try:
file_handler = RotatingFileHandler(log_file, maxBytes=50000, backupCount=2, encoding='utf-8')
except (IOError, PermissionError):
except IOError:
if log_file == DEFAULT_ACCESS_LOG:
raise
file_handler = RotatingFileHandler(DEFAULT_ACCESS_LOG, maxBytes=50000, backupCount=2, encoding='utf-8')
@ -185,7 +182,8 @@ def create_access_log(log_file, log_name, formatter):
file_handler.setFormatter(formatter)
access_log.addHandler(file_handler)
return access_log, "" if _absolute_log_file(log_file, DEFAULT_ACCESS_LOG) == DEFAULT_ACCESS_LOG else log_file
return access_log, \
"" if _absolute_log_file(log_file, DEFAULT_ACCESS_LOG) == DEFAULT_ACCESS_LOG else log_file
# Enable logging of smtp lib debug output

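The handler blocks being reworked above implement one simple rule: try the configured log file first, and on an I/O or permission failure fall back to the built-in default, re-raising only when the default itself cannot be opened. A small sketch of that rule, with a placeholder DEFAULT_LOG_FILE (the real value lives in cps/logger.py):

from logging.handlers import RotatingFileHandler

DEFAULT_LOG_FILE = "calibre-web.log"  # placeholder default path

def make_file_handler(log_file, max_bytes=100000, backups=2):
    # mirrors the fallback used by setup() and create_access_log() above
    try:
        return RotatingFileHandler(log_file, maxBytes=max_bytes,
                                   backupCount=backups, encoding='utf-8')
    except (IOError, PermissionError):
        if log_file == DEFAULT_LOG_FILE:
            raise
        return RotatingFileHandler(DEFAULT_LOG_FILE, maxBytes=max_bytes,
                                   backupCount=backups, encoding='utf-8')
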
@ -1,81 +0,0 @@
# -*- coding: utf-8 -*-
# This file is part of the Calibre-Web (https://github.com/janeczku/calibre-web)
# Copyright (C) 2012-2022 OzzieIsaacs
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
import sys
from . import create_app, limiter
from .jinjia import jinjia
from .remotelogin import remotelogin
from flask import request
def request_username():
return request.authorization.username
def main():
app = create_app()
from .web import web
from .opds import opds
from .admin import admi
from .gdrive import gdrive
from .editbooks import editbook
from .about import about
from .search import search
from .search_metadata import meta
from .shelf import shelf
from .tasks_status import tasks
from .error_handler import init_errorhandler
try:
from .kobo import kobo, get_kobo_activated
from .kobo_auth import kobo_auth
from flask_limiter.util import get_remote_address
kobo_available = get_kobo_activated()
except (ImportError, AttributeError): # Catch also error for not installed flask-WTF (missing csrf decorator)
kobo_available = False
try:
from .oauth_bb import oauth
oauth_available = True
except ImportError:
oauth_available = False
from . import web_server
init_errorhandler()
app.register_blueprint(search)
app.register_blueprint(tasks)
app.register_blueprint(web)
app.register_blueprint(opds)
limiter.limit("3/minute",key_func=request_username)(opds)
app.register_blueprint(jinjia)
app.register_blueprint(about)
app.register_blueprint(shelf)
app.register_blueprint(admi)
app.register_blueprint(remotelogin)
app.register_blueprint(meta)
app.register_blueprint(gdrive)
app.register_blueprint(editbook)
if kobo_available:
app.register_blueprint(kobo)
app.register_blueprint(kobo_auth)
limiter.limit("3/minute", key_func=get_remote_address)(kobo)
if oauth_available:
app.register_blueprint(oauth)
success = web_server.start()
sys.exit(0 if success else 1)

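The registration sequence above also shows flask-limiter limits being attached to whole blueprints rather than to single views. A self-contained sketch of that call pattern, with toy blueprint and route names (not the real Calibre-Web ones):

from flask import Flask, Blueprint, request
from flask_limiter import Limiter
from flask_limiter.util import get_remote_address

app = Flask(__name__)
limiter = Limiter(key_func=get_remote_address, app=app)

feed = Blueprint("feed", __name__)

@feed.route("/ping")
def ping():
    return "pong"

def request_username():
    # key on the Basic-Auth username when present, otherwise on the client address
    auth = request.authorization
    return auth.username if auth else get_remote_address()

app.register_blueprint(feed)
# every endpoint of the blueprint inherits the limit (same call shape as above)
limiter.limit("3/minute", key_func=request_username)(feed)
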
@ -19,22 +19,14 @@
import concurrent.futures
import requests
from bs4 import BeautifulSoup as BS # requirement
from typing import List, Optional
try:
import cchardet #optional for better speed
except ImportError:
pass
from cps import logger
from cps.services.Metadata import MetaRecord, MetaSourceInfo, Metadata
import cps.logger as logger
#from time import time
from operator import itemgetter
log = logger.create()
log = logger.create()
class Amazon(Metadata):
__name__ = "Amazon"
@ -54,20 +46,16 @@ class Amazon(Metadata):
def search(
self, query: str, generic_cover: str = "", locale: str = "en"
) -> Optional[List[MetaRecord]]:
):
#timer=time()
def inner(link, index) -> [dict, int]:
with self.session as session:
try:
r = session.get(f"https://www.amazon.com/{link}")
r.raise_for_status()
except Exception as ex:
log.warning(ex)
return None
def inner(link,index)->[dict,int]:
with self.session as session:
r = session.get(f"https://www.amazon.com/{link}")
r.raise_for_status()
long_soup = BS(r.text, "lxml") #~4sec :/
soup2 = long_soup.find("div", attrs={"cel_widget_id": "dpx-books-ppd_csm_instrumentation_wrapper"})
if soup2 is None:
return None
return
try:
match = MetaRecord(
title = "",
@ -77,7 +65,7 @@ class Amazon(Metadata):
description="Amazon Books",
link="https://amazon.com/"
),
url = f"https://www.amazon.com{link}",
url = f"https://www.amazon.com/{link}",
#the more searches the slower, these are too hard to find in reasonable time or might not even exist
publisher= "", # very unreliable
publishedDate= "", # very unreliable
@ -98,7 +86,7 @@ class Amazon(Metadata):
try:
match.authors = [next(
filter(lambda i: i != " " and i != "\n" and not i.startswith("{"),
x.findAll(string=True))).strip()
x.findAll(text=True))).strip()
for x in soup2.findAll("span", attrs={"class": "author"})]
except (AttributeError, TypeError, StopIteration):
match.authors = ""
@ -114,28 +102,21 @@ class Amazon(Metadata):
match.cover = ""
return match, index
except Exception as e:
log.error_or_exception(e)
return None
print(e)
return
val = list()
if self.active:
try:
results = self.session.get(
f"https://www.amazon.com/s?k={query.replace(' ', '+')}&i=digital-text&sprefix={query.replace(' ', '+')}"
f"%2Cdigital-text&ref=nb_sb_noss",
headers=self.headers)
results.raise_for_status()
except requests.exceptions.HTTPError as e:
log.error_or_exception(e)
return []
except Exception as e:
log.warning(e)
return []
results = self.session.get(
f"https://www.amazon.com/s?k={query.replace(' ', '+')}&i=digital-text&sprefix={query.replace(' ', '+')}"
f"%2Cdigital-text&ref=nb_sb_noss",
headers=self.headers)
results.raise_for_status()
soup = BS(results.text, 'html.parser')
links_list = [next(filter(lambda i: "digital-text" in i["href"], x.findAll("a")))["href"] for x in
soup.findAll("div", attrs={"data-component-type": "s-search-result"})]
with concurrent.futures.ThreadPoolExecutor(max_workers=5) as executor:
fut = {executor.submit(inner, link, index) for index, link in enumerate(links_list[:5])}
val = list(map(lambda x : x.result() ,concurrent.futures.as_completed(fut)))
result = list(filter(lambda x: x, val))
val=list(map(lambda x : x.result() ,concurrent.futures.as_completed(fut)))
result=list(filter(lambda x: x, val))
return [x[0] for x in sorted(result, key=itemgetter(1))] #sort by amazons listing order for best relevance

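Aside from the error handling, the Amazon provider keeps Amazon's own listing order even though the detail pages are fetched concurrently: each worker carries its original index, and the results are re-sorted on it at the end. A stripped-down sketch of that ordering trick, where fetch() is a stand-in for the real scraper:

import concurrent.futures
from operator import itemgetter

def fetch(link, index):
    # stand-in for the per-link scrape; returns (parsed_result, original_index)
    return "parsed:" + link, index

links = ["dp/book-a", "dp/book-b", "dp/book-c"]
with concurrent.futures.ThreadPoolExecutor(max_workers=5) as executor:
    futures = {executor.submit(fetch, link, i) for i, link in enumerate(links[:5])}
    completed = [f.result() for f in concurrent.futures.as_completed(futures)]

# as_completed() yields in completion order, so sort on the carried index
results = [value for value, _ in sorted(completed, key=itemgetter(1))]
print(results)  # ['parsed:dp/book-a', 'parsed:dp/book-b', 'parsed:dp/book-c']
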
@ -21,11 +21,8 @@ from typing import Dict, List, Optional
from urllib.parse import quote
import requests
from cps import logger
from cps.services.Metadata import MetaRecord, MetaSourceInfo, Metadata
log = logger.create()
class ComicVine(Metadata):
__name__ = "ComicVine"
@ -49,15 +46,10 @@ class ComicVine(Metadata):
if title_tokens:
tokens = [quote(t.encode("utf-8")) for t in title_tokens]
query = "%20".join(tokens)
try:
result = requests.get(
f"{ComicVine.BASE_URL}{query}{ComicVine.QUERY_PARAMS}",
headers=ComicVine.HEADERS,
)
result.raise_for_status()
except Exception as e:
log.warning(e)
return None
result = requests.get(
f"{ComicVine.BASE_URL}{query}{ComicVine.QUERY_PARAMS}",
headers=ComicVine.HEADERS,
)
for result in result.json()["results"]:
match = self._parse_search_result(
result=result, generic_cover=generic_cover, locale=locale

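The same guard recurs in every metadata provider touched by this diff: issue the request, call raise_for_status(), and on any failure log a warning and return an empty result instead of letting the search crash. As a generic sketch (the helper name is hypothetical, not part of the codebase):

import logging
import requests

log = logging.getLogger(__name__)

def safe_get_json(url, **kwargs):
    # GET a URL and return its decoded JSON body, or None on any failure
    try:
        response = requests.get(url, **kwargs)
        response.raise_for_status()
        return response.json()
    except Exception as ex:
        log.warning(ex)
        return None
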
@ -1,259 +0,0 @@
# -*- coding: utf-8 -*-
# This file is part of the Calibre-Web (https://github.com/janeczku/calibre-web)
# Copyright (C) 2022 xlivevil
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
import re
from concurrent import futures
from typing import List, Optional
import requests
from html2text import HTML2Text
from lxml import etree
from cps import logger
from cps.services.Metadata import Metadata, MetaRecord, MetaSourceInfo
log = logger.create()
def html2text(html: str) -> str:
h2t = HTML2Text()
h2t.body_width = 0
h2t.single_line_break = True
h2t.emphasis_mark = "*"
return h2t.handle(html)
class Douban(Metadata):
__name__ = "豆瓣"
__id__ = "douban"
DESCRIPTION = "豆瓣"
META_URL = "https://book.douban.com/"
SEARCH_JSON_URL = "https://www.douban.com/j/search"
SEARCH_URL = "https://www.douban.com/search"
ID_PATTERN = re.compile(r"sid: (?P<id>\d+),")
AUTHORS_PATTERN = re.compile(r"作者|译者")
PUBLISHER_PATTERN = re.compile(r"出版社")
SUBTITLE_PATTERN = re.compile(r"副标题")
PUBLISHED_DATE_PATTERN = re.compile(r"出版年")
SERIES_PATTERN = re.compile(r"丛书")
IDENTIFIERS_PATTERN = re.compile(r"ISBN|统一书号")
CRITERIA_PATTERN = re.compile("criteria = '(.+)'")
TITTLE_XPATH = "//span[@property='v:itemreviewed']"
COVER_XPATH = "//a[@class='nbg']"
INFO_XPATH = "//*[@id='info']//span[@class='pl']"
TAGS_XPATH = "//a[contains(@class, 'tag')]"
DESCRIPTION_XPATH = "//div[@id='link-report']//div[@class='intro']"
RATING_XPATH = "//div[@class='rating_self clearfix']/strong"
session = requests.Session()
session.headers = {
'user-agent':
'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/98.0.4758.102 Safari/537.36 Edg/98.0.1108.56',
}
def search(self,
query: str,
generic_cover: str = "",
locale: str = "en") -> List[MetaRecord]:
val = []
if self.active:
log.debug(f"start searching {query} on douban")
if title_tokens := list(
self.get_title_tokens(query, strip_joiners=False)):
query = "+".join(title_tokens)
book_id_list = self._get_book_id_list_from_html(query)
if not book_id_list:
log.debug("No search results in Douban")
return []
with futures.ThreadPoolExecutor(
max_workers=5, thread_name_prefix='douban') as executor:
fut = [
executor.submit(self._parse_single_book, book_id,
generic_cover) for book_id in book_id_list
]
val = [
future.result() for future in futures.as_completed(fut)
if future.result()
]
return val
def _get_book_id_list_from_html(self, query: str) -> List[str]:
try:
r = self.session.get(self.SEARCH_URL,
params={
"cat": 1001,
"q": query
})
r.raise_for_status()
except Exception as e:
log.warning(e)
return []
html = etree.HTML(r.content.decode("utf8"))
result_list = html.xpath(self.COVER_XPATH)
return [
self.ID_PATTERN.search(item.get("onclick")).group("id")
for item in result_list[:10]
if self.ID_PATTERN.search(item.get("onclick"))
]
def _get_book_id_list_from_json(self, query: str) -> List[str]:
try:
r = self.session.get(self.SEARCH_JSON_URL,
params={
"cat": 1001,
"q": query
})
r.raise_for_status()
except Exception as e:
log.warning(e)
return []
results = r.json()
if results["total"] == 0:
return []
return [
self.ID_PATTERN.search(item).group("id")
for item in results["items"][:10] if self.ID_PATTERN.search(item)
]
def _parse_single_book(self,
id: str,
generic_cover: str = "") -> Optional[MetaRecord]:
url = f"https://book.douban.com/subject/{id}/"
log.debug(f"start parsing {url}")
try:
r = self.session.get(url)
r.raise_for_status()
except Exception as e:
log.warning(e)
return None
match = MetaRecord(
id=id,
title="",
authors=[],
url=url,
source=MetaSourceInfo(
id=self.__id__,
description=self.DESCRIPTION,
link=self.META_URL,
),
)
decode_content = r.content.decode("utf8")
html = etree.HTML(decode_content)
match.title = html.xpath(self.TITTLE_XPATH)[0].text
match.cover = html.xpath(
self.COVER_XPATH)[0].attrib["href"] or generic_cover
try:
rating_num = float(html.xpath(self.RATING_XPATH)[0].text.strip())
except Exception:
rating_num = 0
match.rating = int(-1 * rating_num // 2 * -1) if rating_num else 0
tag_elements = html.xpath(self.TAGS_XPATH)
if len(tag_elements):
match.tags = [tag_element.text for tag_element in tag_elements]
else:
match.tags = self._get_tags(decode_content)
description_element = html.xpath(self.DESCRIPTION_XPATH)
if len(description_element):
match.description = html2text(
etree.tostring(description_element[-1]).decode("utf8"))
info = html.xpath(self.INFO_XPATH)
for element in info:
text = element.text
if self.AUTHORS_PATTERN.search(text):
next_element = element.getnext()
while next_element is not None and next_element.tag != "br":
match.authors.append(next_element.text)
next_element = next_element.getnext()
elif self.PUBLISHER_PATTERN.search(text):
if publisher := element.tail.strip():
match.publisher = publisher
else:
match.publisher = element.getnext().text
elif self.SUBTITLE_PATTERN.search(text):
match.title = f'{match.title}:{element.tail.strip()}'
elif self.PUBLISHED_DATE_PATTERN.search(text):
match.publishedDate = self._clean_date(element.tail.strip())
elif self.SERIES_PATTERN.search(text):
match.series = element.getnext().text
elif i_type := self.IDENTIFIERS_PATTERN.search(text):
match.identifiers[i_type.group()] = element.tail.strip()
return match
def _clean_date(self, date: str) -> str:
"""
Clean up the date string to be in the format YYYY-MM-DD
Examples of possible patterns:
'2014-7-16', '1988年4月', '1995-04', '2021-8', '2020-12-1', '1996年',
'1972', '2004/11/01', '1959年3月北京第1版第1印'
"""
year = date[:4]
moon = "01"
day = "01"
if len(date) > 5:
digit = []
ls = []
for i in range(5, len(date)):
if date[i].isdigit():
digit.append(date[i])
elif digit:
ls.append("".join(digit) if len(digit) ==
2 else f"0{digit[0]}")
digit = []
if digit:
ls.append("".join(digit) if len(digit) ==
2 else f"0{digit[0]}")
moon = ls[0]
if len(ls) > 1:
day = ls[1]
return f"{year}-{moon}-{day}"
def _get_tags(self, text: str) -> List[str]:
tags = []
if criteria := self.CRITERIA_PATTERN.search(text):
tags.extend(
item.replace('7:', '') for item in criteria.group().split('|')
if item.startswith('7:'))
return tags

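Traced by hand, the _clean_date normalisation above maps the sample inputs from its docstring as follows: '2014-7-16' becomes '2014-07-16', '1988年4月' becomes '1988-04-01' (missing day padded to 01), '1996年' becomes '1996-01-01', '2004/11/01' becomes '2004-11-01', and '1959年3月北京第1版第1印' becomes '1959-03-01', since only the first two digit groups after the year are kept.
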
@ -19,16 +19,12 @@
# Google Books api document: https://developers.google.com/books/docs/v1/using
from typing import Dict, List, Optional
from urllib.parse import quote
from datetime import datetime
import requests
from cps import logger
from cps.isoLanguages import get_lang3, get_language_name
from cps.services.Metadata import MetaRecord, MetaSourceInfo, Metadata
log = logger.create()
class Google(Metadata):
__name__ = "Google"
@ -49,12 +45,7 @@ class Google(Metadata):
if title_tokens:
tokens = [quote(t.encode("utf-8")) for t in title_tokens]
query = "+".join(tokens)
try:
results = requests.get(Google.SEARCH_URL + query)
results.raise_for_status()
except Exception as e:
log.warning(e)
return None
results = requests.get(Google.SEARCH_URL + query)
for result in results.json().get("items", []):
val.append(
self._parse_search_result(
@ -82,11 +73,7 @@ class Google(Metadata):
match.description = result["volumeInfo"].get("description", "")
match.languages = self._parse_languages(result=result, locale=locale)
match.publisher = result["volumeInfo"].get("publisher", "")
try:
datetime.strptime(result["volumeInfo"].get("publishedDate", ""), "%Y-%m-%d")
match.publishedDate = result["volumeInfo"].get("publishedDate", "")
except ValueError:
match.publishedDate = ""
match.publishedDate = result["volumeInfo"].get("publishedDate", "")
match.rating = result["volumeInfo"].get("averageRating", 0)
match.series, match.series_index = "", 1
match.tags = result["volumeInfo"].get("categories", [])
@ -108,13 +95,6 @@ class Google(Metadata):
def _parse_cover(result: Dict, generic_cover: str) -> str:
if result["volumeInfo"].get("imageLinks"):
cover_url = result["volumeInfo"]["imageLinks"]["thumbnail"]
# strip curl in cover
cover_url = cover_url.replace("&edge=curl", "")
# request 800x900 cover image (higher resolution)
cover_url += "&fife=w800-h900"
return cover_url.replace("http://", "https://")
return generic_cover

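Two of the removals above are easy to miss in review: master validates publishedDate before storing it, and rewrites the Google cover URL to request a larger image without the page-curl effect. Restated as two small helpers (the names are placeholders, not Calibre-Web functions):

from datetime import datetime

def safe_published_date(volume_info):
    # keep the date only if it is a full YYYY-MM-DD value
    raw = volume_info.get("publishedDate", "")
    try:
        datetime.strptime(raw, "%Y-%m-%d")
        return raw
    except ValueError:
        return ""

def upscale_cover(cover_url):
    # drop the curl effect, request a ~800x900 rendition, force https
    cover_url = cover_url.replace("&edge=curl", "")
    return (cover_url + "&fife=w800-h900").replace("http://", "https://")
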
@ -27,12 +27,9 @@ from html2text import HTML2Text
from lxml.html import HtmlElement, fromstring, tostring
from markdown2 import Markdown
from cps import logger
from cps.isoLanguages import get_language_name
from cps.services.Metadata import MetaRecord, MetaSourceInfo, Metadata
log = logger.create()
SYMBOLS_TO_TRANSLATE = (
"öÖüÜóÓőŐúÚéÉáÁűŰíÍąĄćĆęĘłŁńŃóÓśŚźŹżŻ",
"oOuUoOoOuUeEaAuUiIaAcCeElLnNoOsSzZzZ",
@ -97,14 +94,12 @@ class LubimyCzytac(Metadata):
LANGUAGES = f"{CONTAINER}//dt[contains(text(),'Język:')]{SIBLINGS}/text()"
DESCRIPTION = f"{CONTAINER}//div[@class='collapse-content']"
SERIES = f"{CONTAINER}//span/a[contains(@href,'/cykl/')]/text()"
TRANSLATOR = f"{CONTAINER}//dt[contains(text(),'Tłumacz:')]{SIBLINGS}/a/text()"
DETAILS = "//div[@id='book-details']"
PUBLISH_DATE = "//dt[contains(@title,'Data pierwszego wydania"
FIRST_PUBLISH_DATE = f"{DETAILS}{PUBLISH_DATE} oryginalnego')]{SIBLINGS}[1]/text()"
FIRST_PUBLISH_DATE_PL = f"{DETAILS}{PUBLISH_DATE} polskiego')]{SIBLINGS}[1]/text()"
TAGS = "//a[contains(@href,'/ksiazki/t/')]/text()" # "//nav[@aria-label='breadcrumbs']//a[contains(@href,'/ksiazki/k/')]/span/text()"
TAGS = "//nav[@aria-label='breadcrumb']//a[contains(@href,'/ksiazki/k/')]/text()"
RATING = "//meta[@property='books:rating:value']/@content"
COVER = "//meta[@property='og:image']/@content"
@ -117,12 +112,7 @@ class LubimyCzytac(Metadata):
self, query: str, generic_cover: str = "", locale: str = "en"
) -> Optional[List[MetaRecord]]:
if self.active:
try:
result = requests.get(self._prepare_query(title=query))
result.raise_for_status()
except Exception as e:
log.warning(e)
return None
result = requests.get(self._prepare_query(title=query))
root = fromstring(result.text)
lc_parser = LubimyCzytacParser(root=root, metadata=self)
matches = lc_parser.parse_search_results()
@ -160,7 +150,6 @@ class LubimyCzytac(Metadata):
class LubimyCzytacParser:
PAGES_TEMPLATE = "<p id='strony'>Książka ma {0} stron(y).</p>"
TRANSLATOR_TEMPLATE = "<p id='translator'>Tłumacz: {0}</p>"
PUBLISH_DATE_TEMPLATE = "<p id='pierwsze_wydanie'>Data pierwszego wydania: {0}</p>"
PUBLISH_DATE_PL_TEMPLATE = (
"<p id='pierwsze_wydanie'>Data pierwszego wydania w Polsce: {0}</p>"
@ -211,12 +200,7 @@ class LubimyCzytacParser:
def parse_single_book(
self, match: MetaRecord, generic_cover: str, locale: str
) -> MetaRecord:
try:
response = requests.get(match.url)
response.raise_for_status()
except Exception as e:
log.warning(e)
return None
response = requests.get(match.url)
self.root = fromstring(response.text)
match.cover = self._parse_cover(generic_cover=generic_cover)
match.description = self._parse_description()
@ -349,9 +333,5 @@ class LubimyCzytacParser:
description += LubimyCzytacParser.PUBLISH_DATE_PL_TEMPLATE.format(
first_publish_date_pl.strftime("%d.%m.%Y")
)
translator = self._parse_xpath_node(xpath=LubimyCzytac.TRANSLATOR)
if translator:
description += LubimyCzytacParser.TRANSLATOR_TEMPLATE.format(translator)
return description

@ -28,12 +28,8 @@ try:
except FakeUserAgentError:
raise ImportError("No module named 'scholarly'")
from cps import logger
from cps.services.Metadata import MetaRecord, MetaSourceInfo, Metadata
log = logger.create()
class scholar(Metadata):
__name__ = "Google Scholar"
__id__ = "googlescholar"
@ -48,13 +44,7 @@ class scholar(Metadata):
if title_tokens:
tokens = [quote(t.encode("utf-8")) for t in title_tokens]
query = " ".join(tokens)
try:
scholarly.set_timeout(20)
scholarly.set_retries(2)
scholar_gen = itertools.islice(scholarly.search_pubs(query), 10)
except Exception as e:
log.warning(e)
return list()
scholar_gen = itertools.islice(scholarly.search_pubs(query), 10)
for result in scholar_gen:
match = self._parse_search_result(
result=result, generic_cover="", locale=locale

@ -19,12 +19,18 @@
from flask import session
try:
from flask_dance.consumer.storage.sqla import SQLAlchemyStorage as SQLAlchemyBackend
from flask_dance.consumer.storage.sqla import first, _get_real_user
from flask_dance.consumer.backend.sqla import SQLAlchemyBackend, first, _get_real_user
from sqlalchemy.orm.exc import NoResultFound
backend_resultcode = True # prevent storing values with this resultcode
backend_resultcode = False # prevent storing values with this resultcode
except ImportError:
pass
# fails on flask-dance >1.3, due to renaming
try:
from flask_dance.consumer.storage.sqla import SQLAlchemyStorage as SQLAlchemyBackend
from flask_dance.consumer.storage.sqla import first, _get_real_user
from sqlalchemy.orm.exc import NoResultFound
backend_resultcode = True # prevent storing values with this resultcode
except ImportError:
pass
class OAuthBackend(SQLAlchemyBackend):

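The paired try blocks above exist because flask-dance (>1.3) moved its SQLAlchemy backend from consumer.backend.sqla to consumer.storage.sqla and renamed the class to SQLAlchemyStorage. The intent is the usual version-compatibility shim; a compressed sketch of that intent is below (not a line-for-line copy of either branch of the diff, which swallows ImportError instead of propagating it):

try:
    # flask-dance > 1.3: the SQLAlchemy backend lives in "storage"
    from flask_dance.consumer.storage.sqla import SQLAlchemyStorage as SQLAlchemyBackend
    from flask_dance.consumer.storage.sqla import first, _get_real_user
    backend_resultcode = True   # prevent storing values with this resultcode
except ImportError:
    # older flask-dance: fall back to the pre-rename module path
    from flask_dance.consumer.backend.sqla import SQLAlchemyBackend, first, _get_real_user
    backend_resultcode = False  # prevent storing values with this resultcode
from sqlalchemy.orm.exc import NoResultFound
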
@ -74,7 +74,7 @@ def register_user_with_oauth(user=None):
if len(all_oauth.keys()) == 0:
return
if user is None:
flash(_("Register with %(provider)s", provider=", ".join(list(all_oauth.values()))), category="success")
flash(_(u"Register with %(provider)s", provider=", ".join(list(all_oauth.values()))), category="success")
else:
for oauth_key in all_oauth.keys():
# Find this OAuth token in the database, or create it
@ -134,8 +134,8 @@ def bind_oauth_or_register(provider_id, provider_user_id, redirect_url, provider
# already bind with user, just login
if oauth_entry.user:
login_user(oauth_entry.user)
log.debug("You are now logged in as: '%s'", oauth_entry.user.name)
flash(_("Success! You are now logged in as: %(nickname)s", nickname= oauth_entry.user.name),
log.debug(u"You are now logged in as: '%s'", oauth_entry.user.name)
flash(_(u"you are now logged in as: '%(nickname)s'", nickname= oauth_entry.user.name),
category="success")
return redirect(url_for('web.index'))
else:
@ -145,21 +145,21 @@ def bind_oauth_or_register(provider_id, provider_user_id, redirect_url, provider
try:
ub.session.add(oauth_entry)
ub.session.commit()
flash(_("Link to %(oauth)s Succeeded", oauth=provider_name), category="success")
flash(_(u"Link to %(oauth)s Succeeded", oauth=provider_name), category="success")
log.info("Link to {} Succeeded".format(provider_name))
return redirect(url_for('web.profile'))
except Exception as ex:
log.error_or_exception(ex)
log.debug_or_exception(ex)
ub.session.rollback()
else:
flash(_("Login failed, No User Linked With OAuth Account"), category="error")
flash(_(u"Login failed, No User Linked With OAuth Account"), category="error")
log.info('Login failed, No User Linked With OAuth Account')
return redirect(url_for('web.login'))
# return redirect(url_for('web.login'))
# if config.config_public_reg:
# return redirect(url_for('web.register'))
# else:
# flash(_("Public registration is not enabled"), category="error")
# flash(_(u"Public registration is not enabled"), category="error")
# return redirect(url_for(redirect_url))
except (NoResultFound, AttributeError):
return redirect(url_for(redirect_url))
@ -194,15 +194,15 @@ def unlink_oauth(provider):
ub.session.delete(oauth_entry)
ub.session.commit()
logout_oauth_user()
flash(_("Unlink to %(oauth)s Succeeded", oauth=oauth_check[provider]), category="success")
flash(_(u"Unlink to %(oauth)s Succeeded", oauth=oauth_check[provider]), category="success")
log.info("Unlink to {} Succeeded".format(oauth_check[provider]))
except Exception as ex:
log.error_or_exception(ex)
log.debug_or_exception(ex)
ub.session.rollback()
flash(_("Unlink to %(oauth)s Failed", oauth=oauth_check[provider]), category="error")
flash(_(u"Unlink to %(oauth)s Failed", oauth=oauth_check[provider]), category="error")
except NoResultFound:
log.warning("oauth %s for user %d not found", provider, current_user.id)
flash(_("Not Linked to %(oauth)s", oauth=provider), category="error")
flash(_(u"Not Linked to %(oauth)s", oauth=provider), category="error")
return redirect(url_for('web.profile'))
def generate_oauth_blueprints():
@ -258,13 +258,13 @@ if ub.oauth_support:
@oauth_authorized.connect_via(oauthblueprints[0]['blueprint'])
def github_logged_in(blueprint, token):
if not token:
flash(_("Failed to log in with GitHub."), category="error")
flash(_(u"Failed to log in with GitHub."), category="error")
log.error("Failed to log in with GitHub")
return False
resp = blueprint.session.get("/user")
if not resp.ok:
flash(_("Failed to fetch user info from GitHub."), category="error")
flash(_(u"Failed to fetch user info from GitHub."), category="error")
log.error("Failed to fetch user info from GitHub")
return False
@ -276,13 +276,13 @@ if ub.oauth_support:
@oauth_authorized.connect_via(oauthblueprints[1]['blueprint'])
def google_logged_in(blueprint, token):
if not token:
flash(_("Failed to log in with Google."), category="error")
flash(_(u"Failed to log in with Google."), category="error")
log.error("Failed to log in with Google")
return False
resp = blueprint.session.get("/oauth2/v2/userinfo")
if not resp.ok:
flash(_("Failed to fetch user info from Google."), category="error")
flash(_(u"Failed to fetch user info from Google."), category="error")
log.error("Failed to fetch user info from Google")
return False
@ -295,8 +295,8 @@ if ub.oauth_support:
@oauth_error.connect_via(oauthblueprints[0]['blueprint'])
def github_error(blueprint, error, error_description=None, error_uri=None):
msg = (
"OAuth error from {name}! "
"error={error} description={description} uri={uri}"
u"OAuth error from {name}! "
u"error={error} description={description} uri={uri}"
).format(
name=blueprint.name,
error=error,
@ -308,8 +308,8 @@ if ub.oauth_support:
@oauth_error.connect_via(oauthblueprints[1]['blueprint'])
def google_error(blueprint, error, error_description=None, error_uri=None):
msg = (
"OAuth error from {name}! "
"error={error} description={description} uri={uri}"
u"OAuth error from {name}! "
u"error={error} description={description} uri={uri}"
).format(
name=blueprint.name,
error=error,
@ -329,10 +329,10 @@ def github_login():
if account_info.ok:
account_info_json = account_info.json()
return bind_oauth_or_register(oauthblueprints[0]['id'], account_info_json['id'], 'github.login', 'github')
flash(_("GitHub Oauth error, please retry later."), category="error")
flash(_(u"GitHub Oauth error, please retry later."), category="error")
log.error("GitHub Oauth error, please retry later")
except (InvalidGrantError, TokenExpiredError) as e:
flash(_("GitHub Oauth error: {}").format(e), category="error")
flash(_(u"GitHub Oauth error: {}").format(e), category="error")
log.error(e)
return redirect(url_for('web.login'))
@ -353,10 +353,10 @@ def google_login():
if resp.ok:
account_info_json = resp.json()
return bind_oauth_or_register(oauthblueprints[1]['id'], account_info_json['id'], 'google.login', 'google')
flash(_("Google Oauth error, please retry later."), category="error")
flash(_(u"Google Oauth error, please retry later."), category="error")
log.error("Google Oauth error, please retry later")
except (InvalidGrantError, TokenExpiredError) as e:
flash(_("Google Oauth error: {}").format(e), category="error")
flash(_(u"Google Oauth error: {}").format(e), category="error")
log.error(e)
return redirect(url_for('web.login'))

@ -21,28 +21,53 @@
# along with this program. If not, see <http://www.gnu.org/licenses/>.
import datetime
import json
from urllib.parse import unquote_plus
from functools import wraps
from flask import Blueprint, request, render_template, make_response, abort, Response, g
from flask import Blueprint, request, render_template, Response, g, make_response, abort
from flask_login import current_user
from flask_babel import get_locale
from flask_babel import gettext as _
from sqlalchemy.sql.expression import func, text, or_, and_, true
from sqlalchemy.exc import InvalidRequestError, OperationalError
from . import logger, config, db, calibre_db, ub, isoLanguages, constants
from .usermanagement import requires_basic_auth_if_no_ano
from werkzeug.security import check_password_hash
from tornado.httputil import HTTPServerRequest
from . import constants, logger, config, db, calibre_db, ub, services, get_locale, isoLanguages
from .helper import get_download_link, get_book_cover
from .pagination import Pagination
from .web import render_read_books
from .usermanagement import load_user_from_request
from flask_babel import gettext as _
opds = Blueprint('opds', __name__)
log = logger.create()
def requires_basic_auth_if_no_ano(f):
@wraps(f)
def decorated(*args, **kwargs):
auth = request.authorization
if config.config_anonbrowse != 1:
if not auth or auth.type != 'basic' or not check_auth(auth.username, auth.password):
return authenticate()
return f(*args, **kwargs)
if config.config_login_type == constants.LOGIN_LDAP and services.ldap and config.config_anonbrowse != 1:
return services.ldap.basic_auth_required(f)
return decorated
class FeedObject:
def __init__(self, rating_id, rating_name):
self.rating_id = rating_id
self.rating_name = rating_name
@property
def id(self):
return self.rating_id
@property
def name(self):
return self.rating_name
@opds.route("/opds/")
@opds.route("/opds")
@requires_basic_auth_if_no_ano
@ -56,12 +81,12 @@ def feed_osd():
return render_xml_template('osd.xml', lang='en-EN')
# @opds.route("/opds/search", defaults={'query': ""})
@opds.route("/opds/search", defaults={'query': ""})
@opds.route("/opds/search/<path:query>")
@requires_basic_auth_if_no_ano
def feed_cc_search(query):
# Handle strange query from Libera Reader with + instead of spaces
plus_query = unquote_plus(request.environ['RAW_URI'].split('/opds/search/')[1]).strip()
plus_query = unquote_plus(request.base_url.split('/opds/search/')[1]).strip()
return feed_search(plus_query)
@ -74,7 +99,26 @@ def feed_normal_search():
@opds.route("/opds/books")
@requires_basic_auth_if_no_ano
def feed_booksindex():
return render_element_index(db.Books.sort, None, 'opds.feed_letter_books')
shift = 0
off = int(request.args.get("offset") or 0)
entries = calibre_db.session.query(func.upper(func.substr(db.Books.sort, 1, 1)).label('id'))\
.filter(calibre_db.common_filters()).group_by(func.upper(func.substr(db.Books.sort, 1, 1))).all()
elements = []
if off == 0:
elements.append({'id': "00", 'name':_("All")})
shift = 1
for entry in entries[
off + shift - 1:
int(off + int(config.config_books_per_page) - shift)]:
elements.append({'id': entry.id, 'name': entry.id})
pagination = Pagination((int(off) / (int(config.config_books_per_page)) + 1), config.config_books_per_page,
len(entries) + 1)
return render_xml_template('feed.xml',
letterelements=elements,
folder='opds.feed_letter_books',
pagination=pagination)
@opds.route("/opds/books/letter/<book_id>")
@ -85,8 +129,7 @@ def feed_letter_books(book_id):
entries, __, pagination = calibre_db.fill_indexpage((int(off) / (int(config.config_books_per_page)) + 1), 0,
db.Books,
letter,
[db.Books.sort],
True, config.config_read_column)
[db.Books.sort])
return render_xml_template('feed.xml', entries=entries, pagination=pagination)
@ -94,22 +137,17 @@ def feed_letter_books(book_id):
@opds.route("/opds/new")
@requires_basic_auth_if_no_ano
def feed_new():
if not current_user.check_visibility(constants.SIDEBAR_RECENT):
abort(404)
off = request.args.get("offset") or 0
entries, __, pagination = calibre_db.fill_indexpage((int(off) / (int(config.config_books_per_page)) + 1), 0,
db.Books, True, [db.Books.timestamp.desc()],
True, config.config_read_column)
db.Books, True, [db.Books.timestamp.desc()])
return render_xml_template('feed.xml', entries=entries, pagination=pagination)
@opds.route("/opds/discover")
@requires_basic_auth_if_no_ano
def feed_discover():
if not current_user.check_visibility(constants.SIDEBAR_RANDOM):
abort(404)
query = calibre_db.generate_linked_query(config.config_read_column, db.Books)
entries = query.filter(calibre_db.common_filters()).order_by(func.random()).limit(config.config_books_per_page)
entries = calibre_db.session.query(db.Books).filter(calibre_db.common_filters()).order_by(func.random())\
.limit(config.config_books_per_page)
pagination = Pagination(1, config.config_books_per_page, int(config.config_books_per_page))
return render_xml_template('feed.xml', entries=entries, pagination=pagination)
@ -117,53 +155,64 @@ def feed_discover():
@opds.route("/opds/rated")
@requires_basic_auth_if_no_ano
def feed_best_rated():
if not current_user.check_visibility(constants.SIDEBAR_BEST_RATED):
abort(404)
off = request.args.get("offset") or 0
entries, __, pagination = calibre_db.fill_indexpage((int(off) / (int(config.config_books_per_page)) + 1), 0,
db.Books, db.Books.ratings.any(db.Ratings.rating > 9),
[db.Books.timestamp.desc()],
True, config.config_read_column)
[db.Books.timestamp.desc()])
return render_xml_template('feed.xml', entries=entries, pagination=pagination)
@opds.route("/opds/hot")
@requires_basic_auth_if_no_ano
def feed_hot():
if not current_user.check_visibility(constants.SIDEBAR_HOT):
abort(404)
off = request.args.get("offset") or 0
all_books = ub.session.query(ub.Downloads, func.count(ub.Downloads.book_id)).order_by(
func.count(ub.Downloads.book_id).desc()).group_by(ub.Downloads.book_id)
hot_books = all_books.offset(off).limit(config.config_books_per_page)
entries = list()
for book in hot_books:
query = calibre_db.generate_linked_query(config.config_read_column, db.Books)
download_book = query.filter(calibre_db.common_filters()).filter(
book.Downloads.book_id == db.Books.id).first()
if download_book:
entries.append(download_book)
downloadBook = calibre_db.get_book(book.Downloads.book_id)
if downloadBook:
entries.append(
calibre_db.get_filtered_book(book.Downloads.book_id)
)
else:
ub.delete_download(book.Downloads.book_id)
num_books = entries.__len__()
numBooks = entries.__len__()
pagination = Pagination((int(off) / (int(config.config_books_per_page)) + 1),
config.config_books_per_page, num_books)
config.config_books_per_page, numBooks)
return render_xml_template('feed.xml', entries=entries, pagination=pagination)
@opds.route("/opds/author")
@requires_basic_auth_if_no_ano
def feed_authorindex():
if not current_user.check_visibility(constants.SIDEBAR_AUTHOR):
abort(404)
return render_element_index(db.Authors.sort, db.books_authors_link, 'opds.feed_letter_author')
shift = 0
off = int(request.args.get("offset") or 0)
entries = calibre_db.session.query(func.upper(func.substr(db.Authors.sort, 1, 1)).label('id'))\
.join(db.books_authors_link).join(db.Books).filter(calibre_db.common_filters())\
.group_by(func.upper(func.substr(db.Authors.sort, 1, 1))).all()
elements = []
if off == 0:
elements.append({'id': "00", 'name':_("All")})
shift = 1
for entry in entries[
off + shift - 1:
int(off + int(config.config_books_per_page) - shift)]:
elements.append({'id': entry.id, 'name': entry.id})
pagination = Pagination((int(off) / (int(config.config_books_per_page)) + 1), config.config_books_per_page,
len(entries) + 1)
return render_xml_template('feed.xml',
letterelements=elements,
folder='opds.feed_letter_author',
pagination=pagination)
@opds.route("/opds/author/letter/<book_id>")
@requires_basic_auth_if_no_ano
def feed_letter_author(book_id):
if not current_user.check_visibility(constants.SIDEBAR_AUTHOR):
abort(404)
off = request.args.get("offset") or 0
letter = true() if book_id == "00" else func.upper(db.Authors.sort).startswith(book_id)
entries = calibre_db.session.query(db.Authors).join(db.books_authors_link).join(db.Books)\
@ -179,14 +228,17 @@ def feed_letter_author(book_id):
@opds.route("/opds/author/<int:book_id>")
@requires_basic_auth_if_no_ano
def feed_author(book_id):
return render_xml_dataset(db.Authors, book_id)
off = request.args.get("offset") or 0
entries, __, pagination = calibre_db.fill_indexpage((int(off) / (int(config.config_books_per_page)) + 1), 0,
db.Books,
db.Books.authors.any(db.Authors.id == book_id),
[db.Books.timestamp.desc()])
return render_xml_template('feed.xml', entries=entries, pagination=pagination)
@opds.route("/opds/publisher")
@requires_basic_auth_if_no_ano
def feed_publisherindex():
if not current_user.check_visibility(constants.SIDEBAR_PUBLISHER):
abort(404)
off = request.args.get("offset") or 0
entries = calibre_db.session.query(db.Publishers)\
.join(db.books_publishers_link)\
@ -202,22 +254,41 @@ def feed_publisherindex():
@opds.route("/opds/publisher/<int:book_id>")
@requires_basic_auth_if_no_ano
def feed_publisher(book_id):
return render_xml_dataset(db.Publishers, book_id)
off = request.args.get("offset") or 0
entries, __, pagination = calibre_db.fill_indexpage((int(off) / (int(config.config_books_per_page)) + 1), 0,
db.Books,
db.Books.publishers.any(db.Publishers.id == book_id),
[db.Books.timestamp.desc()])
return render_xml_template('feed.xml', entries=entries, pagination=pagination)
@opds.route("/opds/category")
@requires_basic_auth_if_no_ano
def feed_categoryindex():
if not current_user.check_visibility(constants.SIDEBAR_CATEGORY):
abort(404)
return render_element_index(db.Tags.name, db.books_tags_link, 'opds.feed_letter_category')
shift = 0
off = int(request.args.get("offset") or 0)
entries = calibre_db.session.query(func.upper(func.substr(db.Tags.name, 1, 1)).label('id'))\
.join(db.books_tags_link).join(db.Books).filter(calibre_db.common_filters())\
.group_by(func.upper(func.substr(db.Tags.name, 1, 1))).all()
elements = []
if off == 0:
elements.append({'id': "00", 'name':_("All")})
shift = 1
for entry in entries[
off + shift - 1:
int(off + int(config.config_books_per_page) - shift)]:
elements.append({'id': entry.id, 'name': entry.id})
pagination = Pagination((int(off) / (int(config.config_books_per_page)) + 1), config.config_books_per_page,
len(entries) + 1)
return render_xml_template('feed.xml',
letterelements=elements,
folder='opds.feed_letter_category',
pagination=pagination)
@opds.route("/opds/category/letter/<book_id>")
@requires_basic_auth_if_no_ano
def feed_letter_category(book_id):
if not current_user.check_visibility(constants.SIDEBAR_CATEGORY):
abort(404)
off = request.args.get("offset") or 0
letter = true() if book_id == "00" else func.upper(db.Tags.name).startswith(book_id)
entries = calibre_db.session.query(db.Tags)\
@ -235,22 +306,40 @@ def feed_letter_category(book_id):
@opds.route("/opds/category/<int:book_id>")
@requires_basic_auth_if_no_ano
def feed_category(book_id):
return render_xml_dataset(db.Tags, book_id)
off = request.args.get("offset") or 0
entries, __, pagination = calibre_db.fill_indexpage((int(off) / (int(config.config_books_per_page)) + 1), 0,
db.Books,
db.Books.tags.any(db.Tags.id == book_id),
[db.Books.timestamp.desc()])
return render_xml_template('feed.xml', entries=entries, pagination=pagination)
@opds.route("/opds/series")
@requires_basic_auth_if_no_ano
def feed_seriesindex():
if not current_user.check_visibility(constants.SIDEBAR_SERIES):
abort(404)
return render_element_index(db.Series.sort, db.books_series_link, 'opds.feed_letter_series')
shift = 0
off = int(request.args.get("offset") or 0)
entries = calibre_db.session.query(func.upper(func.substr(db.Series.sort, 1, 1)).label('id'))\
.join(db.books_series_link).join(db.Books).filter(calibre_db.common_filters())\
.group_by(func.upper(func.substr(db.Series.sort, 1, 1))).all()
elements = []
if off == 0:
elements.append({'id': "00", 'name':_("All")})
shift = 1
for entry in entries[
off + shift - 1:
int(off + int(config.config_books_per_page) - shift)]:
elements.append({'id': entry.id, 'name': entry.id})
pagination = Pagination((int(off) / (int(config.config_books_per_page)) + 1), config.config_books_per_page,
len(entries) + 1)
return render_xml_template('feed.xml',
letterelements=elements,
folder='opds.feed_letter_series',
pagination=pagination)
@opds.route("/opds/series/letter/<book_id>")
@requires_basic_auth_if_no_ano
def feed_letter_series(book_id):
if not current_user.check_visibility(constants.SIDEBAR_SERIES):
abort(404)
off = request.args.get("offset") or 0
letter = true() if book_id == "00" else func.upper(db.Series.sort).startswith(book_id)
entries = calibre_db.session.query(db.Series)\
@ -272,19 +361,16 @@ def feed_series(book_id):
entries, __, pagination = calibre_db.fill_indexpage((int(off) / (int(config.config_books_per_page)) + 1), 0,
db.Books,
db.Books.series.any(db.Series.id == book_id),
[db.Books.series_index],
True, config.config_read_column)
[db.Books.series_index])
return render_xml_template('feed.xml', entries=entries, pagination=pagination)
@opds.route("/opds/ratings")
@requires_basic_auth_if_no_ano
def feed_ratingindex():
if not current_user.check_visibility(constants.SIDEBAR_RATING):
abort(404)
off = request.args.get("offset") or 0
entries = calibre_db.session.query(db.Ratings, func.count('books_ratings_link.book').label('count'),
(db.Ratings.rating / 2).label('name')) \
(db.Ratings.rating / 2).label('name')) \
.join(db.books_ratings_link)\
.join(db.Books)\
.filter(calibre_db.common_filters()) \
@ -302,14 +388,17 @@ def feed_ratingindex():
@opds.route("/opds/ratings/<book_id>")
@requires_basic_auth_if_no_ano
def feed_ratings(book_id):
return render_xml_dataset(db.Ratings, book_id)
off = request.args.get("offset") or 0
entries, __, pagination = calibre_db.fill_indexpage((int(off) / (int(config.config_books_per_page)) + 1), 0,
db.Books,
db.Books.ratings.any(db.Ratings.id == book_id),
[db.Books.timestamp.desc()])
return render_xml_template('feed.xml', entries=entries, pagination=pagination)
@opds.route("/opds/formats")
@requires_basic_auth_if_no_ano
def feed_formatindex():
if not current_user.check_visibility(constants.SIDEBAR_FORMAT):
abort(404)
off = request.args.get("offset") or 0
entries = calibre_db.session.query(db.Data).join(db.Books)\
.filter(calibre_db.common_filters()) \
@ -317,6 +406,7 @@ def feed_formatindex():
.order_by(db.Data.format).all()
pagination = Pagination((int(off) / (int(config.config_books_per_page)) + 1), config.config_books_per_page,
len(entries))
element = list()
for entry in entries:
element.append(FeedObject(entry.format, entry.format))
@ -330,8 +420,7 @@ def feed_format(book_id):
entries, __, pagination = calibre_db.fill_indexpage((int(off) / (int(config.config_books_per_page)) + 1), 0,
db.Books,
db.Books.data.any(db.Data.format == book_id.upper()),
[db.Books.timestamp.desc()],
True, config.config_read_column)
[db.Books.timestamp.desc()])
return render_xml_template('feed.xml', entries=entries, pagination=pagination)
@ -339,10 +428,8 @@ def feed_format(book_id):
@opds.route("/opds/language/")
@requires_basic_auth_if_no_ano
def feed_languagesindex():
if not current_user.check_visibility(constants.SIDEBAR_LANGUAGE):
abort(404)
off = request.args.get("offset") or 0
if current_user.filter_language() == "all":
if current_user.filter_language() == u"all":
languages = calibre_db.speaking_language()
else:
languages = calibre_db.session.query(db.Languages).filter(
@ -360,19 +447,15 @@ def feed_languages(book_id):
entries, __, pagination = calibre_db.fill_indexpage((int(off) / (int(config.config_books_per_page)) + 1), 0,
db.Books,
db.Books.languages.any(db.Languages.id == book_id),
[db.Books.timestamp.desc()],
True, config.config_read_column)
[db.Books.timestamp.desc()])
return render_xml_template('feed.xml', entries=entries, pagination=pagination)
@opds.route("/opds/shelfindex")
@requires_basic_auth_if_no_ano
def feed_shelfindex():
if not (current_user.is_authenticated or g.allow_anonymous):
abort(404)
off = request.args.get("offset") or 0
shelf = ub.session.query(ub.Shelf).filter(
or_(ub.Shelf.is_public == 1, ub.Shelf.user_id == current_user.id)).order_by(ub.Shelf.name).all()
shelf = g.shelves_access
number = len(shelf)
pagination = Pagination((int(off) / (int(config.config_books_per_page)) + 1), config.config_books_per_page,
number)
@ -382,8 +465,6 @@ def feed_shelfindex():
@opds.route("/opds/shelf/<int:book_id>")
@requires_basic_auth_if_no_ano
def feed_shelf(book_id):
if not (current_user.is_authenticated or g.allow_anonymous):
abort(404)
off = request.args.get("offset") or 0
if current_user.is_anonymous:
shelf = ub.session.query(ub.Shelf).filter(ub.Shelf.is_public == 1,
@ -396,32 +477,24 @@ def feed_shelf(book_id):
result = list()
# user is allowed to access shelf
if shelf:
result, __, pagination = calibre_db.fill_indexpage((int(off) / (int(config.config_books_per_page)) + 1),
config.config_books_per_page,
db.Books,
ub.BookShelf.shelf == shelf.id,
[ub.BookShelf.order.asc()],
True, config.config_read_column,
ub.BookShelf, ub.BookShelf.book_id == db.Books.id)
# delete shelf entries where book is not existent anymore, can happen if book is deleted outside calibre-web
wrong_entries = calibre_db.session.query(ub.BookShelf) \
.join(db.Books, ub.BookShelf.book_id == db.Books.id, isouter=True) \
.filter(db.Books.id == None).all()
for entry in wrong_entries:
log.info('Not existing book {} in {} deleted'.format(entry.book_id, shelf))
try:
ub.session.query(ub.BookShelf).filter(ub.BookShelf.book_id == entry.book_id).delete()
ub.session.commit()
except (OperationalError, InvalidRequestError) as e:
ub.session.rollback()
log.error_or_exception("Settings Database error: {}".format(e))
books_in_shelf = ub.session.query(ub.BookShelf).filter(ub.BookShelf.shelf == book_id).order_by(
ub.BookShelf.order.asc()).all()
for book in books_in_shelf:
cur_book = calibre_db.get_book(book.book_id)
result.append(cur_book)
pagination = Pagination((int(off) / (int(config.config_books_per_page)) + 1), config.config_books_per_page,
len(result))
return render_xml_template('feed.xml', entries=result, pagination=pagination)
@opds.route("/opds/download/<book_id>/<book_format>/")
@requires_basic_auth_if_no_ano
def opds_download_link(book_id, book_format):
if not current_user.role_download():
# I gave up with this: With enabled ldap login, the user doesn't get logged in, therefore it's always guest
# workaround, loading the user from the request and checking it's download rights here
# in case of anonymous browsing user is None
user = load_user_from_request(request) or current_user
if not user.role_download():
return abort(403)
if "Kobo" in request.headers.get('User-Agent'):
client = "kobo"
@ -444,15 +517,46 @@ def get_metadata_calibre_companion(uuid, library):
return ""
@opds.route("/opds/stats")
@requires_basic_auth_if_no_ano
def get_database_stats():
stat = dict()
stat['books'] = calibre_db.session.query(db.Books).count()
stat['authors'] = calibre_db.session.query(db.Authors).count()
stat['categories'] = calibre_db.session.query(db.Tags).count()
stat['series'] = calibre_db.session.query(db.Series).count()
return Response(json.dumps(stat), mimetype="application/json")
def feed_search(term):
if term:
entries, __, ___ = calibre_db.get_search_results(term, config_read_column=config.config_read_column)
entries_count = len(entries) if len(entries) > 0 else 1
pagination = Pagination(1, entries_count, entries_count)
items = [entry[0] for entry in entries]
return render_xml_template('feed.xml', searchterm=term, entries=items, pagination=pagination)
else:
return render_xml_template('feed.xml', searchterm="")
def check_auth(username, password):
try:
username = username.encode('windows-1252')
except UnicodeEncodeError:
username = username.encode('utf-8')
user = ub.session.query(ub.User).filter(func.lower(ub.User.name) ==
username.decode('utf-8').lower()).first()
if bool(user and check_password_hash(str(user.password), password)):
return True
else:
ip_Address = request.headers.get('X-Forwarded-For', request.remote_addr)
log.warning('OPDS Login failed for user "%s" IP-address: %s', username.decode('utf-8'), ip_Address)
return False
def authenticate():
return Response(
'Could not verify your access level for that URL.\n'
'You have to login with proper credentials', 401,
{'WWW-Authenticate': 'Basic realm="Login Required"'})
def render_xml_template(*args, **kwargs):
# ToDo: return time in current timezone similar to %z
currtime = datetime.datetime.now().strftime("%Y-%m-%dT%H:%M:%S+00:00")
xml = render_template(current_time=currtime, instance=config.config_calibre_web_title, *args, **kwargs)
response = make_response(xml)
response.headers["Content-Type"] = "application/atom+xml; charset=utf-8"
return response
@opds.route("/opds/thumb_240_240/<book_id>")
@ -467,8 +571,6 @@ def feed_get_cover(book_id):
@opds.route("/opds/readbooks")
@requires_basic_auth_if_no_ano
def feed_read_books():
if not (current_user.check_visibility(constants.SIDEBAR_READ_AND_UNREAD) and not current_user.is_anonymous):
return abort(403)
off = request.args.get("offset") or 0
result, pagination = render_read_books(int(off) / (int(config.config_books_per_page)) + 1, True, True)
return render_xml_template('feed.xml', entries=result, pagination=pagination)
@ -477,76 +579,6 @@ def feed_read_books():
@opds.route("/opds/unreadbooks")
@requires_basic_auth_if_no_ano
def feed_unread_books():
if not (current_user.check_visibility(constants.SIDEBAR_READ_AND_UNREAD) and not current_user.is_anonymous):
return abort(403)
off = request.args.get("offset") or 0
result, pagination = render_read_books(int(off) / (int(config.config_books_per_page)) + 1, False, True)
return render_xml_template('feed.xml', entries=result, pagination=pagination)
class FeedObject:
def __init__(self, rating_id, rating_name):
self.rating_id = rating_id
self.rating_name = rating_name
@property
def id(self):
return self.rating_id
@property
def name(self):
return self.rating_name
def feed_search(term):
if term:
entries, __, ___ = calibre_db.get_search_results(term, config=config)
entries_count = len(entries) if len(entries) > 0 else 1
pagination = Pagination(1, entries_count, entries_count)
return render_xml_template('feed.xml', searchterm=term, entries=entries, pagination=pagination)
else:
return render_xml_template('feed.xml', searchterm="")
def render_xml_template(*args, **kwargs):
# ToDo: return time in current timezone similar to %z
currtime = datetime.datetime.now().strftime("%Y-%m-%dT%H:%M:%S+00:00")
xml = render_template(current_time=currtime, instance=config.config_calibre_web_title, constants=constants.sidebar_settings, *args, **kwargs)
response = make_response(xml)
response.headers["Content-Type"] = "application/atom+xml; charset=utf-8"
return response
def render_xml_dataset(data_table, book_id):
off = request.args.get("offset") or 0
entries, __, pagination = calibre_db.fill_indexpage((int(off) / (int(config.config_books_per_page)) + 1), 0,
db.Books,
getattr(db.Books, data_table.__tablename__).any(data_table.id == book_id),
[db.Books.timestamp.desc()],
True, config.config_read_column)
return render_xml_template('feed.xml', entries=entries, pagination=pagination)
def render_element_index(database_column, linked_table, folder):
shift = 0
off = int(request.args.get("offset") or 0)
entries = calibre_db.session.query(func.upper(func.substr(database_column, 1, 1)).label('id'), None, None)
# query = calibre_db.generate_linked_query(config.config_read_column, db.Books)
if linked_table is not None:
entries = entries.join(linked_table).join(db.Books)
entries = entries.filter(calibre_db.common_filters()).group_by(func.upper(func.substr(database_column, 1, 1))).all()
elements = []
if off == 0 and entries:
elements.append({'id': "00", 'name': _("All")})
shift = 1
for entry in entries[
off + shift - 1:
int(off + int(config.config_books_per_page) - shift)]:
elements.append({'id': entry.id, 'name': entry.id})
pagination = Pagination((int(off) / (int(config.config_books_per_page)) + 1), config.config_books_per_page,
len(entries) + 1)
return render_xml_template('feed.xml',
letterelements=elements,
folder=folder,
pagination=pagination)

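A worked example of the offset arithmetic repeated by every feed above: with config.config_books_per_page = 30 and a request for ?offset=60, the expression int(off) / int(config.config_books_per_page) + 1 evaluates to 60 / 30 + 1 = 3.0, so fill_indexpage() and Pagination() are asked for the third page. In the letter indexes, the extra shift of 1 reserves the first slot of page one for the synthetic "All" entry.
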
@ -57,10 +57,10 @@ class Pagination(object):
def has_next(self):
return self.page < self.pages
# right_edge: last right_edges count of all pages are shown as number, means, if 10 pages are paginated -> 9,10 shown
# left_edge: first left_edges count of all pages are shown as number -> 1,2 shown
# left_current: left_current count below current page are shown as number, means if current page 5 -> 3,4 shown
# left_current: right_current count above current page are shown as number, means if current page 5 -> 6,7 shown
# right_edge: last right_edges count of all pages are shown as number, means, if 10 pages are paginated -> 9,10 shwn
# left_edge: first left_edges count of all pages are shown as number -> 1,2 shwn
# left_current: left_current count below current page are shown as number, means if current page 5 -> 3,4 shwn
# left_current: right_current count above current page are shown as number, means if current page 5 -> 6,7 shwn
def iter_pages(self, left_edge=2, left_current=2,
right_current=4, right_edge=2):
last = 0

@ -25,11 +25,12 @@
# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING
# IN ANY WAY OUT OF THE USE OF THIS SOFTWARE AND DOCUMENTATION, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
# https://web.archive.org/web/20120517003641/http://flask.pocoo.org/snippets/62/
# http://flask.pocoo.org/snippets/62/
from urllib.parse import urlparse, urljoin
from flask import request, url_for, redirect, current_app
from flask import request, url_for, redirect
def is_safe_url(target):
@ -38,15 +39,16 @@ def is_safe_url(target):
return test_url.scheme in ('http', 'https') and ref_url.netloc == test_url.netloc
def remove_prefix(text, prefix):
if text.startswith(prefix):
return text[len(prefix):]
return ""
def get_redirect_target():
for target in request.values.get('next'), request.referrer:
if not target:
continue
if is_safe_url(target):
return target
def get_redirect_location(next, endpoint, **values):
target = next or url_for(endpoint, **values)
adapter = current_app.url_map.bind(urlparse(request.host_url).netloc)
if not len(adapter.allowed_methods(remove_prefix(target, request.environ.get('HTTP_X_SCRIPT_NAME',"")))):
def redirect_back(endpoint, **values):
target = request.form['next']
if not target or not is_safe_url(target):
target = url_for(endpoint, **values)
return target
return redirect(target)
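
The helpers above follow the classic "Flask snippet 62" pattern: a redirect target taken from ?next= or the referrer is only followed when it resolves to http(s) on the same host as the request, otherwise the caller falls back to a known endpoint. A minimal sketch of that check outside of Flask, with the request's host URL passed in explicitly for illustration:

# Standalone sketch of the same-host redirect check shown above.
from urllib.parse import urljoin, urlparse

def is_safe_url(target, host_url):
    ref_url = urlparse(host_url)
    test_url = urlparse(urljoin(host_url, target))
    return test_url.scheme in ("http", "https") and ref_url.netloc == test_url.netloc

host = "https://books.example.org/"                 # hypothetical deployment URL
print(is_safe_url("/shelf/1", host))                # True: relative path, same host
print(is_safe_url("https://evil.example/", host))   # False: different host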

@ -22,7 +22,6 @@
import json
from datetime import datetime
from functools import wraps
from flask import Blueprint, request, make_response, abort, url_for, flash, redirect
from flask_login import login_required, current_user, login_user
@ -32,6 +31,10 @@ from sqlalchemy.sql.expression import true
from . import config, logger, ub
from .render_template import render_title_template
try:
from functools import wraps
except ImportError:
pass # We're not using Python 3
remotelogin = Blueprint('remotelogin', __name__)
log = logger.create()
@ -58,8 +61,8 @@ def remote_login():
ub.session.add(auth_token)
ub.session_commit()
verify_url = url_for('remotelogin.verify_token', token=auth_token.auth_token, _external=true)
log.debug("Remot Login request with token: %s", auth_token.auth_token)
return render_title_template('remote_login.html', title=_("Login"), token=auth_token.auth_token,
log.debug(u"Remot Login request with token: %s", auth_token.auth_token)
return render_title_template('remote_login.html', title=_(u"Login"), token=auth_token.auth_token,
verify_url=verify_url, page="remotelogin")
@ -71,8 +74,8 @@ def verify_token(token):
# Token not found
if auth_token is None:
flash(_("Token not found"), category="error")
log.error("Remote Login token not found")
flash(_(u"Token not found"), category="error")
log.error(u"Remote Login token not found")
return redirect(url_for('web.index'))
# Token expired
@ -80,8 +83,8 @@ def verify_token(token):
ub.session.delete(auth_token)
ub.session_commit()
flash(_("Token has expired"), category="error")
log.error("Remote Login token expired")
flash(_(u"Token has expired"), category="error")
log.error(u"Remote Login token expired")
return redirect(url_for('web.index'))
# Update token with user information
@ -89,8 +92,8 @@ def verify_token(token):
auth_token.verified = True
ub.session_commit()
flash(_("Success! Please return to your device"), category="success")
log.debug("Remote Login token for userid %s verified", auth_token.user_id)
flash(_(u"Success! Please return to your device"), category="success")
log.debug(u"Remote Login token for userid %s verified", auth_token.user_id)
return redirect(url_for('web.index'))
@ -105,7 +108,7 @@ def token_verified():
# Token not found
if auth_token is None:
data['status'] = 'error'
data['message'] = _("Token not found")
data['message'] = _(u"Token not found")
# Token expired
elif datetime.now() > auth_token.expiration:
@ -113,7 +116,7 @@ def token_verified():
ub.session_commit()
data['status'] = 'error'
data['message'] = _("Token has expired")
data['message'] = _(u"Token has expired")
elif not auth_token.verified:
data['status'] = 'not_verified'
@ -126,8 +129,8 @@ def token_verified():
ub.session_commit("User {} logged in via remotelogin, token deleted".format(user.name))
data['status'] = 'success'
log.debug("Remote Login for userid %s succeeded", user.id)
flash(_("Success! You are now logged in as: %(nickname)s", nickname=user.name), category="success")
log.debug(u"Remote Login for userid %s succeded", user.id)
flash(_(u"you are now logged in as: '%(nickname)s'", nickname=user.name), category="success")
response = make_response(json.dumps(data, ensure_ascii=False))
response.headers["Content-Type"] = "application/json; charset=utf-8"

@ -16,23 +16,20 @@
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
from flask import render_template, g, abort, request
from flask import render_template
from flask_babel import gettext as _
from flask import g
from werkzeug.local import LocalProxy
from flask_login import current_user
from sqlalchemy.sql.expression import or_
from . import config, constants, logger, ub
from . import config, constants, ub, logger, db, calibre_db
from .ub import User
log = logger.create()
def get_sidebar_config(kwargs=None):
kwargs = kwargs or []
simple = bool([e for e in ['kindle', 'tolino', "kobo", "bookeen"]
if (e in request.headers.get('User-Agent', "").lower())])
if 'content' in kwargs:
content = kwargs['content']
content = isinstance(content, (User, LocalProxy)) and not content.role_anonymous()
@ -47,12 +44,12 @@ def get_sidebar_config(kwargs=None):
"show_text": _('Show Hot Books'), "config_show": True})
if current_user.role_admin():
sidebar.append({"glyph": "glyphicon-download", "text": _('Downloaded Books'), "link": 'web.download_list',
"id": "download", "visibility": constants.SIDEBAR_DOWNLOAD, 'public': (not current_user.is_anonymous),
"id": "download", "visibility": constants.SIDEBAR_DOWNLOAD, 'public': (not g.user.is_anonymous),
"page": "download", "show_text": _('Show Downloaded Books'),
"config_show": content})
else:
sidebar.append({"glyph": "glyphicon-download", "text": _('Downloaded Books'), "link": 'web.books_list',
"id": "download", "visibility": constants.SIDEBAR_DOWNLOAD, 'public': (not current_user.is_anonymous),
"id": "download", "visibility": constants.SIDEBAR_DOWNLOAD, 'public': (not g.user.is_anonymous),
"page": "download", "show_text": _('Show Downloaded Books'),
"config_show": content})
sidebar.append(
@ -60,60 +57,66 @@ def get_sidebar_config(kwargs=None):
"visibility": constants.SIDEBAR_BEST_RATED, 'public': True, "page": "rated",
"show_text": _('Show Top Rated Books'), "config_show": True})
sidebar.append({"glyph": "glyphicon-eye-open", "text": _('Read Books'), "link": 'web.books_list', "id": "read",
"visibility": constants.SIDEBAR_READ_AND_UNREAD, 'public': (not current_user.is_anonymous),
"page": "read", "show_text": _('Show Read and Unread'), "config_show": content})
"visibility": constants.SIDEBAR_READ_AND_UNREAD, 'public': (not g.user.is_anonymous),
"page": "read", "show_text": _('Show read and unread'), "config_show": content})
sidebar.append(
{"glyph": "glyphicon-eye-close", "text": _('Unread Books'), "link": 'web.books_list', "id": "unread",
"visibility": constants.SIDEBAR_READ_AND_UNREAD, 'public': (not current_user.is_anonymous), "page": "unread",
"visibility": constants.SIDEBAR_READ_AND_UNREAD, 'public': (not g.user.is_anonymous), "page": "unread",
"show_text": _('Show unread'), "config_show": False})
sidebar.append({"glyph": "glyphicon-random", "text": _('Discover'), "link": 'web.books_list', "id": "rand",
"visibility": constants.SIDEBAR_RANDOM, 'public': True, "page": "discover",
"show_text": _('Show Random Books'), "config_show": True})
sidebar.append({"glyph": "glyphicon-inbox", "text": _('Categories'), "link": 'web.category_list', "id": "cat",
"visibility": constants.SIDEBAR_CATEGORY, 'public': True, "page": "category",
"show_text": _('Show Category Section'), "config_show": True})
"show_text": _('Show category selection'), "config_show": True})
sidebar.append({"glyph": "glyphicon-bookmark", "text": _('Series'), "link": 'web.series_list', "id": "serie",
"visibility": constants.SIDEBAR_SERIES, 'public': True, "page": "series",
"show_text": _('Show Series Section'), "config_show": True})
"show_text": _('Show series selection'), "config_show": True})
sidebar.append({"glyph": "glyphicon-user", "text": _('Authors'), "link": 'web.author_list', "id": "author",
"visibility": constants.SIDEBAR_AUTHOR, 'public': True, "page": "author",
"show_text": _('Show Author Section'), "config_show": True})
"show_text": _('Show author selection'), "config_show": True})
sidebar.append(
{"glyph": "glyphicon-text-size", "text": _('Publishers'), "link": 'web.publisher_list', "id": "publisher",
"visibility": constants.SIDEBAR_PUBLISHER, 'public': True, "page": "publisher",
"show_text": _('Show Publisher Section'), "config_show":True})
"show_text": _('Show publisher selection'), "config_show":True})
sidebar.append({"glyph": "glyphicon-flag", "text": _('Languages'), "link": 'web.language_overview', "id": "lang",
"visibility": constants.SIDEBAR_LANGUAGE, 'public': (current_user.filter_language() == 'all'),
"visibility": constants.SIDEBAR_LANGUAGE, 'public': (g.user.filter_language() == 'all'),
"page": "language",
"show_text": _('Show Language Section'), "config_show": True})
"show_text": _('Show language selection'), "config_show": True})
sidebar.append({"glyph": "glyphicon-star-empty", "text": _('Ratings'), "link": 'web.ratings_list', "id": "rate",
"visibility": constants.SIDEBAR_RATING, 'public': True,
"page": "rating", "show_text": _('Show Ratings Section'), "config_show": True})
"page": "rating", "show_text": _('Show ratings selection'), "config_show": True})
sidebar.append({"glyph": "glyphicon-file", "text": _('File formats'), "link": 'web.formats_list', "id": "format",
"visibility": constants.SIDEBAR_FORMAT, 'public': True,
"page": "format", "show_text": _('Show File Formats Section'), "config_show": True})
"page": "format", "show_text": _('Show file formats selection'), "config_show": True})
sidebar.append(
{"glyph": "glyphicon-trash", "text": _('Archived Books'), "link": 'web.books_list', "id": "archived",
"visibility": constants.SIDEBAR_ARCHIVED, 'public': (not current_user.is_anonymous), "page": "archived",
"show_text": _('Show Archived Books'), "config_show": content})
if not simple:
sidebar.append(
{"glyph": "glyphicon-th-list", "text": _('Books List'), "link": 'web.books_table', "id": "list",
"visibility": constants.SIDEBAR_LIST, 'public': (not current_user.is_anonymous), "page": "list",
"show_text": _('Show Books List'), "config_show": content})
g.shelves_access = ub.session.query(ub.Shelf).filter(
or_(ub.Shelf.is_public == 1, ub.Shelf.user_id == current_user.id)).order_by(ub.Shelf.name).all()
"visibility": constants.SIDEBAR_ARCHIVED, 'public': (not g.user.is_anonymous), "page": "archived",
"show_text": _('Show archived books'), "config_show": content})
sidebar.append(
{"glyph": "glyphicon-th-list", "text": _('Books List'), "link": 'web.books_table', "id": "list",
"visibility": constants.SIDEBAR_LIST, 'public': (not g.user.is_anonymous), "page": "list",
"show_text": _('Show Books List'), "config_show": content})
return sidebar, simple
return sidebar
def get_readbooks_ids():
if not config.config_read_column:
readBooks = ub.session.query(ub.ReadBook).filter(ub.ReadBook.user_id == int(current_user.id))\
.filter(ub.ReadBook.read_status == ub.ReadBook.STATUS_FINISHED).all()
return frozenset([x.book_id for x in readBooks])
else:
try:
readBooks = calibre_db.session.query(db.cc_classes[config.config_read_column])\
.filter(db.cc_classes[config.config_read_column].value == True).all()
return frozenset([x.book for x in readBooks])
except (KeyError, AttributeError):
log.error("Custom Column No.%d is not existing in calibre database", config.config_read_column)
return []
# Returns the template for rendering and includes the instance name
def render_title_template(*args, **kwargs):
sidebar, simple = get_sidebar_config(kwargs)
try:
return render_template(instance=config.config_calibre_web_title, sidebar=sidebar, simple=simple,
accept=constants.EXTENSIONS_UPLOAD,
*args, **kwargs)
except PermissionError:
log.error("No permission to access {} file.".format(args[0]))
abort(403)
sidebar = get_sidebar_config(kwargs)
return render_template(instance=config.config_calibre_web_title, sidebar=sidebar,
accept=constants.EXTENSIONS_UPLOAD, read_book_ids=get_readbooks_ids(),
*args, **kwargs)
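
Each sidebar entry above is a plain dict: a glyph, a link target, a visibility bit from constants, a public flag, and a config_show switch. A hedged sketch of how such a list can be filtered against a user's visibility bitmask; the bit values and the anonymous check below are illustrative stand-ins, not Calibre-Web's template logic:

# Illustrative filtering of sidebar-style entries against a visibility bitmask.
SIDEBAR_HOT, SIDEBAR_RANDOM, SIDEBAR_SERIES = 1, 2, 4  # made-up bit values

sidebar = [
    {"id": "hot", "text": "Hot Books", "visibility": SIDEBAR_HOT, "public": True},
    {"id": "rand", "text": "Discover", "visibility": SIDEBAR_RANDOM, "public": True},
    {"id": "serie", "text": "Series", "visibility": SIDEBAR_SERIES, "public": False},
]

def visible_entries(entries, user_view_mask, is_anonymous):
    for entry in entries:
        if not (entry["visibility"] & user_view_mask):
            continue  # feature switched off for this user
        if is_anonymous and not entry["public"]:
            continue  # hidden from guests
        yield entry

print([e["id"] for e in visible_entries(sidebar, SIDEBAR_HOT | SIDEBAR_SERIES, is_anonymous=True)])
# -> ['hot']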

@ -1,110 +0,0 @@
# -*- coding: utf-8 -*-
# This file is part of the Calibre-Web (https://github.com/janeczku/calibre-web)
# Copyright (C) 2020 mmonkey
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
import datetime
from . import config, constants
from .services.background_scheduler import BackgroundScheduler, CronTrigger, use_APScheduler
from .tasks.database import TaskReconnectDatabase
from .tasks.tempFolder import TaskDeleteTempFolder
from .tasks.thumbnail import TaskGenerateCoverThumbnails, TaskGenerateSeriesThumbnails, TaskClearCoverThumbnailCache
from .services.worker import WorkerThread
from .tasks.metadata_backup import TaskBackupMetadata
def get_scheduled_tasks(reconnect=True):
tasks = list()
# Reconnect Calibre database (metadata.db) based on config.schedule_reconnect
if reconnect:
tasks.append([lambda: TaskReconnectDatabase(), 'reconnect', False])
# Delete temp folder
tasks.append([lambda: TaskDeleteTempFolder(), 'delete temp', True])
# Generate metadata.opf file for each changed book
if config.schedule_metadata_backup:
tasks.append([lambda: TaskBackupMetadata("en"), 'backup metadata', False])
# Generate all missing book cover thumbnails
if config.schedule_generate_book_covers:
tasks.append([lambda: TaskClearCoverThumbnailCache(0), 'delete superfluous book covers', True])
tasks.append([lambda: TaskGenerateCoverThumbnails(), 'generate book covers', False])
# Generate all missing series thumbnails
if config.schedule_generate_series_covers:
tasks.append([lambda: TaskGenerateSeriesThumbnails(), 'generate book covers', False])
return tasks
def end_scheduled_tasks():
worker = WorkerThread.get_instance()
for __, __, __, task, __ in worker.tasks:
if task.scheduled and task.is_cancellable:
worker.end_task(task.id)
def register_scheduled_tasks(reconnect=True):
scheduler = BackgroundScheduler()
if scheduler:
# Remove all existing jobs
scheduler.remove_all_jobs()
start = config.schedule_start_time
duration = config.schedule_duration
# Register scheduled tasks
timezone_info = datetime.datetime.now(datetime.timezone.utc).astimezone().tzinfo
scheduler.schedule_tasks(tasks=get_scheduled_tasks(reconnect), trigger=CronTrigger(hour=start,
timezone=timezone_info))
end_time = calclulate_end_time(start, duration)
scheduler.schedule(func=end_scheduled_tasks, trigger=CronTrigger(hour=end_time.hour, minute=end_time.minute,
timezone=timezone_info),
name="end scheduled task")
# Kick-off tasks, if they should currently be running
if should_task_be_running(start, duration):
scheduler.schedule_tasks_immediately(tasks=get_scheduled_tasks(reconnect))
def register_startup_tasks():
scheduler = BackgroundScheduler()
if scheduler:
start = config.schedule_start_time
duration = config.schedule_duration
# Run scheduled tasks immediately for development and testing
# Ignore tasks that should currently be running, as these will be added when registering scheduled tasks
if constants.APP_MODE in ['development', 'test'] and not should_task_be_running(start, duration):
scheduler.schedule_tasks_immediately(tasks=get_scheduled_tasks(False))
else:
scheduler.schedule_tasks_immediately(tasks=[[lambda: TaskDeleteTempFolder(), 'delete temp', True]])
def should_task_be_running(start, duration):
now = datetime.datetime.now()
start_time = datetime.datetime.now().replace(hour=start, minute=0, second=0, microsecond=0)
end_time = start_time + datetime.timedelta(hours=duration // 60, minutes=duration % 60)
return start_time < now < end_time
def calclulate_end_time(start, duration):
start_time = datetime.datetime.now().replace(hour=start, minute=0)
return start_time + datetime.timedelta(hours=duration // 60, minutes=duration % 60)
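
should_task_be_running and calclulate_end_time treat start as an hour of the day and duration as a number of minutes, converting it with duration // 60 hours plus duration % 60 minutes. A standalone worked example of that window arithmetic:

# Worked example of the scheduling-window arithmetic used above.
import datetime

def task_window(start, duration, now=None):
    now = now or datetime.datetime.now()
    start_time = now.replace(hour=start, minute=0, second=0, microsecond=0)
    end_time = start_time + datetime.timedelta(hours=duration // 60, minutes=duration % 60)
    return start_time, end_time

# A 90-minute window starting at 04:00 runs until 05:30.
now = datetime.datetime(2024, 1, 1, 4, 15)
start_time, end_time = task_window(start=4, duration=90, now=now)
print(start_time.time(), end_time.time())  # 04:00:00 05:30:00
print(start_time < now < end_time)         # True: the task should be running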

@ -1,403 +0,0 @@
# This file is part of the Calibre-Web (https://github.com/janeczku/calibre-web)
# Copyright (C) 2022 OzzieIsaacs
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
import json
from datetime import datetime
from flask import Blueprint, request, redirect, url_for, flash
from flask import session as flask_session
from flask_login import current_user
from flask_babel import format_date
from flask_babel import gettext as _
from sqlalchemy.sql.expression import func, not_, and_, or_, text, true
from sqlalchemy.sql.functions import coalesce
from . import logger, db, calibre_db, config, ub
from .usermanagement import login_required_if_no_ano
from .render_template import render_title_template
from .pagination import Pagination
search = Blueprint('search', __name__)
log = logger.create()
@search.route("/search", methods=["GET"])
@login_required_if_no_ano
def simple_search():
term = request.args.get("query")
if term:
return redirect(url_for('web.books_list', data="search", sort_param='stored', query=term.strip()))
else:
return render_title_template('search.html',
searchterm="",
result_count=0,
title=_("Search"),
page="search")
@search.route("/advsearch", methods=['POST'])
@login_required_if_no_ano
def advanced_search():
values = dict(request.form)
params = ['include_tag', 'exclude_tag', 'include_serie', 'exclude_serie', 'include_shelf', 'exclude_shelf',
'include_language', 'exclude_language', 'include_extension', 'exclude_extension']
for param in params:
values[param] = list(request.form.getlist(param))
flask_session['query'] = json.dumps(values)
return redirect(url_for('web.books_list', data="advsearch", sort_param='stored', query=""))
@search.route("/advsearch", methods=['GET'])
@login_required_if_no_ano
def advanced_search_form():
# Build custom columns names
cc = calibre_db.get_cc_columns(config, filter_config_custom_read=True)
return render_prepare_search_form(cc)
def adv_search_custom_columns(cc, term, q):
for c in cc:
if c.datatype == "datetime":
custom_start = term.get('custom_column_' + str(c.id) + '_start')
custom_end = term.get('custom_column_' + str(c.id) + '_end')
if custom_start:
q = q.filter(getattr(db.Books, 'custom_column_' + str(c.id)).any(
func.datetime(db.cc_classes[c.id].value) >= func.datetime(custom_start)))
if custom_end:
q = q.filter(getattr(db.Books, 'custom_column_' + str(c.id)).any(
func.datetime(db.cc_classes[c.id].value) <= func.datetime(custom_end)))
else:
custom_query = term.get('custom_column_' + str(c.id))
if custom_query != '' and custom_query is not None:
if c.datatype == 'bool':
q = q.filter(getattr(db.Books, 'custom_column_' + str(c.id)).any(
db.cc_classes[c.id].value == (custom_query == "True")))
elif c.datatype == 'int' or c.datatype == 'float':
q = q.filter(getattr(db.Books, 'custom_column_' + str(c.id)).any(
db.cc_classes[c.id].value == custom_query))
elif c.datatype == 'rating':
q = q.filter(getattr(db.Books, 'custom_column_' + str(c.id)).any(
db.cc_classes[c.id].value == int(float(custom_query) * 2)))
else:
q = q.filter(getattr(db.Books, 'custom_column_' + str(c.id)).any(
func.lower(db.cc_classes[c.id].value).ilike("%" + custom_query + "%")))
return q
def adv_search_language(q, include_languages_inputs, exclude_languages_inputs):
if current_user.filter_language() != "all":
q = q.filter(db.Books.languages.any(db.Languages.lang_code == current_user.filter_language()))
else:
for language in include_languages_inputs:
q = q.filter(db.Books.languages.any(db.Languages.id == language))
for language in exclude_languages_inputs:
q = q.filter(not_(db.Books.languages.any(db.Languages.id == language)))
return q
def adv_search_ratings(q, rating_high, rating_low):
if rating_high:
rating_high = int(rating_high) * 2
q = q.filter(db.Books.ratings.any(db.Ratings.rating <= rating_high))
if rating_low:
rating_low = int(rating_low) * 2
q = q.filter(db.Books.ratings.any(db.Ratings.rating >= rating_low))
return q
def adv_search_read_status(read_status):
if not config.config_read_column:
if read_status == "True":
db_filter = and_(ub.ReadBook.user_id == int(current_user.id),
ub.ReadBook.read_status == ub.ReadBook.STATUS_FINISHED)
else:
db_filter = coalesce(ub.ReadBook.read_status, 0) != ub.ReadBook.STATUS_FINISHED
else:
try:
if read_status == "True":
db_filter = db.cc_classes[config.config_read_column].value == True
else:
db_filter = coalesce(db.cc_classes[config.config_read_column].value, False) != True
except (KeyError, AttributeError, IndexError):
log.error("Custom Column No.{} does not exist in calibre database".format(config.config_read_column))
flash(_("Custom Column No.%(column)d does not exist in calibre database",
column=config.config_read_column),
category="error")
return true()
return db_filter
def adv_search_extension(q, include_extension_inputs, exclude_extension_inputs):
for extension in include_extension_inputs:
q = q.filter(db.Books.data.any(db.Data.format == extension))
for extension in exclude_extension_inputs:
q = q.filter(not_(db.Books.data.any(db.Data.format == extension)))
return q
def adv_search_tag(q, include_tag_inputs, exclude_tag_inputs):
for tag in include_tag_inputs:
q = q.filter(db.Books.tags.any(db.Tags.id == tag))
for tag in exclude_tag_inputs:
q = q.filter(not_(db.Books.tags.any(db.Tags.id == tag)))
return q
def adv_search_serie(q, include_series_inputs, exclude_series_inputs):
for serie in include_series_inputs:
q = q.filter(db.Books.series.any(db.Series.id == serie))
for serie in exclude_series_inputs:
q = q.filter(not_(db.Books.series.any(db.Series.id == serie)))
return q
def adv_search_shelf(q, include_shelf_inputs, exclude_shelf_inputs):
q = q.outerjoin(ub.BookShelf, db.Books.id == ub.BookShelf.book_id)\
.filter(or_(ub.BookShelf.shelf == None, ub.BookShelf.shelf.notin_(exclude_shelf_inputs)))
if len(include_shelf_inputs) > 0:
q = q.filter(ub.BookShelf.shelf.in_(include_shelf_inputs))
return q
def extend_search_term(searchterm,
author_name,
book_title,
publisher,
pub_start,
pub_end,
tags,
rating_high,
rating_low,
read_status,
):
searchterm.extend((author_name.replace('|', ','), book_title, publisher))
if pub_start:
try:
searchterm.extend([_("Published after ") +
format_date(datetime.strptime(pub_start, "%Y-%m-%d"),
format='medium')])
except ValueError:
pub_start = ""
if pub_end:
try:
searchterm.extend([_("Published before ") +
format_date(datetime.strptime(pub_end, "%Y-%m-%d"),
format='medium')])
except ValueError:
pub_end = ""
elements = {'tag': db.Tags, 'serie':db.Series, 'shelf':ub.Shelf}
for key, db_element in elements.items():
tag_names = calibre_db.session.query(db_element).filter(db_element.id.in_(tags['include_' + key])).all()
searchterm.extend(tag.name for tag in tag_names)
tag_names = calibre_db.session.query(db_element).filter(db_element.id.in_(tags['exclude_' + key])).all()
searchterm.extend(tag.name for tag in tag_names)
language_names = calibre_db.session.query(db.Languages). \
filter(db.Languages.id.in_(tags['include_language'])).all()
if language_names:
language_names = calibre_db.speaking_language(language_names)
searchterm.extend(language.name for language in language_names)
language_names = calibre_db.session.query(db.Languages). \
filter(db.Languages.id.in_(tags['exclude_language'])).all()
if language_names:
language_names = calibre_db.speaking_language(language_names)
searchterm.extend(language.name for language in language_names)
if rating_high:
searchterm.extend([_("Rating <= %(rating)s", rating=rating_high)])
if rating_low:
searchterm.extend([_("Rating >= %(rating)s", rating=rating_low)])
if read_status != "Any":
searchterm.extend([_("Read Status = '%(status)s'", status=read_status)])
searchterm.extend(ext for ext in tags['include_extension'])
searchterm.extend(ext for ext in tags['exclude_extension'])
# handle custom columns
searchterm = " + ".join(filter(None, searchterm))
return searchterm, pub_start, pub_end
def render_adv_search_results(term, offset=None, order=None, limit=None):
sort = order[0] if order else [db.Books.sort]
pagination = None
cc = calibre_db.get_cc_columns(config, filter_config_custom_read=True)
calibre_db.session.connection().connection.connection.create_function("lower", 1, db.lcase)
query = calibre_db.generate_linked_query(config.config_read_column, db.Books)
q = query.outerjoin(db.books_series_link, db.Books.id == db.books_series_link.c.book)\
.outerjoin(db.Series)\
.filter(calibre_db.common_filters(True))
# parse multi selects to a complete dict
tags = dict()
elements = ['tag', 'serie', 'shelf', 'language', 'extension']
for element in elements:
tags['include_' + element] = term.get('include_' + element)
tags['exclude_' + element] = term.get('exclude_' + element)
author_name = term.get("author_name")
book_title = term.get("book_title")
publisher = term.get("publisher")
pub_start = term.get("publishstart")
pub_end = term.get("publishend")
rating_low = term.get("ratinghigh")
rating_high = term.get("ratinglow")
description = term.get("comment")
read_status = term.get("read_status")
if author_name:
author_name = author_name.strip().lower().replace(',', '|')
if book_title:
book_title = book_title.strip().lower()
if publisher:
publisher = publisher.strip().lower()
search_term = []
cc_present = False
for c in cc:
if c.datatype == "datetime":
column_start = term.get('custom_column_' + str(c.id) + '_start')
column_end = term.get('custom_column_' + str(c.id) + '_end')
if column_start:
search_term.extend(["{} >= {}".format(c.name,
format_date(datetime.strptime(column_start, "%Y-%m-%d").date(),
format='medium')
)])
cc_present = True
if column_end:
search_term.extend(["{} <= {}".format(c.name,
format_date(datetime.strptime(column_end, "%Y-%m-%d").date(),
format='medium')
)])
cc_present = True
elif term.get('custom_column_' + str(c.id)):
search_term.extend([("{}: {}".format(c.name, term.get('custom_column_' + str(c.id))))])
cc_present = True
if any(tags.values()) or author_name or book_title or publisher or pub_start or pub_end or rating_low \
or rating_high or description or cc_present or read_status != "Any":
search_term, pub_start, pub_end = extend_search_term(search_term,
author_name,
book_title,
publisher,
pub_start,
pub_end,
tags,
rating_high,
rating_low,
read_status)
if author_name:
q = q.filter(db.Books.authors.any(func.lower(db.Authors.name).ilike("%" + author_name + "%")))
if book_title:
q = q.filter(func.lower(db.Books.title).ilike("%" + book_title + "%"))
if pub_start:
q = q.filter(func.datetime(db.Books.pubdate) > func.datetime(pub_start))
if pub_end:
q = q.filter(func.datetime(db.Books.pubdate) < func.datetime(pub_end))
if read_status != "Any":
q = q.filter(adv_search_read_status(read_status))
if publisher:
q = q.filter(db.Books.publishers.any(func.lower(db.Publishers.name).ilike("%" + publisher + "%")))
q = adv_search_tag(q, tags['include_tag'], tags['exclude_tag'])
q = adv_search_serie(q, tags['include_serie'], tags['exclude_serie'])
q = adv_search_shelf(q, tags['include_shelf'], tags['exclude_shelf'])
q = adv_search_extension(q, tags['include_extension'], tags['exclude_extension'])
q = adv_search_language(q, tags['include_language'], tags['exclude_language'])
q = adv_search_ratings(q, rating_high, rating_low)
if description:
q = q.filter(db.Books.comments.any(func.lower(db.Comments.text).ilike("%" + description + "%")))
# search custom columns
try:
q = adv_search_custom_columns(cc, term, q)
except AttributeError as ex:
log.debug_or_exception(ex)
flash(_("Error on search for custom columns, please restart Calibre-Web"), category="error")
q = q.order_by(*sort).all()
flask_session['query'] = json.dumps(term)
ub.store_combo_ids(q)
result_count = len(q)
if offset is not None and limit is not None:
offset = int(offset)
limit_all = offset + int(limit)
pagination = Pagination((offset / (int(limit)) + 1), limit, result_count)
else:
offset = 0
limit_all = result_count
entries = calibre_db.order_authors(q[offset:limit_all], list_return=True, combined=True)
return render_title_template('search.html',
adv_searchterm=search_term,
pagination=pagination,
entries=entries,
result_count=result_count,
title=_("Advanced Search"), page="advsearch",
order=order[1])
def render_prepare_search_form(cc):
# prepare data for search-form
tags = calibre_db.session.query(db.Tags)\
.join(db.books_tags_link)\
.join(db.Books)\
.filter(calibre_db.common_filters()) \
.group_by(text('books_tags_link.tag'))\
.order_by(db.Tags.name).all()
series = calibre_db.session.query(db.Series)\
.join(db.books_series_link)\
.join(db.Books)\
.filter(calibre_db.common_filters()) \
.group_by(text('books_series_link.series'))\
.order_by(db.Series.name)\
.filter(calibre_db.common_filters()).all()
shelves = ub.session.query(ub.Shelf)\
.filter(or_(ub.Shelf.is_public == 1, ub.Shelf.user_id == int(current_user.id)))\
.order_by(ub.Shelf.name).all()
extensions = calibre_db.session.query(db.Data)\
.join(db.Books)\
.filter(calibre_db.common_filters()) \
.group_by(db.Data.format)\
.order_by(db.Data.format).all()
if current_user.filter_language() == "all":
languages = calibre_db.speaking_language()
else:
languages = None
return render_title_template('search_form.html', tags=tags, languages=languages, extensions=extensions,
series=series,shelves=shelves, title=_("Advanced Search"), cc=cc, page="advsearch")
def render_search_results(term, offset=None, order=None, limit=None):
if term:
join = db.books_series_link, db.Books.id == db.books_series_link.c.book, db.Series
entries, result_count, pagination = calibre_db.get_search_results(term,
config,
offset,
order,
limit,
*join)
else:
entries = list()
order = [None, None]
pagination = result_count = None
return render_title_template('search.html',
searchterm=term,
pagination=pagination,
query=term,
adv_searchterm=term,
entries=entries,
result_count=result_count,
title=_("Search"),
page="search",
order=order[1])
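
adv_search_ratings doubles the submitted value because the search form works in 1-5 stars while the Calibre database stores ratings on a 0-10 scale. A small value-level sketch of that conversion and the resulting bounds check, with plain comparisons instead of the SQLAlchemy filters above:

# The form submits 1-5 stars; Calibre stores 0-10, hence the doubling above.
def rating_bounds(rating_low, rating_high):
    low = int(rating_low) * 2 if rating_low else None
    high = int(rating_high) * 2 if rating_high else None
    return low, high

def matches(db_rating, low, high):
    if low is not None and db_rating < low:
        return False
    if high is not None and db_rating > high:
        return False
    return True

low, high = rating_bounds("2", "4")   # star values from the form
print(low, high)                      # 4 8
print(matches(6, low, high))          # True  (a 3-star book on Calibre's scale)
print(matches(10, low, high))         # False (5 stars, above the upper bound)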

@ -22,16 +22,17 @@ import inspect
import json
import os
import sys
# from time import time
from dataclasses import asdict
from flask import Blueprint, Response, request, url_for
from flask_login import current_user
from flask_login import login_required
from flask_babel import get_locale
from sqlalchemy.exc import InvalidRequestError, OperationalError
from sqlalchemy.orm.attributes import flag_modified
from cps.services.Metadata import Metadata
from . import constants, logger, ub, web_server
from . import constants, get_locale, logger, ub
# current_milli_time = lambda: int(round(time() * 1000))
@ -39,14 +40,6 @@ meta = Blueprint("metadata", __name__)
log = logger.create()
try:
from dataclasses import asdict
except ImportError:
log.info('*** "dataclasses" is needed for calibre-web to run. Please install it using pip: "pip install dataclasses" ***')
print('*** "dataclasses" is needed for calibre-web to run. Please install it using pip: "pip install dataclasses" ***')
web_server.stop(True)
sys.exit(6)
new_list = list()
meta_dir = os.path.join(constants.BASE_DIR, "cps", "metadata_provider")
modules = os.listdir(os.path.join(constants.BASE_DIR, "cps", "metadata_provider"))
@ -56,10 +49,9 @@ for f in modules:
try:
importlib.import_module("cps.metadata_provider." + a)
new_list.append(a)
except (IndentationError, SyntaxError) as e:
log.error("Syntax error for metadata source: {} - {}".format(a, e))
except ImportError as e:
log.debug("Import error for metadata source: {} - {}".format(a, e))
log.error("Import error for metadata source: {} - {}".format(a, e))
pass
def list_classes(provider_list):
@ -138,6 +130,6 @@ def metadata_search():
if active.get(c.__id__, True)
}
for future in concurrent.futures.as_completed(meta):
data.extend([asdict(x) for x in future.result() if x])
data.extend([asdict(x) for x in future.result()])
# log.info({'Time elapsed {}'.format(current_milli_time()-start)})
return Response(json.dumps(data), mimetype="application/json")
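
The metadata blueprint discovers its providers by listing cps/metadata_provider and importing each module with importlib, logging and skipping anything that fails to import or parse. A minimal sketch of that plugin-discovery pattern; the package and directory names below are placeholders:

# Sketch of the plugin discovery used above: import every .py file in a folder
# and keep the names that import cleanly.
import importlib
import logging
import os

log = logging.getLogger(__name__)

def discover_providers(package, directory):
    found = []
    for filename in os.listdir(directory):
        name, ext = os.path.splitext(filename)
        if ext != ".py" or name.startswith("_"):
            continue
        try:
            importlib.import_module(package + "." + name)
            found.append(name)
        except (ImportError, SyntaxError, IndentationError) as err:
            log.error("Skipping metadata source %s: %s", name, err)
    return found

# e.g. discover_providers("cps.metadata_provider", "./cps/metadata_provider")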

@ -21,23 +21,20 @@ import os
import errno
import signal
import socket
import asyncio
import subprocess # nosec
try:
from gevent.pywsgi import WSGIServer
from .gevent_wsgi import MyWSGIHandler
from gevent.pool import Pool
from gevent.socket import socket as GeventSocket
from gevent import __version__ as _version
from greenlet import GreenletExit
import ssl
VERSION = 'Gevent ' + _version
_GEVENT = True
except ImportError:
from .tornado_wsgi import MyWSGIContainer
from tornado.wsgi import WSGIContainer
from tornado.httpserver import HTTPServer
from tornado.ioloop import IOLoop
from tornado import netutil
from tornado import version as _version
VERSION = 'Tornado ' + _version
_GEVENT = False
@ -97,12 +94,7 @@ class WebServer(object):
log.warning('Cert path: %s', certfile_path)
log.warning('Key path: %s', keyfile_path)
def _make_gevent_socket_activated(self):
# Reuse an already open socket on fd=SD_LISTEN_FDS_START
SD_LISTEN_FDS_START = 3
return GeventSocket(fileno=SD_LISTEN_FDS_START)
def _prepare_unix_socket(self, socket_file):
def _make_gevent_unix_socket(self, socket_file):
# the socket file must not exist prior to bind()
if os.path.exists(socket_file):
# avoid nuking regular files and symbolic links (could be a mistype or security issue)
@ -110,41 +102,35 @@ class WebServer(object):
raise OSError(errno.EEXIST, os.strerror(errno.EEXIST), socket_file)
os.remove(socket_file)
unix_sock = WSGIServer.get_listener(socket_file, family=socket.AF_UNIX)
self.unix_socket_file = socket_file
def _make_gevent_listener(self):
# ensure current user and group have r/w permissions, no permissions for other users
# this way the socket can be shared in a semi-secure manner
# between the user running calibre-web and the user running the fronting webserver
os.chmod(socket_file, 0o660)
return unix_sock
def _make_gevent_socket(self):
if os.name != 'nt':
socket_activated = os.environ.get("LISTEN_FDS")
if socket_activated:
sock = self._make_gevent_socket_activated()
sock_info = sock.getsockname()
return sock, "systemd-socket:" + _readable_listen_address(sock_info[0], sock_info[1])
unix_socket_file = os.environ.get("CALIBRE_UNIX_SOCKET")
if unix_socket_file:
self._prepare_unix_socket(unix_socket_file)
unix_sock = WSGIServer.get_listener(unix_socket_file, family=socket.AF_UNIX)
# ensure current user and group have r/w permissions, no permissions for other users
# this way the socket can be shared in a semi-secure manner
# between the user running calibre-web and the user running the fronting webserver
os.chmod(unix_socket_file, 0o660)
return unix_sock, "unix:" + unix_socket_file
return self._make_gevent_unix_socket(unix_socket_file), "unix:" + unix_socket_file
if self.listen_address:
return ((self.listen_address, self.listen_port),
_readable_listen_address(self.listen_address, self.listen_port))
return (self.listen_address, self.listen_port), None
if os.name == 'nt':
self.listen_address = '0.0.0.0'
return ((self.listen_address, self.listen_port),
_readable_listen_address(self.listen_address, self.listen_port))
return (self.listen_address, self.listen_port), None
try:
address = ('::', self.listen_port)
sock = WSGIServer.get_listener(address, family=socket.AF_INET6)
except socket.error as ex:
log.error('%s', ex)
log.warning('Unable to listen on {}, trying on IPv4 only...'.format(address))
log.warning('Unable to listen on "", trying on IPv4 only...')
address = ('', self.listen_port)
sock = WSGIServer.get_listener(address, family=socket.AF_INET)
@ -165,7 +151,7 @@ class WebServer(object):
# The value of __package__ indicates how Python was called. It may
# not exist if a setuptools script is installed as an egg. It may be
# set incorrectly for entry points created with pip on Windows.
if getattr(__main__, "__package__", "") in ["", None] or (
if getattr(__main__, "__package__", None) is None or (
os.name == "nt"
and __main__.__package__ == ""
and not os.path.exists(py_script)
@ -206,19 +192,17 @@ class WebServer(object):
rv.extend(("-m", py_module.lstrip(".")))
rv.extend(args)
if os.name == 'nt':
rv = ['"{}"'.format(a) for a in rv]
return rv
def _start_gevent(self):
ssl_args = self.ssl_args or {}
try:
sock, output = self._make_gevent_listener()
sock, output = self._make_gevent_socket()
if output is None:
output = _readable_listen_address(self.listen_address, self.listen_port)
log.info('Starting Gevent server on %s', output)
self.wsgiserver = WSGIServer(sock, self.app, log=self.access_logger, handler_class=MyWSGIHandler,
error_log=log,
spawn=Pool(), **ssl_args)
self.wsgiserver = WSGIServer(sock, self.app, log=self.access_logger, spawn=Pool(), **ssl_args)
if ssl_args:
wrap_socket = self.wsgiserver.wrap_socket
def my_wrap_socket(*args, **kwargs):
@ -239,42 +223,17 @@ class WebServer(object):
if os.name == 'nt' and sys.version_info > (3, 7):
import asyncio
asyncio.set_event_loop_policy(asyncio.WindowsSelectorEventLoopPolicy())
try:
# Max Buffersize set to 200MB
http_server = HTTPServer(MyWSGIContainer(self.app),
max_buffer_size=209700000,
ssl_options=self.ssl_args)
unix_socket_file = os.environ.get("CALIBRE_UNIX_SOCKET")
if os.environ.get("LISTEN_FDS") and os.name != 'nt':
SD_LISTEN_FDS_START = 3
sock = socket.socket(fileno=SD_LISTEN_FDS_START)
http_server.add_socket(sock)
sock.setblocking(0)
socket_name = sock.getsockname()
output = "systemd-socket:" + _readable_listen_address(socket_name[0], socket_name[1])
elif unix_socket_file and os.name != 'nt':
self._prepare_unix_socket(unix_socket_file)
output = "unix:" + unix_socket_file
unix_socket = netutil.bind_unix_socket(self.unix_socket_file)
http_server.add_socket(unix_socket)
# ensure current user and group have r/w permissions, no permissions for other users
# this way the socket can be shared in a semi-secure manner
# between the user running calibre-web and the user running the fronting webserver
os.chmod(self.unix_socket_file, 0o660)
else:
output = _readable_listen_address(self.listen_address, self.listen_port)
http_server.listen(self.listen_port, self.listen_address)
log.info('Starting Tornado server on %s', output)
self.wsgiserver = IOLoop.current()
self.wsgiserver.start()
# wait for stop signal
self.wsgiserver.close(True)
finally:
if self.unix_socket_file:
os.remove(self.unix_socket_file)
self.unix_socket_file = None
log.info('Starting Tornado server on %s', _readable_listen_address(self.listen_address, self.listen_port))
# Max Buffersize set to 200MB
http_server = HTTPServer(WSGIContainer(self.app),
max_buffer_size=209700000,
ssl_options=self.ssl_args)
http_server.listen(self.listen_port, self.listen_address)
self.wsgiserver = IOLoop.current()
self.wsgiserver.start()
# wait for stop signal
self.wsgiserver.close(True)
def start(self):
try:
@ -300,16 +259,9 @@ class WebServer(object):
log.info("Performing restart of Calibre-Web")
args = self._get_args_for_reloading()
os.execv(args[0].lstrip('"').rstrip('"'), args)
subprocess.call(args, close_fds=True) # nosec
return True
@staticmethod
def shutdown_scheduler():
from .services.background_scheduler import BackgroundScheduler
scheduler = BackgroundScheduler()
if scheduler:
scheduler.scheduler.shutdown()
def _killServer(self, __, ___):
self.stop()
@ -318,14 +270,9 @@ class WebServer(object):
updater_thread.stop()
log.info("webserver stop (restart=%s)", restart)
self.shutdown_scheduler()
self.restart = restart
if self.wsgiserver:
if _GEVENT:
self.wsgiserver.close()
else:
if restart:
self.wsgiserver.call_later(1.0, self.wsgiserver.stop)
else:
self.wsgiserver.asyncio_loop.call_soon_threadsafe(self.wsgiserver.stop)
self.wsgiserver.add_callback_from_signal(self.wsgiserver.stop)

@ -19,9 +19,11 @@
import sys
from base64 import b64decode, b64encode
from jsonschema import validate, exceptions
from jsonschema import validate, exceptions, __version__
from datetime import datetime
from urllib.parse import unquote
from flask import json
from .. import logger
@ -30,10 +32,10 @@ log = logger.create()
def b64encode_json(json_data):
return b64encode(json.dumps(json_data).encode()).decode("utf-8")
return b64encode(json.dumps(json_data).encode())
# Python3 has a timestamp() method we could be calling, however it's not available in python2.
def to_epoch_timestamp(datetime_object):
return (datetime_object - datetime(1970, 1, 1)).total_seconds()
@ -47,7 +49,7 @@ def get_datetime_from_json(json_object, field_name):
class SyncToken:
""" The SyncToken is used to persist state across requests.
""" The SyncToken is used to persist state accross requests.
When serialized over the response headers, the Kobo device will propagate the token onto following
requests to the service. As an example use-case, the SyncToken is used to detect books that have been added
to the library since the last time the device synced to the server.
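
to_epoch_timestamp subtracts the Unix epoch to get seconds since 1970-01-01, and b64encode_json serialises a dict to JSON and then base64 so it can travel inside an HTTP header. A small round-trip sketch of both helpers; the key name is illustrative:

# Round-trip sketch of the two SyncToken helpers referenced above.
import json
from base64 import b64decode, b64encode
from datetime import datetime

def to_epoch_timestamp(dt):
    return (dt - datetime(1970, 1, 1)).total_seconds()

def b64encode_json(data):
    return b64encode(json.dumps(data).encode()).decode("utf-8")

def b64decode_json(raw):
    return json.loads(b64decode(raw))

token = {"books_last_modified": to_epoch_timestamp(datetime(2024, 1, 1))}
header_value = b64encode_json(token)  # plain ASCII, safe for a response header
print(b64decode_json(header_value))   # {'books_last_modified': 1704067200.0}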

@ -18,10 +18,11 @@
from .. import logger
log = logger.create()
try:
from . import goodreads_support
try: from . import goodreads_support
except ImportError as err:
log.debug("Cannot import goodreads, showing authors-metadata will not work: %s", err)
goodreads_support = None

@ -1,84 +0,0 @@
# -*- coding: utf-8 -*-
# This file is part of the Calibre-Web (https://github.com/janeczku/calibre-web)
# Copyright (C) 2020 mmonkey
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
import atexit
from .. import logger
from .worker import WorkerThread
try:
from apscheduler.schedulers.background import BackgroundScheduler as BScheduler
from apscheduler.triggers.cron import CronTrigger
from apscheduler.triggers.date import DateTrigger
use_APScheduler = True
except (ImportError, RuntimeError) as e:
use_APScheduler = False
log = logger.create()
log.info('APScheduler not found. Unable to schedule tasks.')
class BackgroundScheduler:
_instance = None
def __new__(cls):
if not use_APScheduler:
return False
if cls._instance is None:
cls._instance = super(BackgroundScheduler, cls).__new__(cls)
cls.log = logger.create()
cls.scheduler = BScheduler()
cls.scheduler.start()
return cls._instance
def schedule(self, func, trigger, name=None):
if use_APScheduler:
return self.scheduler.add_job(func=func, trigger=trigger, name=name)
# Expects a lambda expression for the task
def schedule_task(self, task, user=None, name=None, hidden=False, trigger=None):
if use_APScheduler:
def scheduled_task():
worker_task = task()
worker_task.scheduled = True
WorkerThread.add(user, worker_task, hidden=hidden)
return self.schedule(func=scheduled_task, trigger=trigger, name=name)
# Expects a list of lambda expressions for the tasks
def schedule_tasks(self, tasks, user=None, trigger=None):
if use_APScheduler:
for task in tasks:
self.schedule_task(task[0], user=user, trigger=trigger, name=task[1], hidden=task[2])
# Expects a lambda expression for the task
def schedule_task_immediately(self, task, user=None, name=None, hidden=False):
if use_APScheduler:
def immediate_task():
WorkerThread.add(user, task(), hidden)
return self.schedule(func=immediate_task, trigger=DateTrigger(), name=name)
# Expects a list of lambda expressions for the tasks
def schedule_tasks_immediately(self, tasks, user=None):
if use_APScheduler:
for task in tasks:
self.schedule_task_immediately(task[0], user, name="immediately " + task[1], hidden=task[2])
# Remove all jobs
def remove_all_jobs(self):
self.scheduler.remove_all_jobs()
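
The wrapper above is a process-wide singleton: __new__ hands back the one shared instance (or False when APScheduler is missing), and the schedule_* helpers wrap lambda-built tasks so they land on the WorkerThread when their trigger fires. A hedged sketch of the same singleton-around-APScheduler idea reduced to its core:

# Reduced sketch of the singleton-around-APScheduler pattern used above.
try:
    from apscheduler.schedulers.background import BackgroundScheduler as BScheduler
    from apscheduler.triggers.cron import CronTrigger
    use_apscheduler = True
except ImportError:
    use_apscheduler = False

class Scheduler:
    _instance = None

    def __new__(cls):
        if not use_apscheduler:
            return False  # callers test "if scheduler:" and simply skip scheduling
        if cls._instance is None:
            cls._instance = super().__new__(cls)
            cls._instance.scheduler = BScheduler()
            cls._instance.scheduler.start()
        return cls._instance

    def schedule(self, func, hour, name=None):
        return self.scheduler.add_job(func=func, trigger=CronTrigger(hour=hour), name=name)

scheduler = Scheduler()
if scheduler:
    scheduler.schedule(lambda: print("nightly task"), hour=4, name="example job")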

@ -25,7 +25,7 @@ from google.oauth2.credentials import Credentials
from datetime import datetime
import base64
from flask_babel import gettext as _
from ..constants import CONFIG_DIR
from ..constants import BASE_DIR
from .. import logger
@ -53,11 +53,11 @@ def setup_gmail(token):
if creds and creds.expired and creds.refresh_token:
creds.refresh(Request())
else:
cred_file = os.path.join(CONFIG_DIR, 'gmail.json')
cred_file = os.path.join(BASE_DIR, 'gmail.json')
if not os.path.exists(cred_file):
raise Exception(_("Found no valid gmail.json file with OAuth information"))
flow = InstalledAppFlow.from_client_secrets_file(
os.path.join(CONFIG_DIR, 'gmail.json'), SCOPES)
os.path.join(BASE_DIR, 'gmail.json'), SCOPES)
creds = flow.run_local_server(port=0)
user_info = get_user_info(creds)
return {

@ -18,49 +18,16 @@
import time
from functools import reduce
import requests
from goodreads.client import GoodreadsClient
from goodreads.request import GoodreadsRequest
import xmltodict
try:
import Levenshtein
from goodreads.client import GoodreadsClient
except ImportError:
Levenshtein = False
from .. import logger
from ..clean_html import clean_string
class my_GoodreadsClient(GoodreadsClient):
def request(self, *args, **kwargs):
"""Create a GoodreadsRequest object and make that request"""
req = my_GoodreadsRequest(self, *args, **kwargs)
return req.request()
from betterreads.client import GoodreadsClient
class GoodreadsRequestException(Exception):
def __init__(self, error_msg, url):
self.error_msg = error_msg
self.url = url
try: import Levenshtein
except ImportError: Levenshtein = False
def __str__(self):
return "{}: {}".format(self.url, self.error_msg)
class my_GoodreadsRequest(GoodreadsRequest):
def request(self):
resp = requests.get(self.host+self.path, params=self.params,
headers={"User-Agent":"Mozilla/5.0 (X11; Linux x86_64; rv:125.0) "
"Gecko/20100101 Firefox/125.0"})
if resp.status_code != 200:
raise GoodreadsRequestException(resp.reason, self.path)
if self.req_format == 'xml':
data_dict = xmltodict.parse(resp.content)
return data_dict['GoodreadsResponse']
else:
raise Exception("Invalid format")
from .. import logger
log = logger.create()
@ -71,20 +38,20 @@ _CACHE_TIMEOUT = 23 * 60 * 60 # 23 hours (in seconds)
_AUTHORS_CACHE = {}
def connect(key=None, enabled=True):
def connect(key=None, secret=None, enabled=True):
global _client
if not enabled or not key:
if not enabled or not key or not secret:
_client = None
return
if _client:
# make sure the configuration has not changed since last we used the client
if _client.client_key != key:
if _client.client_key != key or _client.client_secret != secret:
_client = None
if not _client:
_client = my_GoodreadsClient(key, None)
_client = GoodreadsClient(key, secret)
def get_author_info(author_name):
@ -109,7 +76,6 @@ def get_author_info(author_name):
if author_info:
author_info._timestamp = now
author_info.safe_about = clean_string(author_info.about)
_AUTHORS_CACHE[author_name] = author_info
return author_info
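
my_GoodreadsRequest above swaps the Goodreads client's HTTP layer for a plain requests.get with a browser User-Agent and parses the XML reply with xmltodict. A hedged sketch of that fetch-and-parse step in isolation; the URL and parameters in the final comment are illustrative:

# Sketch of the fetch-and-parse step used by my_GoodreadsRequest above.
import requests
import xmltodict

def fetch_xml(url, params=None):
    resp = requests.get(url, params=params, timeout=10,
                        headers={"User-Agent": "Mozilla/5.0 (X11; Linux x86_64)"})
    if resp.status_code != 200:
        raise RuntimeError("{}: {}".format(url, resp.reason))
    return xmltodict.parse(resp.content)["GoodreadsResponse"]

# e.g. fetch_xml("https://www.goodreads.com/author/show.xml", {"id": "12345", "key": api_key})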

@ -20,7 +20,6 @@ import base64
from flask_simpleldap import LDAP, LDAPException
from flask_simpleldap import ldap as pyLDAP
from flask import current_app
from .. import constants, logger
try:
@ -29,47 +28,8 @@ except ImportError:
pass
log = logger.create()
_ldap = LDAP()
class LDAPLogger(object):
def write(self, message):
try:
log.debug(message.strip("\n").replace("\n", ""))
except Exception:
log.debug("Logging Error")
class mySimpleLDap(LDAP):
@staticmethod
def init_app(app):
super(mySimpleLDap, mySimpleLDap).init_app(app)
app.config.setdefault('LDAP_LOGLEVEL', 0)
@property
def initialize(self):
"""Initialize a connection to the LDAP server.
:return: LDAP connection object.
"""
try:
log_level = 2 if current_app.config['LDAP_LOGLEVEL'] == logger.logging.DEBUG else 0
conn = pyLDAP.initialize('{0}://{1}:{2}'.format(
current_app.config['LDAP_SCHEMA'],
current_app.config['LDAP_HOST'],
current_app.config['LDAP_PORT']), trace_level=log_level, trace_file=LDAPLogger())
conn.set_option(pyLDAP.OPT_NETWORK_TIMEOUT,
current_app.config['LDAP_TIMEOUT'])
conn = self._set_custom_options(conn)
conn.protocol_version = pyLDAP.VERSION3
if current_app.config['LDAP_USE_TLS']:
conn.start_tls_s()
return conn
except pyLDAP.LDAPError as e:
raise LDAPException(self.error(e.args))
_ldap = mySimpleLDap()
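
LDAPLogger above is a tiny file-like adapter: python-ldap writes its wire trace to whatever object is passed as trace_file, so a class exposing write() is enough to route that trace into the application log at debug level. A standalone sketch of the adapter pattern using the standard logging module:

# Sketch of the write()-adapter pattern used by LDAPLogger above: anything that
# writes to a file-like object can be redirected into the logging module.
import logging

logging.basicConfig(level=logging.DEBUG)
log = logging.getLogger("ldap.trace")

class LogWriter:
    def write(self, message):
        message = message.strip("\n").replace("\n", " ")
        if message:
            log.debug(message)  # skip the bare newlines libraries often emit

    def flush(self):
        pass  # some libraries call flush(); make it a no-op

print("simulated trace line", file=LogWriter())  # ends up in the debug log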
def init_app(app, config):
if config.config_login_type != constants.LOGIN_LDAP:
@ -84,15 +44,15 @@ def init_app(app, config):
app.config['LDAP_SCHEMA'] = 'ldap'
if config.config_ldap_authentication > constants.LDAP_AUTH_ANONYMOUS:
if config.config_ldap_authentication > constants.LDAP_AUTH_UNAUTHENTICATE:
if config.config_ldap_serv_password_e is None:
config.config_ldap_serv_password_e = ''
app.config['LDAP_PASSWORD'] = config.config_ldap_serv_password_e
if config.config_ldap_serv_password is None:
config.config_ldap_serv_password = ''
app.config['LDAP_PASSWORD'] = base64.b64decode(config.config_ldap_serv_password)
else:
app.config['LDAP_PASSWORD'] = ""
app.config['LDAP_PASSWORD'] = base64.b64decode("")
app.config['LDAP_USERNAME'] = config.config_ldap_serv_username
else:
app.config['LDAP_USERNAME'] = ""
app.config['LDAP_PASSWORD'] = ""
app.config['LDAP_PASSWORD'] = base64.b64decode("")
if bool(config.config_ldap_cert_path):
app.config['LDAP_CUSTOM_OPTIONS'].update({
pyLDAP.OPT_X_TLS_REQUIRE_CERT: pyLDAP.OPT_X_TLS_DEMAND,
@ -110,7 +70,7 @@ def init_app(app, config):
app.config['LDAP_OPENLDAP'] = bool(config.config_ldap_openldap)
app.config['LDAP_GROUP_OBJECT_FILTER'] = config.config_ldap_group_object_filter
app.config['LDAP_GROUP_MEMBERS_FIELD'] = config.config_ldap_group_members_field
app.config['LDAP_LOGLEVEL'] = config.config_log_level
try:
_ldap.init_app(app)
except ValueError:

@ -37,13 +37,11 @@ STAT_WAITING = 0
STAT_FAIL = 1
STAT_STARTED = 2
STAT_FINISH_SUCCESS = 3
STAT_ENDED = 4
STAT_CANCELLED = 5
# Only retain this many tasks in dequeued list
TASK_CLEANUP_TRIGGER = 20
QueuedTask = namedtuple('QueuedTask', 'num, user, added, task, hidden')
QueuedTask = namedtuple('QueuedTask', 'num, user, added, task')
def _get_main_thread():
@ -53,6 +51,7 @@ def _get_main_thread():
raise Exception("main thread not found?!")
class ImprovedQueue(queue.Queue):
def to_list(self):
"""
@ -62,13 +61,12 @@ class ImprovedQueue(queue.Queue):
with self.mutex:
return list(self.queue)
# Class for all worker tasks in the background
class WorkerThread(threading.Thread):
_instance = None
@classmethod
def get_instance(cls):
def getInstance(cls):
if cls._instance is None:
cls._instance = WorkerThread()
return cls._instance
@ -84,17 +82,15 @@ class WorkerThread(threading.Thread):
self.start()
@classmethod
def add(cls, user, task, hidden=False):
ins = cls.get_instance()
def add(cls, user, task):
ins = cls.getInstance()
ins.num += 1
username = user if user is not None else 'System'
log.debug("Add Task for user: {} - {}".format(username, task))
log.debug("Add Task for user: {} - {}".format(user, task))
ins.queue.put(QueuedTask(
num=ins.num,
user=username,
user=user,
added=datetime.now(),
task=task,
hidden=hidden
))
@property
@ -115,10 +111,10 @@ class WorkerThread(threading.Thread):
if delta > TASK_CLEANUP_TRIGGER:
ret = alive
else:
# otherwise, lop off the oldest dead tasks until we hit the target trigger
ret = sorted(dead, key=lambda y: y.task.end_time)[-TASK_CLEANUP_TRIGGER:] + alive
# otherwise, lop off the oldest dead tasks until we hit the target trigger
ret = sorted(dead, key=lambda x: x.task.end_time)[-TASK_CLEANUP_TRIGGER:] + alive
self.dequeued = sorted(ret, key=lambda y: y.num)
self.dequeued = sorted(ret, key=lambda x: x.num)
# Main thread loop starting the different tasks
def run(self):
@ -145,21 +141,11 @@ class WorkerThread(threading.Thread):
# sometimes tasks (like Upload) don't actually have work to do and are created as already finished
if item.task.stat is STAT_WAITING:
# CalibreTask.start() should wrap all exceptions in its own error handling
item.task.start(self)
# remove self_cleanup tasks and hidden "System Tasks" from list
if item.task.self_cleanup or item.hidden:
self.dequeued.remove(item)
self.queue.task_done()
def end_task(self, task_id):
ins = self.get_instance()
for __, __, __, task, __ in ins.tasks:
if str(task.id) == str(task_id) and task.is_cancellable:
task.stat = STAT_CANCELLED if task.stat == STAT_WAITING else STAT_ENDED
class CalibreTask:
__metaclass__ = abc.ABCMeta
@ -172,12 +158,10 @@ class CalibreTask:
self.end_time = None
self.message = message
self.id = uuid.uuid4()
self.self_cleanup = False
self._scheduled = False
@abc.abstractmethod
def run(self, worker_thread):
"""The main entry-point for this task"""
"""Provides the caller some human-readable name for this class"""
raise NotImplementedError
@abc.abstractmethod
@ -185,11 +169,6 @@ class CalibreTask:
"""Provides the caller some human-readable name for this class"""
raise NotImplementedError
@abc.abstractmethod
def is_cancellable(self):
"""Does this task gracefully handle being cancelled (STAT_ENDED, STAT_CANCELLED)?"""
raise NotImplementedError
def start(self, *args):
self.start_time = datetime.now()
self.stat = STAT_STARTED
@ -199,7 +178,7 @@ class CalibreTask:
self.run(*args)
except Exception as ex:
self._handleError(str(ex))
log.error_or_exception(ex)
log.debug_or_exception(ex)
self.end_time = datetime.now()
@ -240,23 +219,15 @@ class CalibreTask:
We have a separate property dictating this because there may be certain tasks that want to override this
"""
# By default, we're good to clean a task if it's "Done"
return self.stat in (STAT_FINISH_SUCCESS, STAT_FAIL, STAT_ENDED, STAT_CANCELLED)
@property
def self_cleanup(self):
return self._self_cleanup
return self.stat in (STAT_FINISH_SUCCESS, STAT_FAIL)
@self_cleanup.setter
def self_cleanup(self, is_self_cleanup):
self._self_cleanup = is_self_cleanup
@property
def scheduled(self):
return self._scheduled
@scheduled.setter
def scheduled(self, is_scheduled):
self._scheduled = is_scheduled
'''@progress.setter
def progress(self, x):
if x > 1:
x = 1
if x < 0:
x = 0
self._progress = x'''
def _handleError(self, error_message):
self.stat = STAT_FAIL
@ -266,6 +237,3 @@ class CalibreTask:
def _handleSuccess(self):
self.stat = STAT_FINISH_SUCCESS
self.progress = 1
def __str__(self):
return self.name
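
The cleanup in WorkerThread above keeps every running task and, once the finished ones pile up past TASK_CLEANUP_TRIGGER, drops the oldest dead entries by end_time so the task list stays bounded. A simplified standalone sketch of that pruning step, using plain dicts instead of CalibreTask objects:

# Simplified sketch of the dead-task pruning described above.
TASK_CLEANUP_TRIGGER = 20

def prune(tasks):
    alive = [t for t in tasks if t["end_time"] is None]
    dead = [t for t in tasks if t["end_time"] is not None]
    dead = sorted(dead, key=lambda t: t["end_time"])[-TASK_CLEANUP_TRIGGER:]
    return sorted(dead + alive, key=lambda t: t["num"])

# 30 finished tasks plus one running task -> 20 newest finished + the running one
tasks = [{"num": i, "end_time": i} for i in range(30)] + [{"num": 30, "end_time": None}]
print(len(prune(tasks)))  # 21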

@ -33,9 +33,27 @@ from . import calibre_db, config, db, logger, ub
from .render_template import render_title_template
from .usermanagement import login_required_if_no_ano
shelf = Blueprint('shelf', __name__)
log = logger.create()
shelf = Blueprint('shelf', __name__)
def check_shelf_edit_permissions(cur_shelf):
if not cur_shelf.is_public and not cur_shelf.user_id == int(current_user.id):
log.error("User {} not allowed to edit shelf: {}".format(current_user.id, cur_shelf.name))
return False
if cur_shelf.is_public and not current_user.role_edit_shelfs():
log.info("User {} not allowed to edit public shelves".format(current_user.id))
return False
return True
def check_shelf_view_permissions(cur_shelf):
if cur_shelf.is_public:
return True
if current_user.is_anonymous or cur_shelf.user_id != current_user.id:
log.error("User is unauthorized to view non-public shelf: {}".format(cur_shelf.name))
return False
return True
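
check_shelf_edit_permissions and check_shelf_view_permissions encode two rules: a private shelf is only usable by its owner, and a public shelf additionally requires the edit-shelves role before it can be changed. A tiny sketch of those rules with plain namedtuples instead of Flask-Login objects, to make the decision table explicit:

# Plain-object sketch of the shelf permission rules shown above.
from collections import namedtuple

User = namedtuple("User", "id can_edit_public is_anonymous")
Shelf = namedtuple("Shelf", "user_id is_public")

def can_edit(user, shelf):
    if not shelf.is_public and shelf.user_id != user.id:
        return False  # private shelf, not the owner
    if shelf.is_public and not user.can_edit_public:
        return False  # public shelf, edit-shelves role missing
    return True

def can_view(user, shelf):
    if shelf.is_public:
        return True
    return not user.is_anonymous and shelf.user_id == user.id

owner = User(id=1, can_edit_public=False, is_anonymous=False)
guest = User(id=0, can_edit_public=False, is_anonymous=True)
private_shelf = Shelf(user_id=1, is_public=False)
print(can_edit(owner, private_shelf), can_view(guest, private_shelf))  # True False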
@shelf.route("/shelf/add/<int:shelf_id>/<int:book_id>", methods=["POST"])
@ -46,13 +64,13 @@ def add_to_shelf(shelf_id, book_id):
if shelf is None:
log.error("Invalid shelf specified: %s", shelf_id)
if not xhr:
flash(_("Invalid shelf specified"), category="error")
flash(_(u"Invalid shelf specified"), category="error")
return redirect(url_for('web.index'))
return "Invalid shelf specified", 400
if not check_shelf_edit_permissions(shelf):
if not xhr:
flash(_("Sorry you are not allowed to add a book to that shelf"), category="error")
flash(_(u"Sorry you are not allowed to add a book to that shelf"), category="error")
return redirect(url_for('web.index'))
return "Sorry you are not allowed to add a book to the that shelf", 403
@ -61,7 +79,7 @@ def add_to_shelf(shelf_id, book_id):
if book_in_shelf:
log.error("Book %s is already part of %s", book_id, shelf)
if not xhr:
flash(_("Book is already part of the shelf: %(shelfname)s", shelfname=shelf.name), category="error")
flash(_(u"Book is already part of the shelf: %(shelfname)s", shelfname=shelf.name), category="error")
return redirect(url_for('web.index'))
return "Book is already part of the shelf: %s" % shelf.name, 400
@ -71,30 +89,22 @@ def add_to_shelf(shelf_id, book_id):
else:
maxOrder = maxOrder[0]
if not calibre_db.session.query(db.Books).filter(db.Books.id == book_id).one_or_none():
log.error("Invalid Book Id: %s. Could not be added to shelf %s", book_id, shelf.name)
if not xhr:
flash(_("%(book_id)s is a invalid Book Id. Could not be added to Shelf", book_id=book_id),
category="error")
return redirect(url_for('web.index'))
return "%s is a invalid Book Id. Could not be added to Shelf" % book_id, 400
shelf.books.append(ub.BookShelf(shelf=shelf.id, book_id=book_id, order=maxOrder + 1))
shelf.last_modified = datetime.utcnow()
try:
ub.session.merge(shelf)
ub.session.commit()
except (OperationalError, InvalidRequestError) as e:
except (OperationalError, InvalidRequestError):
ub.session.rollback()
log.error_or_exception("Settings Database error: {}".format(e))
flash(_("Oops! Database Error: %(error)s.", error=e.orig), category="error")
log.error("Settings DB is not Writeable")
flash(_(u"Settings DB is not Writeable"), category="error")
if "HTTP_REFERER" in request.environ:
return redirect(request.environ["HTTP_REFERER"])
else:
return redirect(url_for('web.index'))
if not xhr:
log.debug("Book has been added to shelf: {}".format(shelf.name))
flash(_("Book has been added to shelf: %(sname)s", sname=shelf.name), category="success")
flash(_(u"Book has been added to shelf: %(sname)s", sname=shelf.name), category="success")
if "HTTP_REFERER" in request.environ:
return redirect(request.environ["HTTP_REFERER"])
else:
@ -108,12 +118,12 @@ def search_to_shelf(shelf_id):
shelf = ub.session.query(ub.Shelf).filter(ub.Shelf.id == shelf_id).first()
if shelf is None:
log.error("Invalid shelf specified: {}".format(shelf_id))
flash(_("Invalid shelf specified"), category="error")
flash(_(u"Invalid shelf specified"), category="error")
return redirect(url_for('web.index'))
if not check_shelf_edit_permissions(shelf):
log.warning("You are not allowed to add a book to the shelf".format(shelf.name))
flash(_("You are not allowed to add a book to the shelf"), category="error")
flash(_(u"You are not allowed to add a book to the shelf"), category="error")
return redirect(url_for('web.index'))
if current_user.id in ub.searched_ids and ub.searched_ids[current_user.id]:
@ -131,7 +141,7 @@ def search_to_shelf(shelf_id):
if not books_for_shelf:
log.error("Books are already part of {}".format(shelf.name))
flash(_("Books are already part of the shelf: %(name)s", name=shelf.name), category="error")
flash(_(u"Books are already part of the shelf: %(name)s", name=shelf.name), category="error")
return redirect(url_for('web.index'))
maxOrder = ub.session.query(func.max(ub.BookShelf.order)).filter(ub.BookShelf.shelf == shelf_id).first()[0] or 0
@ -143,14 +153,14 @@ def search_to_shelf(shelf_id):
try:
ub.session.merge(shelf)
ub.session.commit()
flash(_("Books have been added to shelf: %(sname)s", sname=shelf.name), category="success")
except (OperationalError, InvalidRequestError) as e:
flash(_(u"Books have been added to shelf: %(sname)s", sname=shelf.name), category="success")
except (OperationalError, InvalidRequestError):
ub.session.rollback()
log.error_or_exception("Settings Database error: {}".format(e))
flash(_("Oops! Database Error: %(error)s.", error=e.orig), category="error")
log.error("Settings DB is not Writeable")
flash(_("Settings DB is not Writeable"), category="error")
else:
log.error("Could not add books to shelf: {}".format(shelf.name))
flash(_("Could not add books to shelf: %(sname)s", sname=shelf.name), category="error")
flash(_(u"Could not add books to shelf: %(sname)s", sname=shelf.name), category="error")
return redirect(url_for('web.index'))
@ -187,16 +197,16 @@ def remove_from_shelf(shelf_id, book_id):
ub.session.delete(book_shelf)
shelf.last_modified = datetime.utcnow()
ub.session.commit()
except (OperationalError, InvalidRequestError) as e:
except (OperationalError, InvalidRequestError):
ub.session.rollback()
log.error_or_exception("Settings Database error: {}".format(e))
flash(_("Oops! Database Error: %(error)s.", error=e.orig), category="error")
log.error("Settings DB is not Writeable")
flash(_("Settings DB is not Writeable"), category="error")
if "HTTP_REFERER" in request.environ:
return redirect(request.environ["HTTP_REFERER"])
else:
return redirect(url_for('web.index'))
if not xhr:
flash(_("Book has been removed from shelf: %(sname)s", sname=shelf.name), category="success")
flash(_(u"Book has been removed from shelf: %(sname)s", sname=shelf.name), category="success")
if "HTTP_REFERER" in request.environ:
return redirect(request.environ["HTTP_REFERER"])
else:
@ -205,7 +215,7 @@ def remove_from_shelf(shelf_id, book_id):
else:
if not xhr:
log.warning("You are not allowed to remove a book from shelf: {}".format(shelf.name))
flash(_("Sorry you are not allowed to remove a book from this shelf"),
flash(_(u"Sorry you are not allowed to remove a book from this shelf"),
category="error")
return redirect(url_for('web.index'))
return "Sorry you are not allowed to remove a book from this shelf", 403
@ -215,7 +225,7 @@ def remove_from_shelf(shelf_id, book_id):
@login_required
def create_shelf():
shelf = ub.Shelf()
return create_edit_shelf(shelf, page_title=_("Create a Shelf"), page="shelfcreate")
return create_edit_shelf(shelf, page_title=_(u"Create a Shelf"), page="shelfcreate")
@shelf.route("/shelf/edit/<int:shelf_id>", methods=["GET", "POST"])
@ -223,95 +233,9 @@ def create_shelf():
def edit_shelf(shelf_id):
shelf = ub.session.query(ub.Shelf).filter(ub.Shelf.id == shelf_id).first()
if not check_shelf_edit_permissions(shelf):
flash(_("Sorry you are not allowed to edit this shelf"), category="error")
flash(_(u"Sorry you are not allowed to edit this shelf"), category="error")
return redirect(url_for('web.index'))
return create_edit_shelf(shelf, page_title=_("Edit a shelf"), page="shelfedit", shelf_id=shelf_id)
@shelf.route("/shelf/delete/<int:shelf_id>", methods=["POST"])
@login_required
def delete_shelf(shelf_id):
cur_shelf = ub.session.query(ub.Shelf).filter(ub.Shelf.id == shelf_id).first()
try:
if not delete_shelf_helper(cur_shelf):
flash(_("Error deleting Shelf"), category="error")
else:
flash(_("Shelf successfully deleted"), category="success")
except InvalidRequestError as e:
ub.session.rollback()
log.error_or_exception("Settings Database error: {}".format(e))
flash(_("Oops! Database Error: %(error)s.", error=e.orig), category="error")
return redirect(url_for('web.index'))
@shelf.route("/simpleshelf/<int:shelf_id>")
@login_required_if_no_ano
def show_simpleshelf(shelf_id):
return render_show_shelf(2, shelf_id, 1, None)
@shelf.route("/shelf/<int:shelf_id>", defaults={"sort_param": "order", 'page': 1})
@shelf.route("/shelf/<int:shelf_id>/<sort_param>", defaults={'page': 1})
@shelf.route("/shelf/<int:shelf_id>/<sort_param>/<int:page>")
@login_required_if_no_ano
def show_shelf(shelf_id, sort_param, page):
return render_show_shelf(1, shelf_id, page, sort_param)
@shelf.route("/shelf/order/<int:shelf_id>", methods=["GET", "POST"])
@login_required
def order_shelf(shelf_id):
shelf = ub.session.query(ub.Shelf).filter(ub.Shelf.id == shelf_id).first()
if shelf and check_shelf_view_permissions(shelf):
if request.method == "POST":
to_save = request.form.to_dict()
books_in_shelf = ub.session.query(ub.BookShelf).filter(ub.BookShelf.shelf == shelf_id).order_by(
ub.BookShelf.order.asc()).all()
counter = 0
for book in books_in_shelf:
setattr(book, 'order', to_save[str(book.book_id)])
counter += 1
# if order different from before -> shelf.last_modified = datetime.utcnow()
try:
ub.session.commit()
except (OperationalError, InvalidRequestError) as e:
ub.session.rollback()
log.error_or_exception("Settings Database error: {}".format(e))
flash(_("Oops! Database Error: %(error)s.", error=e.orig), category="error")
result = list()
if shelf:
result = calibre_db.session.query(db.Books) \
.join(ub.BookShelf, ub.BookShelf.book_id == db.Books.id, isouter=True) \
.add_columns(calibre_db.common_filters().label("visible")) \
.filter(ub.BookShelf.shelf == shelf_id).order_by(ub.BookShelf.order.asc()).all()
return render_title_template('shelf_order.html', entries=result,
title=_("Change order of Shelf: '%(name)s'", name=shelf.name),
shelf=shelf, page="shelforder")
else:
abort(404)
def check_shelf_edit_permissions(cur_shelf):
if not cur_shelf.is_public and not cur_shelf.user_id == int(current_user.id):
log.error("User {} not allowed to edit shelf: {}".format(current_user.id, cur_shelf.name))
return False
if cur_shelf.is_public and not current_user.role_edit_shelfs():
log.info("User {} not allowed to edit public shelves".format(current_user.id))
return False
return True
def check_shelf_view_permissions(cur_shelf):
try:
if cur_shelf.is_public:
return True
if current_user.is_anonymous or cur_shelf.user_id != current_user.id:
log.error("User is unauthorized to view non-public shelf: {}".format(cur_shelf.name))
return False
except Exception as e:
log.error(e)
return True
return create_edit_shelf(shelf, page_title=_(u"Edit a shelf"), page="shelfedit", shelf_id=shelf_id)
# if shelf ID is set, we are editing a shelf
@ -321,7 +245,7 @@ def create_edit_shelf(shelf, page_title, page, shelf_id=False):
if request.method == "POST":
to_save = request.form.to_dict()
if not current_user.role_edit_shelfs() and to_save.get("is_public") == "on":
flash(_("Sorry you are not allowed to create a public shelf"), category="error")
flash(_(u"Sorry you are not allowed to create a public shelf"), category="error")
return redirect(url_for('web.index'))
is_public = 1 if to_save.get("is_public") == "on" else 0
if config.config_kobo_sync:
@ -331,31 +255,31 @@ def create_edit_shelf(shelf, page_title, page, shelf_id=False):
ub.ShelfArchive.uuid == shelf.uuid).delete()
ub.session_commit()
shelf_title = to_save.get("title", "")
if check_shelf_is_unique(shelf_title, is_public, shelf_id):
if check_shelf_is_unique(shelf, shelf_title, is_public, shelf_id):
shelf.name = shelf_title
shelf.is_public = is_public
if not shelf_id:
shelf.user_id = int(current_user.id)
ub.session.add(shelf)
shelf_action = "created"
flash_text = _("Shelf %(title)s created", title=shelf_title)
flash_text = _(u"Shelf %(title)s created", title=shelf_title)
else:
shelf_action = "changed"
flash_text = _("Shelf %(title)s changed", title=shelf_title)
flash_text = _(u"Shelf %(title)s changed", title=shelf_title)
try:
ub.session.commit()
log.info("Shelf {} {}".format(shelf_title, shelf_action))
log.info(u"Shelf {} {}".format(shelf_title, shelf_action))
flash(flash_text, category="success")
return redirect(url_for('shelf.show_shelf', shelf_id=shelf.id))
except (OperationalError, InvalidRequestError) as ex:
ub.session.rollback()
log.error_or_exception(ex)
log.error_or_exception("Settings Database error: {}".format(ex))
flash(_("Oops! Database Error: %(error)s.", error=ex.orig), category="error")
log.debug_or_exception(ex)
log.error("Settings DB is not Writeable")
flash(_("Settings DB is not Writeable"), category="error")
except Exception as ex:
ub.session.rollback()
log.error_or_exception(ex)
flash(_("There was an error"), category="error")
log.debug_or_exception(ex)
flash(_(u"There was an error"), category="error")
return render_title_template('shelf_edit.html',
shelf=shelf,
title=page_title,
@ -364,7 +288,7 @@ def create_edit_shelf(shelf, page_title, page, shelf_id=False):
sync_only_selected_shelves=sync_only_selected_shelves)
def check_shelf_is_unique(title, is_public, shelf_id=False):
def check_shelf_is_unique(shelf, title, is_public, shelf_id=False):
if shelf_id:
ident = ub.Shelf.id != shelf_id
else:
@ -377,7 +301,7 @@ def check_shelf_is_unique(title, is_public, shelf_id=False):
if not is_shelf_name_unique:
log.error("A public shelf with the name '{}' already exists.".format(title))
flash(_("A public shelf with the name '%(title)s' already exists.", title=title),
flash(_(u"A public shelf with the name '%(title)s' already exists.", title=title),
category="error")
else:
is_shelf_name_unique = ub.session.query(ub.Shelf) \
@ -388,7 +312,7 @@ def check_shelf_is_unique(title, is_public, shelf_id=False):
if not is_shelf_name_unique:
log.error("A private shelf with the name '{}' already exists.".format(title))
flash(_("A private shelf with the name '%(title)s' already exists.", title=title),
flash(_(u"A private shelf with the name '%(title)s' already exists.", title=title),
category="error")
return is_shelf_name_unique
@ -404,6 +328,70 @@ def delete_shelf_helper(cur_shelf):
return True
@shelf.route("/shelf/delete/<int:shelf_id>", methods=["POST"])
@login_required
def delete_shelf(shelf_id):
cur_shelf = ub.session.query(ub.Shelf).filter(ub.Shelf.id == shelf_id).first()
try:
if not delete_shelf_helper(cur_shelf):
flash(_("Error deleting Shelf"), category="error")
else:
flash(_("Shelf successfully deleted"), category="success")
except InvalidRequestError:
ub.session.rollback()
log.error("Settings DB is not Writeable")
flash(_("Settings DB is not Writeable"), category="error")
return redirect(url_for('web.index'))
@shelf.route("/simpleshelf/<int:shelf_id>")
@login_required_if_no_ano
def show_simpleshelf(shelf_id):
return render_show_shelf(2, shelf_id, 1, None)
@shelf.route("/shelf/<int:shelf_id>", defaults={"sort_param": "order", 'page': 1})
@shelf.route("/shelf/<int:shelf_id>/<sort_param>", defaults={'page': 1})
@shelf.route("/shelf/<int:shelf_id>/<sort_param>/<int:page>")
@login_required_if_no_ano
def show_shelf(shelf_id, sort_param, page):
return render_show_shelf(1, shelf_id, page, sort_param)
@shelf.route("/shelf/order/<int:shelf_id>", methods=["GET", "POST"])
@login_required
def order_shelf(shelf_id):
shelf = ub.session.query(ub.Shelf).filter(ub.Shelf.id == shelf_id).first()
if shelf and check_shelf_view_permissions(shelf):
if request.method == "POST":
to_save = request.form.to_dict()
books_in_shelf = ub.session.query(ub.BookShelf).filter(ub.BookShelf.shelf == shelf_id).order_by(
ub.BookShelf.order.asc()).all()
counter = 0
for book in books_in_shelf:
setattr(book, 'order', to_save[str(book.book_id)])
counter += 1
# if order different from before -> shelf.last_modified = datetime.utcnow()
try:
ub.session.commit()
except (OperationalError, InvalidRequestError):
ub.session.rollback()
log.error("Settings DB is not Writeable")
flash(_("Settings DB is not Writeable"), category="error")
result = list()
if shelf:
result = calibre_db.session.query(db.Books) \
.join(ub.BookShelf, ub.BookShelf.book_id == db.Books.id, isouter=True) \
.add_columns(calibre_db.common_filters().label("visible")) \
.filter(ub.BookShelf.shelf == shelf_id).order_by(ub.BookShelf.order.asc()).all()
return render_title_template('shelf_order.html', entries=result,
title=_(u"Change order of Shelf: '%(name)s'", name=shelf.name),
shelf=shelf, page="shelforder")
else:
abort(404)
def change_shelf_order(shelf_id, order):
result = calibre_db.session.query(db.Books).outerjoin(db.books_series_link,
db.Books.id == db.books_series_link.c.book)\
@ -451,7 +439,7 @@ def render_show_shelf(shelf_type, shelf_id, page_no, sort_param):
db.Books,
ub.BookShelf.shelf == shelf_id,
[ub.BookShelf.order.asc()],
True, config.config_read_column,
False, 0,
ub.BookShelf, ub.BookShelf.book_id == db.Books.id)
# delete shelf entries where the book no longer exists, which can happen if a book is deleted outside calibre-web
wrong_entries = calibre_db.session.query(ub.BookShelf) \
@ -462,17 +450,17 @@ def render_show_shelf(shelf_type, shelf_id, page_no, sort_param):
try:
ub.session.query(ub.BookShelf).filter(ub.BookShelf.book_id == entry.book_id).delete()
ub.session.commit()
except (OperationalError, InvalidRequestError) as e:
except (OperationalError, InvalidRequestError):
ub.session.rollback()
log.error_or_exception("Settings Database error: {}".format(e))
flash(_("Oops! Database Error: %(error)s.", error=e.orig), category="error")
log.error("Settings DB is not Writeable")
flash(_("Settings DB is not Writeable"), category="error")
return render_title_template(page,
entries=result,
pagination=pagination,
title=_("Shelf: '%(name)s'", name=shelf.name),
title=_(u"Shelf: '%(name)s'", name=shelf.name),
shelf=shelf,
page="shelf")
else:
flash(_("Error opening shelf. Shelf does not exist or is not accessible"), category="error")
flash(_(u"Error opening shelf. Shelf does not exist or is not accessible"), category="error")
return redirect(url_for("web.index"))
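The shelf routes above all funnel through check_shelf_edit_permissions and check_shelf_view_permissions. A condensed, framework-free sketch of that decision logic follows; the User and Shelf dataclasses are plain stand-ins for the Flask-Login/SQLAlchemy objects, and can_edit_public_shelves stands in for role_edit_shelfs().

from dataclasses import dataclass

@dataclass
class User:
    id: int
    is_anonymous: bool = False
    can_edit_public_shelves: bool = True   # stand-in for role_edit_shelfs()

@dataclass
class Shelf:
    name: str
    user_id: int
    is_public: bool = False

def may_edit(user: User, shelf: Shelf) -> bool:
    # Private shelves: only the owner may edit.
    if not shelf.is_public and shelf.user_id != user.id:
        return False
    # Public shelves: additionally require the "edit public shelves" role.
    if shelf.is_public and not user.can_edit_public_shelves:
        return False
    return True

def may_view(user: User, shelf: Shelf) -> bool:
    # Public shelves are visible to everyone, including anonymous users.
    if shelf.is_public:
        return True
    # Private shelves are visible to their owner only.
    return not user.is_anonymous and shelf.user_id == user.id

owner = User(id=1)
guest = User(id=0, is_anonymous=True)
private = Shelf("To Read", user_id=1)
print(may_edit(owner, private), may_view(guest, private))   # -> True False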

@ -3290,13 +3290,10 @@ div.btn-group[role=group][aria-label="Download, send to Kindle, reading"] .dropd
-ms-transform-origin: center top;
transform-origin: center top;
border: 0;
overflow-y: auto;
}
.dropdown-menu:not(.datepicker-dropdown):not(.profileDropli) {
left: 0 !important;
overflow-y: auto;
}
#add-to-shelves {
min-height: 48px;
max-height: calc(100% - 120px);
overflow-y: auto;
}
@ -4426,6 +4423,38 @@ body.advanced_search > div.container-fluid > div.row-fluid > div.col-sm-10 > div
left: 49px;
margin-top: 5px
}
body:not(.blur) > .navbar > .container-fluid > .navbar-header:after, body:not(.blur) > .navbar > .container-fluid > .navbar-header:before {
color: hsla(0, 0%, 100%, .7);
cursor: pointer;
display: block;
font-family: plex-icons-new, serif;
font-size: 20px;
font-stretch: 100%;
font-style: normal;
font-variant-caps: normal;
font-variant-east-asian: normal;
font-variant-numeric: normal;
font-weight: 400;
height: 60px;
letter-spacing: normal;
line-height: 60px;
position: absolute
}
body:not(.blur) > .navbar > .container-fluid > .navbar-header:before {
content: "\EA30";
-webkit-font-variant-ligatures: normal;
font-variant-ligatures: normal;
left: 20px
}
body:not(.blur) > .navbar > .container-fluid > .navbar-header:after {
content: "\EA2F";
-webkit-font-variant-ligatures: normal;
font-variant-ligatures: normal;
left: 60px
}
}
body.admin > div.container-fluid > div > div.col-sm-10 > div.container-fluid > div.row:first-of-type > div.col > h2:before, body.admin > div.container-fluid > div > div.col-sm-10 > div.discover > h2:first-of-type:before, body.edituser.admin > div.container-fluid > div.row-fluid > div.col-sm-10 > div.discover > h1:before, body.newuser.admin > div.container-fluid > div.row-fluid > div.col-sm-10 > div.discover > h1:before {
@ -4813,14 +4842,8 @@ body.advsearch:not(.blur) > div.container-fluid > div.row-fluid > div.col-sm-10
z-index: 999999999999999999999999999999999999
}
body.search #shelf-actions button#add-to-shelf {
height: 40px;
}
@media screen and (max-width: 767px) {
body.search .discover, body.advsearch .discover {
display: flex;
flex-direction: column;
}
.search #shelf-actions, body.login .home-btn {
display: none
}
body.read:not(.blur) a[href*=readbooks] {
@ -5127,7 +5150,7 @@ body.login > div.navbar.navbar-default.navbar-static-top > div > div.navbar-head
pointer-events: none
}
#DeleteDomain:hover:before, #RestartDialog:hover:before, #ShutdownDialog:hover:before, #StatusDialog:hover:before, #deleteButton, #deleteModal:hover:before, #cancelTaskModal:hover:before, body.mailset > div.container-fluid > div > div.col-sm-10 > div.discover td > a:hover {
#DeleteDomain:hover:before, #RestartDialog:hover:before, #ShutdownDialog:hover:before, #StatusDialog:hover:before, #deleteButton, #deleteModal:hover:before, body.mailset > div.container-fluid > div > div.col-sm-10 > div.discover td > a:hover {
cursor: pointer
}
@ -5141,7 +5164,7 @@ body.login > div.navbar.navbar-default.navbar-static-top > div > div.navbar-head
right: 5px
}
body:not(.search) #shelf-actions > .btn-group.open, .downloadBtn.open, .profileDrop[aria-expanded=true] {
#shelf-actions > .btn-group.open, .downloadBtn.open, .profileDrop[aria-expanded=true] {
pointer-events: none
}
@ -5158,7 +5181,7 @@ body:not(.search) #shelf-actions > .btn-group.open, .downloadBtn.open, .profileD
color: var(--color-primary)
}
body:not(.search) #shelf-actions, body:not(.search) #shelf-actions > .btn-group, body:not(.search) #shelf-actions > .btn-group > .empty-ul {
#shelf-actions, #shelf-actions > .btn-group, #shelf-actions > .btn-group > .empty-ul {
pointer-events: none
}
@ -5214,11 +5237,7 @@ body.admin > div.container-fluid > div > div.col-sm-10 > div.container-fluid > d
margin-bottom: 20px
}
body.admin > div.container-fluid div.scheduled_tasks_details {
margin-bottom: 20px
}
body.admin .btn-default {
body.admin:not(.modal-open) .btn-default {
margin-bottom: 10px
}
@ -5449,7 +5468,7 @@ body.admin.modal-open .navbar {
z-index: 0 !important
}
#RestartDialog, #ShutdownDialog, #StatusDialog, #deleteModal, #cancelTaskModal {
#RestartDialog, #ShutdownDialog, #StatusDialog, #deleteModal {
top: 0;
overflow: hidden;
padding-top: 70px;
@ -5459,7 +5478,7 @@ body.admin.modal-open .navbar {
background: rgba(0, 0, 0, .5)
}
#RestartDialog:before, #ShutdownDialog:before, #StatusDialog:before, #deleteModal:before, #cancelTaskModal:before {
#RestartDialog:before, #ShutdownDialog:before, #StatusDialog:before, #deleteModal:before {
content: "\E208";
padding-right: 10px;
display: block;
@ -5481,18 +5500,18 @@ body.admin.modal-open .navbar {
z-index: 99
}
#RestartDialog.in:before, #ShutdownDialog.in:before, #StatusDialog.in:before, #deleteModal.in:before, #cancelTaskModal.in:before {
#RestartDialog.in:before, #ShutdownDialog.in:before, #StatusDialog.in:before, #deleteModal.in:before {
-webkit-transform: translate(0, 0);
-ms-transform: translate(0, 0);
transform: translate(0, 0)
}
#RestartDialog > .modal-dialog, #ShutdownDialog > .modal-dialog, #StatusDialog > .modal-dialog, #deleteModal > .modal-dialog, #cancelTaskModal > .modal-dialog {
#RestartDialog > .modal-dialog, #ShutdownDialog > .modal-dialog, #StatusDialog > .modal-dialog, #deleteModal > .modal-dialog {
width: 450px;
margin: auto
}
#RestartDialog > .modal-dialog > .modal-content, #ShutdownDialog > .modal-dialog > .modal-content, #StatusDialog > .modal-dialog > .modal-content, #deleteModal > .modal-dialog > .modal-content, #cancelTaskModal > .modal-dialog > .modal-content {
#RestartDialog > .modal-dialog > .modal-content, #ShutdownDialog > .modal-dialog > .modal-content, #StatusDialog > .modal-dialog > .modal-content, #deleteModal > .modal-dialog > .modal-content {
max-height: calc(100% - 90px);
-webkit-box-shadow: 0 5px 15px rgba(0, 0, 0, .5);
box-shadow: 0 5px 15px rgba(0, 0, 0, .5);
@ -5503,7 +5522,7 @@ body.admin.modal-open .navbar {
width: 450px
}
#RestartDialog > .modal-dialog > .modal-content > .modal-header, #ShutdownDialog > .modal-dialog > .modal-content > .modal-header, #StatusDialog > .modal-dialog > .modal-content > .modal-header, #deleteModal > .modal-dialog > .modal-content > .modal-header, #cancelTaskModal > .modal-dialog > .modal-content > .modal-header {
#RestartDialog > .modal-dialog > .modal-content > .modal-header, #ShutdownDialog > .modal-dialog > .modal-content > .modal-header, #StatusDialog > .modal-dialog > .modal-content > .modal-header, #deleteModal > .modal-dialog > .modal-content > .modal-header {
padding: 15px 20px;
border-radius: 3px 3px 0 0;
line-height: 1.71428571;
@ -5516,7 +5535,7 @@ body.admin.modal-open .navbar {
text-align: left
}
#RestartDialog > .modal-dialog > .modal-content > .modal-header:before, #ShutdownDialog > .modal-dialog > .modal-content > .modal-header:before, #StatusDialog > .modal-dialog > .modal-content > .modal-header:before, #deleteModal > .modal-dialog > .modal-content > .modal-header:before, #cancelTaskModal > .modal-dialog > .modal-content > .modal-header:before {
#RestartDialog > .modal-dialog > .modal-content > .modal-header:before, #ShutdownDialog > .modal-dialog > .modal-content > .modal-header:before, #StatusDialog > .modal-dialog > .modal-content > .modal-header:before, #deleteModal > .modal-dialog > .modal-content > .modal-header:before {
padding-right: 10px;
font-size: 18px;
color: #999;
@ -5545,11 +5564,6 @@ body.admin.modal-open .navbar {
font-family: plex-icons-new, serif
}
#cancelTaskModal > .modal-dialog > .modal-content > .modal-header:before {
content: "\EA6D";
font-family: plex-icons-new, serif
}
#RestartDialog > .modal-dialog > .modal-content > .modal-header:after {
content: "Restart Calibre-Web";
display: inline-block;
@ -5574,13 +5588,7 @@ body.admin.modal-open .navbar {
font-size: 20px
}
#cancelTaskModal > .modal-dialog > .modal-content > .modal-header:after {
content: "Delete Book";
display: inline-block;
font-size: 20px
}
#StatusDialog > .modal-dialog > .modal-content > .modal-header > span, #deleteModal > .modal-dialog > .modal-content > .modal-header > span, #cancelTaskModal > .modal-dialog > .modal-content > .modal-header > span, #loader > center > img, .rating-mobile {
#StatusDialog > .modal-dialog > .modal-content > .modal-header > span, #deleteModal > .modal-dialog > .modal-content > .modal-header > span, #loader > center > img, .rating-mobile {
display: none
}
@ -5594,7 +5602,7 @@ body.admin.modal-open .navbar {
text-align: left
}
#ShutdownDialog > .modal-dialog > .modal-content > .modal-body, #StatusDialog > .modal-dialog > .modal-content > .modal-body, #deleteModal > .modal-dialog > .modal-content > .modal-body, #cancelTaskModal > .modal-dialog > .modal-content > .modal-body {
#ShutdownDialog > .modal-dialog > .modal-content > .modal-body, #StatusDialog > .modal-dialog > .modal-content > .modal-body, #deleteModal > .modal-dialog > .modal-content > .modal-body {
padding: 20px 20px 40px;
font-size: 16px;
line-height: 1.6em;
@ -5604,7 +5612,7 @@ body.admin.modal-open .navbar {
text-align: left
}
#RestartDialog > .modal-dialog > .modal-content > .modal-body > p, #ShutdownDialog > .modal-dialog > .modal-content > .modal-body > p, #StatusDialog > .modal-dialog > .modal-content > .modal-body > p, #deleteModal > .modal-dialog > .modal-content > .modal-body > p, #cancelTaskModal > .modal-dialog > .modal-content > .modal-body > p {
#RestartDialog > .modal-dialog > .modal-content > .modal-body > p, #ShutdownDialog > .modal-dialog > .modal-content > .modal-body > p, #StatusDialog > .modal-dialog > .modal-content > .modal-body > p, #deleteModal > .modal-dialog > .modal-content > .modal-body > p {
padding: 20px 20px 0 0;
font-size: 16px;
line-height: 1.6em;
@ -5613,7 +5621,7 @@ body.admin.modal-open .navbar {
background: #282828
}
#RestartDialog > .modal-dialog > .modal-content > .modal-body > .btn-default:not(#restart), #ShutdownDialog > .modal-dialog > .modal-content > .modal-body > .btn-default:not(#shutdown), #deleteModal > .modal-dialog > .modal-content > .modal-footer > .btn-default, #cancelTaskModal > .modal-dialog > .modal-content > .modal-footer > .btn-default {
#RestartDialog > .modal-dialog > .modal-content > .modal-body > .btn-default:not(#restart), #ShutdownDialog > .modal-dialog > .modal-content > .modal-body > .btn-default:not(#shutdown), #deleteModal > .modal-dialog > .modal-content > .modal-footer > .btn-default {
float: right;
z-index: 9;
position: relative;
@ -5661,18 +5669,6 @@ body.admin.modal-open .navbar {
border-radius: 3px
}
#cancelTaskModal > .modal-dialog > .modal-content > .modal-footer > .btn-danger {
float: right;
z-index: 9;
position: relative;
margin: 0 0 0 10px;
min-width: 80px;
padding: 10px 18px;
font-size: 16px;
line-height: 1.33;
border-radius: 3px
}
#RestartDialog > .modal-dialog > .modal-content > .modal-body > .btn-default:not(#restart) {
margin: 25px 0 0 10px
}
@ -5685,11 +5681,7 @@ body.admin.modal-open .navbar {
margin: 0 0 0 10px
}
#cancelTaskModal > .modal-dialog > .modal-content > .modal-footer > .btn-default {
margin: 0 0 0 10px
}
#RestartDialog > .modal-dialog > .modal-content > .modal-body > .btn-default:not(#restart):hover, #ShutdownDialog > .modal-dialog > .modal-content > .modal-body > .btn-default:not(#shutdown):hover, #deleteModal > .modal-dialog > .modal-content > .modal-footer > .btn-default:hover, #cancelTaskModal > .modal-dialog > .modal-content > .modal-footer > .btn-default:hover {
#RestartDialog > .modal-dialog > .modal-content > .modal-body > .btn-default:not(#restart):hover, #ShutdownDialog > .modal-dialog > .modal-content > .modal-body > .btn-default:not(#shutdown):hover, #deleteModal > .modal-dialog > .modal-content > .modal-footer > .btn-default:hover {
background-color: hsla(0, 0%, 100%, .3)
}
@ -7286,11 +7278,6 @@ body.edituser.admin > div.container-fluid > div.row-fluid > div.col-sm-10 > div.
float: right
}
body.blur #main-nav + #scnd-nav .create-shelf, body.blur #main-nav + .col-sm-2 #scnd-nav .create-shelf {
float: none;
margin: 5px 0 10px -10px;
}
#main-nav + #scnd-nav .nav-head.hidden-xs {
display: list-item !important;
width: 225px
@ -7316,11 +7303,11 @@ body.edituser.admin > div.container-fluid > div.row-fluid > div.col-sm-10 > div.
background-color: transparent !important
}
#RestartDialog > .modal-dialog, #ShutdownDialog > .modal-dialog, #StatusDialog > .modal-dialog, #deleteModal > .modal-dialog, #cancelTaskModal > .modal-dialog {
#RestartDialog > .modal-dialog, #ShutdownDialog > .modal-dialog, #StatusDialog > .modal-dialog, #deleteModal > .modal-dialog {
max-width: calc(100vw - 40px)
}
#RestartDialog > .modal-dialog > .modal-content, #ShutdownDialog > .modal-dialog > .modal-content, #StatusDialog > .modal-dialog > .modal-content, #deleteModal > .modal-dialog > .modal-content, #cancelTaskModal > .modal-dialog > .modal-content {
#RestartDialog > .modal-dialog > .modal-content, #ShutdownDialog > .modal-dialog > .modal-content, #StatusDialog > .modal-dialog > .modal-content, #deleteModal > .modal-dialog > .modal-content {
max-width: calc(100vw - 40px);
left: 0
}
@ -7470,7 +7457,7 @@ body.edituser.admin > div.container-fluid > div.row-fluid > div.col-sm-10 > div.
padding: 30px 15px
}
#RestartDialog.in:before, #ShutdownDialog.in:before, #StatusDialog.in:before, #deleteModal.in:before, #cancelTaskModal.in:before {
#RestartDialog.in:before, #ShutdownDialog.in:before, #StatusDialog.in:before, #deleteModal.in:before {
left: auto;
right: 34px
}

@ -22,7 +22,3 @@ body.serieslist.grid-view div.container-fluid > div > div.col-sm-10::before {
padding: 0 0;
line-height: 15px;
}
input.datepicker {color: transparent}
input.datepicker:focus {color: transparent}
input.datepicker:focus + input {color: #555}

@ -1,19 +0,0 @@
.lightTheme {
background: #fff;
color: #000;
}
.darkTheme {
background: #202124;
color: #fff
}
.sepiaTheme {
background: #ece1ca;
color: #000;
}
.blackTheme {
background: #000;
color: #fff
}

@ -149,20 +149,6 @@ body {
word-wrap: break-word;
}
#mainContent > canvas {
display: block;
margin-left: auto;
margin-right: auto;
}
.long-strip > .mainImage {
margin-bottom: 4px;
}
.long-strip > .mainImage:last-child {
margin-bottom: 0px !important;
}
#titlebar {
min-height: 25px;
height: auto;

@ -0,0 +1,6 @@
<!-- This Source Code Form is subject to the terms of the Mozilla Public
- License, v. 2.0. If a copy of the MPL was not distributed with this
- file, You can obtain one at http://mozilla.org/MPL/2.0/. -->
<svg xmlns="http://www.w3.org/2000/svg" width="16" height="16" viewBox="0 0 16 16"
fill="rgba(255,255,255,1)"><path d="M8 12a1 1 0 0 1-.707-.293l-5-5a1 1 0 0 1 1.414-1.414L8
9.586l4.293-4.293a1 1 0 0 1 1.414 1.414l-5 5A1 1 0 0 1 8 12z"></path></svg>


@ -0,0 +1,5 @@
<!-- This Source Code Form is subject to the terms of the Mozilla Public
- License, v. 2.0. If a copy of the MPL was not distributed with this
- file, You can obtain one at http://mozilla.org/MPL/2.0/. -->
<svg xmlns="http://www.w3.org/2000/svg" width="16" height="16" viewBox="0 0 16 16"
fill="rgba(255,255,255,1)"><path d="M13 11a1 1 0 0 1-.707-.293L8 6.414l-4.293 4.293a1 1 0 0 1-1.414-1.414l5-5a1 1 0 0 1 1.414 0l5 5A1 1 0 0 1 13 11z"></path></svg>


Binary file not shown.

Binary file not shown.

@ -0,0 +1,16 @@
<!-- This Source Code Form is subject to the terms of the Mozilla Public
- License, v. 2.0. If a copy of the MPL was not distributed with this
- file, You can obtain one at http://mozilla.org/MPL/2.0/. -->
<svg xmlns="http://www.w3.org/2000/svg" width="16" height="16" viewBox="0 0 16
16"
fill="rgba(255,255,255,1)">
<path
d="M8 16a8 8 0 1 1 8-8 8.009 8.009 0 0 1-8 8zM8 2a6 6 0 1 0 6 6 6.006 6.006 0 0 0-6-6z">
</path>
<path
d="M8 7a1 1 0 0 0-1 1v3a1 1 0 0 0 2 0V8a1 1 0 0 0-1-1z">
</path>
<circle
cx="8" cy="5" r="1.188">
</circle>
</svg>


@ -0,0 +1,2 @@
<svg xmlns="http://www.w3.org/2000/svg" width="16" height="16"
fill="rgba(255,255,255,1)"><path d="M13 13c-.3 0-.5-.1-.7-.3L8 8.4l-4.3 4.3c-.9.9-2.3-.5-1.4-1.4l5-5c.4-.4 1-.4 1.4 0l5 5c.6.6.2 1.7-.7 1.7zm0-11H3C1.7 2 1.7 4 3 4h10c1.3 0 1.3-2 0-2z"/></svg>


@ -0,0 +1,2 @@
<svg xmlns="http://www.w3.org/2000/svg" height="16" width="16"
fill="rgba(255,255,255,1)"><path d="M15 3.7V13c0 1.5-1.53 3-3 3H7.13c-.72 0-1.63-.5-2.13-1l-5-5s.84-1 .87-1c.13-.1.33-.2.53-.2.1 0 .3.1.4.2L4 10.6V2.7c0-.6.4-1 1-1s1 .4 1 1v4.6h1V1c0-.6.4-1 1-1s1 .4 1 1v6.3h1V1.7c0-.6.4-1 1-1s1 .4 1 1v5.7h1V3.7c0-.6.4-1 1-1s1 .4 1 1z"/></svg>


@ -0,0 +1,2 @@
<svg xmlns="http://www.w3.org/2000/svg" width="16" height="16"
fill="rgba(255,255,255,1)"><path d="M8 10c-.3 0-.5-.1-.7-.3l-5-5c-.9-.9.5-2.3 1.4-1.4L8 7.6l4.3-4.3c.9-.9 2.3.5 1.4 1.4l-5 5c-.2.2-.4.3-.7.3zm5 2H3c-1.3 0-1.3 2 0 2h10c1.3 0 1.3-2 0-2z"/></svg>


@ -0,0 +1,2 @@
<svg xmlns="http://www.w3.org/2000/svg" height="16" width="16"
fill="rgba(255,255,255,1)"><path d="M1 1a1 1 0 011 1v2.4A7 7 0 118 15a7 7 0 01-4.9-2 1 1 0 011.4-1.5 5 5 0 10-1-5.5H6a1 1 0 010 2H1a1 1 0 01-1-1V2a1 1 0 011-1z"/></svg>


@ -0,0 +1,5 @@
<!-- This Source Code Form is subject to the terms of the Mozilla Public
- License, v. 2.0. If a copy of the MPL was not distributed with this
- file, You can obtain one at http://mozilla.org/MPL/2.0/. -->
<svg xmlns="http://www.w3.org/2000/svg" width="16" height="16" viewBox="0 0 16 16"
fill="rgba(255,255,255,1)"><path d="M15 1a1 1 0 0 0-1 1v2.418A6.995 6.995 0 1 0 8 15a6.954 6.954 0 0 0 4.95-2.05 1 1 0 0 0-1.414-1.414A5.019 5.019 0 1 1 12.549 6H10a1 1 0 0 0 0 2h5a1 1 0 0 0 1-1V2a1 1 0 0 0-1-1z"></path></svg>


@ -0,0 +1,2 @@
<svg xmlns="http://www.w3.org/2000/svg" height="16" width="16"
fill="rgba(255,255,255,1)"><path d="M0 4h1.5c1 0 1.5.5 1.5 1.5v5c0 1-.5 1.5-1.5 1.5H0zM9.5 4c1 0 1.5.5 1.5 1.5v5c0 1-.5 1.5-1.5 1.5h-3c-1 0-1.5-.5-1.5-1.5v-5C5 4.5 5.5 4 6.5 4zM16 4h-1.5c-1 0-1.5.5-1.5 1.5v5c0 1 .5 1.5 1.5 1.5H16z"/></svg>


@ -1 +0,0 @@
<svg xmlns="http://www.w3.org/2000/svg" height="16" width="16"><path d="M9.5 4c1 0 1.5.5 1.5 1.5v5c0 1-.5 1.5-1.5 1.5h-3c-1 0-1.5-.5-1.5-1.5v-5C5 4.5 5.5 4 6.5 4z"/></svg>


@ -0,0 +1,2 @@
<svg xmlns="http://www.w3.org/2000/svg" height="16" width="16"
fill="rgba(255,255,255,1)"><path d="M9.5 4c1 0 1.5.5 1.5 1.5v5c0 1-.5 1.5-1.5 1.5h-3c-1 0-1.5-.5-1.5-1.5v-5C5 4.5 5.5 4 6.5 4zM11 0v.5c0 1-.5 1.5-1.5 1.5h-3C5.5 2 5 1.5 5 .5V0h6zM11 16v-.5c0-1-.5-1.5-1.5-1.5h-3c-1 0-1.5.5-1.5 1.5v.5h6z"/></svg>


@ -0,0 +1,2 @@
<svg xmlns="http://www.w3.org/2000/svg" width="16" height="16"
fill="rgba(255,255,255,1)"><path d="M5.5 4c1 0 1.5.5 1.5 1.5v5c0 1-.5 1.5-1.5 1.5h-3c-1 0-1.5-.5-1.5-1.5v-5C1 4.5 1.5 4 2.5 4zM7 0v.5C7 1.5 6.5 2 5.5 2h-3C1.5 2 1 1.5 1 .5V0h6zM7 16v-.5c0-1-.5-1.5-1.5-1.5h-3c-1 0-1.5.5-1.5 1.5v.5h6zM13.5 4c1 0 1.5.5 1.5 1.5v5c0 1-.5 1.5-1.5 1.5h-3c-1 0-1.5-.5-1.5-1.5v-5c0-1 .5-1.5 1.5-1.5zM15 0v.5c0 1-.5 1.5-1.5 1.5h-3C9.5 2 9 1.5 9 .5V0h6zM15 16v-.507c0-1-.5-1.5-1.5-1.5h-3C9.5 14 9 14.5 9 15.5v.5h6z"/></svg>


@ -0,0 +1,5 @@
<!-- This Source Code Form is subject to the terms of the Mozilla Public
- License, v. 2.0. If a copy of the MPL was not distributed with this
- file, You can obtain one at http://mozilla.org/MPL/2.0/. -->
<svg xmlns="http://www.w3.org/2000/svg" width="16" height="16" viewBox="0 0 16 16"
fill="rgba(255,255,255,1)"><path d="M12.408 8.217l-8.083-6.7A.2.2 0 0 0 4 1.672V12.3a.2.2 0 0 0 .333.146l2.56-2.372 1.857 3.9A1.125 1.125 0 1 0 10.782 13L8.913 9.075l3.4-.51a.2.2 0 0 0 .095-.348z"></path></svg>


@ -0,0 +1,2 @@
<svg xmlns="http://www.w3.org/2000/svg" width="16" height="16"
fill="rgba(255,255,255,1)"><path d="M1.5 3.5C.5 3.5 0 4 0 5v6.5c0 1 .5 1.5 1.5 1.5h4c1 0 1.5-.5 1.5-1.5V5c0-1-.5-1.5-1.5-1.5zm2 1.2c.8 0 1.4.2 1.8.6.5.4.7 1 .7 1.7 0 .5-.2 1-.5 1.4-.2.3-.5.7-1 1l-.6.4c-.4.3-.6.4-.75.56-.15.14-.25.24-.35.44H6v1.3H1c0-.6.1-1.1.3-1.5.3-.6.7-1 1.5-1.6.7-.4 1.1-.8 1.28-1 .32-.3.42-.6.42-1 0-.3-.1-.6-.23-.8-.17-.2-.37-.3-.77-.3s-.7.1-.9.5c-.04.2-.1.5-.1.9H1.1c0-.6.1-1.1.3-1.5.4-.7 1.1-1.1 2.1-1.1zM10.54 3.54C9.5 3.54 9 4 9 5v6.5c0 1 .5 1.5 1.54 1.5h4c.96 0 1.46-.5 1.46-1.5V5c0-1-.5-1.46-1.5-1.46zm1.9.95c.7 0 1.3.2 1.7.5.4.4.6.8.6 1.4 0 .4-.1.8-.4 1.1-.2.2-.3.3-.5.4.1 0 .3.1.6.3.4.3.5.8.5 1.4 0 .6-.2 1.2-.6 1.6-.4.5-1.1.7-1.9.7-1 0-1.8-.3-2.2-1-.14-.29-.24-.69-.24-1.29h1.4c0 .3 0 .5.1.7.2.4.5.5 1 .5.3 0 .5-.1.7-.3.2-.2.3-.5.3-.8 0-.5-.2-.8-.6-.95-.2-.05-.5-.15-1-.15v-1c.5 0 .8-.1 1-.14.3-.1.5-.4.5-.9 0-.3-.1-.5-.2-.7-.2-.2-.4-.3-.7-.3-.3 0-.6.1-.75.3-.2.2-.2.5-.2.86h-1.34c0-.4.1-.7.19-1.1 0-.12.2-.32.4-.62.2-.2.4-.3.7-.4.3-.1.6-.1 1-.1z"/></svg>


@ -0,0 +1,2 @@
<svg xmlns="http://www.w3.org/2000/svg" height="16" width="16"
fill="rgba(255,255,255,1)"><path d="M6 3c-1 0-1.5.5-1.5 1.5v7c0 1 .5 1.5 1.5 1.5h4c1 0 1.5-.5 1.5-1.5v-7c0-1-.5-1.5-1.5-1.5z"/></svg>


@ -0,0 +1,2 @@
<svg xmlns="http://www.w3.org/2000/svg" width="16" height="16"
fill="rgba(255,255,255,1)"><path d="M10.56 3.5C9.56 3.5 9 4 9 5v6.5c0 1 .5 1.5 1.5 1.5h4c1 0 1.5-.5 1.5-1.5V5c0-1-.5-1.5-1.5-1.5zm1.93 1.2c.8 0 1.4.2 1.8.64.5.4.7 1 .7 1.7 0 .5-.2 1-.5 1.44-.2.3-.6.6-1 .93l-.6.4c-.4.3-.6.4-.7.55-.1.1-.2.2-.3.4h3.2v1.27h-5c0-.5.1-1 .3-1.43.2-.49.7-1 1.5-1.54.7-.5 1.1-.8 1.3-1.02.3-.3.4-.7.4-1.05 0-.3-.1-.6-.3-.77-.2-.2-.4-.3-.7-.3-.4 0-.7.2-.9.5-.1.2-.1.5-.2.9h-1.4c0-.6.2-1.1.3-1.5.4-.7 1.1-1.1 2-1.1zM1.54 3.5C.54 3.5 0 4 0 5v6.5c0 1 .5 1.5 1.54 1.5h4c1 0 1.5-.5 1.5-1.5V5c0-1-.5-1.5-1.5-1.5zm1.8 1.125H4.5V12H3V6.9H1.3v-1c.5 0 .8 0 .97-.03.33-.07.53-.17.73-.37.1-.2.2-.3.25-.5.05-.2.05-.3.05-.3z"/></svg>


@ -0,0 +1,2 @@
<svg xmlns="http://www.w3.org/2000/svg" height="16" width="16"
fill="rgba(255,255,255,1)"><path d="M4 16V2s0-1 1-1h6s1 0 1 1v14l-4-5z"/></svg>


@ -1 +0,0 @@
<svg xmlns="http://www.w3.org/2000/svg" height="16" width="16"><path d="m14 9h-6c-1.3 0-1.3 2 0 2h6c1.3 0 1.3-2 0-2zm-5.2-8h-3.8c-1.3 0-1.3 2 0 2h1.7zm-6.8 0c-1 0-1.3 1-0.7 1.7 0.7 0.6 1.7 0.3 1.7-0.7 0-0.5-0.4-1-1-1zm3 8c-1 0-1.3 1-0.7 1.7 0.6 0.6 1.7 0.2 1.7-0.7 0-0.5-0.4-1-1-1zm0.3-4h-0.3c-1.4 0-1.4 2 0 2h2.3zm-3.3 0c-0.9 0-1.4 1-0.7 1.7 0.7 0.6 1.7 0.2 1.7-0.7 0-0.6-0.5-1-1-1zm12 8h-9c-1.3 0-1.3 2 0 2h9c1.3 0 1.3-2 0-2zm-12 0c-1 0-1.3 1-0.7 1.7 0.7 0.6 1.7 0.2 1.7-0.712 0-0.5-0.4-1-1-1z"/><path d="m7.37 4.838 3.93-3.911v2.138h3.629v3.546h-3.629v2.138l-3.93-3.911"/></svg>


@ -0,0 +1,5 @@
<!-- This Source Code Form is subject to the terms of the Mozilla Public
- License, v. 2.0. If a copy of the MPL was not distributed with this
- file, You can obtain one at http://mozilla.org/MPL/2.0/. -->
<svg xmlns="http://www.w3.org/2000/svg" width="16" height="16" viewBox="0 0 16 16"
fill="rgba(255,255,255,1)"><path d="M14 3h-2v2h2v8H2V5h7V3h-.849L6.584 1.538A2 2 0 0 0 5.219 1H2a2 2 0 0 0-2 2v10a2 2 0 0 0 2 2h12a2 2 0 0 0 2-2V5a2 2 0 0 0-2-2zM2 3h3.219l1.072 1H2z"></path><path d="M8.146 6.146a.5.5 0 0 0 0 .707l2 2a.5.5 0 0 0 .707 0l2-2a.5.5 0 1 0-.707-.707L11 7.293V.5a.5.5 0 0 0-1 0v6.793L8.854 6.146a.5.5 0 0 0-.708 0z"></path></svg>


@ -1,24 +0,0 @@
<?xml version="1.0" encoding="iso-8859-1"?>
<!-- copied from https://www.svgrepo.com/svg/255881/text -->
<svg version="1.1" id="Layer_1" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" x="0px" y="0px"
viewBox="0 0 16 16" style="enable-background:new 0 0 16 16;" xml:space="preserve">
<g>
<g transform="scale(0.03125)">
<path d="M405.787,43.574H8.17c-4.513,0-8.17,3.658-8.17,8.17v119.83c0,4.512,3.657,8.17,8.17,8.17h32.681
c4.513,0,8.17-3.658,8.17-8.17v-24.511h95.319v119.83c0,4.512,3.657,8.17,8.17,8.17c4.513,0,8.17-3.658,8.17-8.17v-128
c0-4.512-3.657-8.17-8.17-8.17H40.851c-4.513,0-8.17,3.658-8.17,8.17v24.511H16.34V59.915h381.277v103.489h-16.34v-24.511
c0-4.512-3.657-8.17-8.17-8.17h-111.66c-4.513,0-8.17,3.658-8.17,8.17v288.681c0,4.512,3.657,8.17,8.17,8.17h57.191v16.34H95.319
v-16.34h57.191c4.513,0,8.17-3.658,8.17-8.17v-128c0-4.512-3.657-8.17-8.17-8.17c-4.513,0-8.17,3.658-8.17,8.17v119.83H87.149
c-4.513,0-8.17,3.658-8.17,8.17v32.681c0,4.512,3.657,8.17,8.17,8.17h239.66c4.513,0,8.17-3.658,8.17-8.17v-32.681
c0-4.512-3.657-8.17-8.17-8.17h-57.192v-272.34h95.319v24.511c0,4.512,3.657,8.17,8.17,8.17h32.681c4.513,0,8.17-3.658,8.17-8.17
V51.745C413.957,47.233,410.3,43.574,405.787,43.574z"/>
</g>
</g>
<g>
<g transform="scale(0.03125)">
<path d="M503.83,452.085h-24.511V59.915h24.511c4.513,0,8.17-3.658,8.17-8.17s-3.657-8.17-8.17-8.17h-65.362
c-4.513,0-8.17,3.658-8.17,8.17s3.657,8.17,8.17,8.17h24.511v392.17h-24.511c-4.513,0-8.17,3.658-8.17,8.17s3.657,8.17,8.17,8.17
h65.362c4.513,0,8.17-3.658,8.17-8.17S508.343,452.085,503.83,452.085z"/>
</g>
</g>
</svg>


@ -1,9 +0,0 @@
<?xml version='1.0' encoding='utf-8'?>
<!DOCTYPE svg PUBLIC '-//W3C//DTD SVG 1.1//EN' 'http://www.w3.org/Graphics/SVG/1.1/DTD/svg11.dtd'>
<svg version="1.1" xmlns="http://www.w3.org/2000/svg" width="16" height="16" viewBox="0 0 16 16" xmlns:xlink="http://www.w3.org/1999/xlink" enable-background="new 0 0 16 16">
<g>
<g transform="scale(0.03125)">
<path d="m455.1,137.9l-32.4,32.4-81-81.1 32.4-32.4c6.6-6.6 18.1-6.6 24.7,0l56.3,56.4c6.8,6.8 6.8,17.9 0,24.7zm-270.7,271l-81-81.1 209.4-209.7 81,81.1-209.4,209.7zm-99.7-42l60.6,60.7-84.4,23.8 23.8-84.5zm399.3-282.6l-56.3-56.4c-11-11-50.7-31.8-82.4,0l-285.3,285.5c-2.5,2.5-4.3,5.5-5.2,8.9l-43,153.1c-2,7.1 0.1,14.7 5.2,20 5.2,5.3 15.6,6.2 20,5.2l153-43.1c3.4-0.9 6.4-2.7 8.9-5.2l285.1-285.5c22.7-22.7 22.7-59.7 0-82.5z"/>
</g>
</g>
</svg>


@ -0,0 +1 @@
<svg width="16" height="16" xmlns="http://www.w3.org/2000/svg" fill="rgba(255,255,255,1)"><path d="M8 11a1 1 0 01-.707-.293l-2.99-2.99c-.91-.942.471-2.324 1.414-1.414L8 8.586l2.283-2.283c.943-.91 2.324.472 1.414 1.414l-2.99 2.99A1 1 0 018 11z"/></svg>


@ -0,0 +1,5 @@
<!-- This Source Code Form is subject to the terms of the Mozilla Public
- License, v. 2.0. If a copy of the MPL was not distributed with this
- file, You can obtain one at http://mozilla.org/MPL/2.0/. -->
<svg xmlns="http://www.w3.org/2000/svg" width="16" height="16" viewBox="0 0 16 16"
fill="rgba(255,255,255,1)"><path d="M14.859 3.2a1.335 1.335 0 0 1-1.217.8H13v1h1v8H2V5h8V4h-.642a1.365 1.365 0 0 1-1.325-1.11L6.584 1.538A2 2 0 0 0 5.219 1H2a2 2 0 0 0-2 2v10a2 2 0 0 0 2 2h12a2 2 0 0 0 2-2V5a2 2 0 0 0-1.141-1.8zM2 3h3.219l1.072 1H2zm7.854-.146L11 1.707V8.5a.5.5 0 0 0 1 0V1.707l1.146 1.146a.5.5 0 1 0 .707-.707l-2-2a.5.5 0 0 0-.707 0l-2 2a.5.5 0 0 0 .707.707z"></path></svg>


@ -0,0 +1,8 @@
<!-- This Source Code Form is subject to the terms of the Mozilla Public
- License, v. 2.0. If a copy of the MPL was not distributed with this
- file, You can obtain one at http://mozilla.org/MPL/2.0/. -->
<svg xmlns="http://www.w3.org/2000/svg" width="16" height="16" viewBox="0 0 16
16"
fill="rgba(255,255,255,1)"><path transform='rotate(90) translate(0, -16)'
d="M15.707 7.293l-6-6a1 1 0 0 0-1.414 1.414L12.586 7H1a1 1 0 0 0 0 2h11.586l-4.293
4.293a1 1 0 1 0 1.414 1.414l6-6a1 1 0 0 0 0-1.414z"></path></svg>

After

Width:  |  Height:  |  Size: 517 B

@ -0,0 +1,13 @@
<!-- This Source Code Form is subject to the terms of the Mozilla Public
- License, v. 2.0. If a copy of the MPL was not distributed with this
- file, You can obtain one at http://mozilla.org/MPL/2.0/. -->
<svg xmlns="http://www.w3.org/2000/svg" width="16" height="16" viewBox="0 0 16
16"
fill="rgba(255,255,255,1)">
<path
transform='rotate(90) translate(0, -16)'
d="M15 7H3.414l4.293-4.293a1 1 0 0
0-1.414-1.414l-6 6a1 1 0 0 0 0 1.414l6 6a1 1 0 0 0 1.414-1.414L3.414 9H15a1 1 0 0
0 0-2z">
</path>
</svg>


Some files were not shown because too many files have changed in this diff.