![Piper logo](etc/logo.png)

A fast, local neural text to speech system that sounds great and is optimized for the Raspberry Pi 4.
Piper is used in a [variety of projects](#people-using-piper).
``` sh
echo 'Welcome to the world of speech synthesis!' | \
./piper --model en_US-lessac-medium.onnx --output_file welcome.wav
```
[Listen to voice samples](https://rhasspy.github.io/piper-samples) and check out a [video tutorial by Thorsten Müller](https://youtu.be/rjq5eZoWWSo).
Voices are trained with [VITS](https://github.com/jaywalnut310/vits/) and exported to the [onnxruntime](https://onnxruntime.ai/).
This is a project of the [Open Home Foundation](https://www.openhomefoundation.org/).
## Voices
Our goal is to support Home Assistant and the [Year of Voice](https://www.home-assistant.io/blog/2022/12/20/year-of-voice/).

[Download voices](VOICES.md) for the supported languages:
* Arabic (ar_JO)
* Catalan (ca_ES)
* Czech (cs_CZ)
* Danish (da_DK)
* German (de_DE)
* Greek (el_GR)
* English (en_GB, en_US)
* Spanish (es_ES, es_MX)
* Finnish (fi_FI)
* French (fr_FR)
* Hungarian (hu_HU)
* Icelandic (is_IS)
* Italian (it_IT)
* Georgian (ka_GE)
* Kazakh (kk_KZ)
* Luxembourgish (lb_LU)
* Nepali (ne_NP)
* Dutch (nl_BE, nl_NL)
* Norwegian (no_NO)
* Polish (pl_PL)
* Portuguese (pt_BR, pt_PT)
* Romanian (ro_RO)
* Russian (ru_RU)
* Serbian (sr_RS)
* Swedish (sv_SE)
* Swahili (sw_CD)
* Turkish (tr_TR)
* Ukrainian (uk_UA)
* Vietnamese (vi_VN)
* Chinese (zh_CN)

You will need two files per voice:
1. A `.onnx` model file, such as [`en_US-lessac-medium.onnx`](https://huggingface.co/rhasspy/piper-voices/resolve/v1.0.0/en/en_US/lessac/medium/en_US-lessac-medium.onnx)
2. A `.onnx.json` config file, such as [`en_US-lessac-medium.onnx.json`](https://huggingface.co/rhasspy/piper-voices/resolve/v1.0.0/en/en_US/lessac/medium/en_US-lessac-medium.onnx.json)

The `MODEL_CARD` file for each voice contains important licensing information. Piper is intended for text to speech research, and does not impose any additional restrictions on voice models. Some voices may have restrictive licenses, however, so please review them carefully!
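Both files for a voice can be fetched directly from the Hugging Face URLs above. A minimal sketch for the `en_US-lessac-medium` voice (adjust the path components for other voices):

``` sh
# Download the model and its config next to each other;
# piper expects the .onnx.json file beside the .onnx file.
BASE="https://huggingface.co/rhasspy/piper-voices/resolve/v1.0.0/en/en_US/lessac/medium"
VOICE="en_US-lessac-medium"

curl -L -O "${BASE}/${VOICE}.onnx"
curl -L -O "${BASE}/${VOICE}.onnx.json"
```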
## Installation
You can [run Piper with Python](#running-in-python) or download a binary release:
* [amd64](https://github.com/rhasspy/piper/releases/download/v1.2.0/piper_amd64.tar.gz) (64-bit desktop Linux)
* [arm64](https://github.com/rhasspy/piper/releases/download/v1.2.0/piper_arm64.tar.gz) (64-bit Raspberry Pi 4)
* [armv7](https://github.com/rhasspy/piper/releases/download/v1.2.0/piper_armv7.tar.gz) (32-bit Raspberry Pi 3/4)

If you want to build from source, see the [Makefile](Makefile) and [C++ source](src/cpp).
You must download and extract [piper-phonemize](https://github.com/rhasspy/piper-phonemize) to `lib/Linux-$(uname -m)/piper_phonemize` before building.
For example, `lib/Linux-x86_64/piper_phonemize/lib/libpiper_phonemize.so` should exist for AMD/Intel machines (as well as everything else from `libpiper_phonemize-amd64.tar.gz`).
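A sketch of the setup, assuming an x86_64 machine and that `libpiper_phonemize-amd64.tar.gz` has already been downloaded from the piper-phonemize releases (the exact tarball layout may differ; verify that the `.so` ends up at the path shown):

``` sh
# Extract piper-phonemize into the per-architecture lib directory,
# then build piper itself via the repository Makefile.
mkdir -p "lib/Linux-$(uname -m)"
tar -xzf libpiper_phonemize-amd64.tar.gz -C "lib/Linux-$(uname -m)/"
# lib/Linux-x86_64/piper_phonemize/lib/libpiper_phonemize.so should now exist
make
```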
## Usage
1. [Download a voice](#voices) and extract the `.onnx` and `.onnx.json` files
2. Run the `piper` binary with text on standard input, `--model /path/to/your-voice.onnx`, and `--output_file output.wav`

For example:
``` sh
echo 'Welcome to the world of speech synthesis!' | \
./piper --model en_US-lessac-medium.onnx --output_file welcome.wav
```
For multi-speaker models, use `--speaker <number>` to change speakers (default: 0).
See `piper --help` for more options.
### Streaming Audio
Piper can stream raw audio to stdout as it's produced:
``` sh
echo 'This sentence is spoken first. This sentence is synthesized while the first sentence is spoken.' | \
./piper --model en_US-lessac-medium.onnx --output-raw | \
aplay -r 22050 -f S16_LE -t raw -
```
This is **raw** audio and not a WAV file, so make sure your audio player is set to play 16-bit mono PCM samples at the correct sample rate for the voice.
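The sample rate lives in the voice's `.onnx.json` config. Assuming the `audio.sample_rate` field used by the published voices, a sketch that reads it and passes it to the player instead of hard-coding `22050`:

``` sh
# Pull the sample rate out of the voice config, then stream at that rate.
# Field name audio.sample_rate is an assumption based on published voices.
RATE=$(python3 -c "import json,sys; print(json.load(open(sys.argv[1]))['audio']['sample_rate'])" \
  en_US-lessac-medium.onnx.json)
echo 'Streaming at the configured rate.' | \
./piper --model en_US-lessac-medium.onnx --output-raw | \
aplay -r "$RATE" -f S16_LE -t raw -
```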
### JSON Input
The `piper` executable can accept JSON input when using the `--json-input` flag. Each line of input must be a JSON object with a `text` field. For example:
``` json
{ "text": "First sentence to speak." }
{ "text": "Second sentence to speak." }
```
Optional fields include:
* `speaker` - string
* Name of the speaker to use from `speaker_id_map` in config (multi-speaker voices only)
* `speaker_id` - number
  * Id of the speaker to use, from 0 to the number of speakers minus 1 (multi-speaker voices only; overrides `speaker`)
* `output_file` - string
* Path to output WAV file

The following example writes two sentences with different speakers to different files:
``` json
{ "text": "First speaker.", "speaker_id": 0, "output_file": "/tmp/speaker_0.wav" }
{ "text": "Second speaker.", "speaker_id": 1, "output_file": "/tmp/speaker_1.wav" }
```
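Assuming the `piper` binary and the voice files from the usage section above, the two JSON lines can be generated and piped in one step:

``` sh
# Emit one JSON object per line and feed them to piper in --json-input mode.
printf '%s\n' \
  '{ "text": "First speaker.", "speaker_id": 0, "output_file": "/tmp/speaker_0.wav" }' \
  '{ "text": "Second speaker.", "speaker_id": 1, "output_file": "/tmp/speaker_1.wav" }' | \
./piper --model en_US-lessac-medium.onnx --json-input
```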
## People using Piper
Piper has been used in the following projects/papers:
* [Home Assistant](https://github.com/home-assistant/addons/blob/master/piper/README.md)
* [Rhasspy 3](https://github.com/rhasspy/rhasspy3/)
* [NVDA - NonVisual Desktop Access](https://www.nvaccess.org/post/in-process-8th-may-2023/#voices)
* [Image Captioning for the Visually Impaired and Blind: A Recipe for Low-Resource Languages](https://www.techrxiv.org/articles/preprint/Image_Captioning_for_the_Visually_Impaired_and_Blind_A_Recipe_for_Low-Resource_Languages/22133894)
* [Open Voice Operating System](https://github.com/OpenVoiceOS/ovos-tts-plugin-piper)
* [JetsonGPT](https://github.com/shahizat/jetsonGPT)
* [LocalAI](https://github.com/go-skynet/LocalAI)
* [Lernstick EDU / EXAM: reading clipboard content aloud with language detection](https://lernstick.ch/)
* [Natural Speech - A plugin for Runelite, an OSRS Client](https://github.com/phyce/rl-natural-speech)
* [mintPiper](https://github.com/evuraan/mintPiper)
## Training
See the [training guide](TRAINING.md) and the [source code](src/python).

Pretrained checkpoints are available on [Hugging Face](https://huggingface.co/datasets/rhasspy/piper-checkpoints/tree/main).
## Running in Python
See [src/python_run](src/python_run).
Install with `pip`:
``` sh
pip install piper-tts
```
and then run:
``` sh
echo 'Welcome to the world of speech synthesis!' | piper \
--model en_US-lessac-medium \
--output_file welcome.wav
```
This will automatically download [voice files](https://huggingface.co/rhasspy/piper-voices/tree/v1.0.0) the first time they're used. Use `--data-dir` and `--download-dir` to adjust where voices are found/downloaded.
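For example, to keep all downloaded voices in one dedicated directory (the path here is an example, not a default):

``` sh
# Use a single directory both for finding voices and for new downloads.
VOICES_DIR="$HOME/piper-voices"
mkdir -p "$VOICES_DIR"

echo 'Voices are cached in one place.' | piper \
  --model en_US-lessac-medium \
  --data-dir "$VOICES_DIR" \
  --download-dir "$VOICES_DIR" \
  --output_file welcome.wav
```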

If you'd like to use a GPU, install the `onnxruntime-gpu` package:
``` sh
.venv/bin/pip3 install onnxruntime-gpu
```
and then run `piper` with the `--cuda` argument. You will need to have a functioning CUDA environment, such as what's available in [NVIDIA's PyTorch containers](https://catalog.ngc.nvidia.com/orgs/nvidia/containers/pytorch).