The PTS received from MediaCodec are expressed relative to an arbitrary
clock origin. We consider the PTS of the first frame to be 0, and the
PTS of every other frame is relative to this first one (note that the
PTS is only used for recording; it is ignored for mirroring).
For simplicity, this relative PTS was computed on the server side.
To prepare support for multiple streams (video and audio), send each
packet with its original PTS, and handle the PTS offset on the client
side (in the recorder).
Since we can't know in advance which stream will produce the first
packet with the lowest PTS (a packet received later on one stream may
have a PTS lower than a packet received earlier on another stream),
computing the relative PTS on the server side would require unnecessary
waiting.
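A minimal sketch of the client-side offset logic (the function and
field names are hypothetical, not scrcpy's actual ones):

    #include <libavcodec/avcodec.h>

    struct recorder {
        int64_t pts_origin; // AV_NOPTS_VALUE until the first packet
        // ...
    };

    static void
    recorder_offset_pts(struct recorder *recorder, AVPacket *packet) {
        if (recorder->pts_origin == AV_NOPTS_VALUE) {
            // the first packet received, on any stream, defines the
            // clock origin
            recorder->pts_origin = packet->pts;
        }
        packet->pts -= recorder->pts_origin;
        packet->dts = packet->pts;
    }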
Prefix thread names with "scrcpy-". This improves readability in the
output of `top -H`, for example.
Limit thread names to 16 bytes, because some platforms enforce this
limit.
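A sketch with SDL threads, which the client uses (the thread function
and name below are illustrative):

    #include <SDL2/SDL_thread.h>

    static int
    run_stream(void *data) {
        (void) data;
        // ... network loop ...
        return 0;
    }

    int
    main(int argc, char *argv[]) {
        (void) argc;
        (void) argv;
        // "scrcpy-stream" is 13 chars + '\0', within the 16-byte limit
        SDL_Thread *thread =
            SDL_CreateThread(run_stream, "scrcpy-stream", NULL);
        SDL_WaitThread(thread, NULL);
        return 0;
    }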
From FFmpeg/doc/APIchanges:

    2021-03-17 - f7db77bd87 - lavc 58.133.100 - codec.h
      Deprecated av_init_packet(). Once removed, sizeof(AVPacket) will
      no longer be a part of the public ABI.
Refs #2302 <https://github.com/Genymobile/scrcpy/issues/2302>
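The documented replacement is heap allocation via av_packet_alloc()
and av_packet_free(); a minimal sketch:

    #include <stdbool.h>
    #include <libavcodec/avcodec.h>

    // Before (deprecated):
    //     AVPacket packet;
    //     av_init_packet(&packet);
    //
    // After: the packet is heap-allocated, so callers no longer depend
    // on sizeof(AVPacket).
    static bool
    demo(void) {
        AVPacket *packet = av_packet_alloc();
        if (!packet) {
            return false; // allocation failure
        }
        // ... fill and use the packet ...
        av_packet_free(&packet); // unreferences, frees, sets to NULL
        return true;
    }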
The fact that the recorder uses a separate thread is an internal detail,
so the functions _start(), _stop() and _join() should not be exposed.
Instead, start the thread in _open(), and stop and join it in _close().
This paves the way to expose the recorder as a packet sink trait.
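A sketch of the resulting interface (the struct fields and names are
illustrative):

    #include <stdbool.h>
    #include <SDL2/SDL_thread.h>

    struct recorder {
        SDL_Thread *thread;
        SDL_mutex *mutex;
        SDL_cond *queue_cond;
        bool stopped;
        // ...
    };

    static int
    run_recorder(void *data) {
        // drain the packet queue and write to the file until stopped
        (void) data;
        return 0;
    }

    static bool
    recorder_open(struct recorder *recorder) {
        // ... open the target file ...
        // the thread is started as a side effect of opening
        recorder->thread =
            SDL_CreateThread(run_recorder, "scrcpy-recorder", recorder);
        return recorder->thread != NULL;
    }

    static void
    recorder_close(struct recorder *recorder) {
        // request the thread to stop, then join it
        SDL_LockMutex(recorder->mutex);
        recorder->stopped = true;
        SDL_CondSignal(recorder->queue_cond);
        SDL_UnlockMutex(recorder->mutex);
        SDL_WaitThread(recorder->thread, NULL);
        // ... close the target file ...
    }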
The functions SDL_malloc(), SDL_free() and SDL_strdup() were used only
because strdup() was not available everywhere.
Now that it is available, use the native versions of these functions.
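A trivial illustration of the substitution:

    #include <stdlib.h>
    #include <string.h>

    static char *
    copy_name(const char *name) {
        // Before: SDL_strdup(name), released with SDL_free()
        // After: the native version, released with free()
        return strdup(name);
    }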
The header scrcpy.h is intended to be the "public" API. It should not
depend on other internal headers.
Therefore, declare all the required structs directly in this header and
adapt the internal code accordingly.
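A sketch of what the self-contained header looks like (the fields shown
are illustrative, not the full option set):

    // scrcpy.h
    #ifndef SCRCPY_H
    #define SCRCPY_H

    #include <stdbool.h>
    #include <stdint.h>

    // declared here rather than in an internal header
    struct scrcpy_options {
        const char *serial;
        const char *record_filename;
        uint16_t max_size;
        uint32_t bit_rate;
        // ...
    };

    bool
    scrcpy(const struct scrcpy_options *options);

    #endif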
The record file was written from the stream thread. As a consequence,
any blocking I/O to write the file delayed the decoder.
For maximum performance even when recording is enabled, send
(refcounted) packets to a separate recording thread.
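A sketch of the handoff (the queue API is hypothetical): the stream
thread only takes a reference on the packet and queues it, so any
blocking write happens on the recording thread:

    #include <stdbool.h>
    #include <SDL2/SDL_thread.h>
    #include <libavcodec/avcodec.h>

    struct recorder {
        SDL_mutex *mutex;
        SDL_cond *queue_cond;
        // ... a FIFO of AVPacket* (hypothetical) ...
    };

    bool packet_queue_push(struct recorder *recorder, AVPacket *packet);

    // called from the stream thread: never performs file I/O
    static bool
    recorder_push(struct recorder *recorder, const AVPacket *packet) {
        AVPacket *rec = av_packet_alloc();
        if (!rec) {
            return false;
        }
        // av_packet_ref() increments the refcount of the underlying
        // buffer instead of copying the payload
        if (av_packet_ref(rec, packet)) {
            av_packet_free(&rec);
            return false;
        }

        SDL_LockMutex(recorder->mutex);
        bool ok = packet_queue_push(recorder, rec); // hypothetical FIFO
        SDL_CondSignal(recorder->queue_cond);
        SDL_UnlockMutex(recorder->mutex);
        return ok;
    }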
To packetize the raw H.264 stream, av_parser_parse2() (called by
av_read_frame()) knows that it has received a complete frame only once
it has received some data for the next frame. As a consequence, the
client always waited for the next frame before sending the current one
to the decoder!
On the device side, we know the packet boundaries. To reduce latency,
make the device always transmit the "frame meta" so that the client can
packetize the stream manually (this was already implemented to send the
PTS, but it was only enabled during recording).
On the client side, replace av_read_frame() with manual packetizing and
parsing.
<https://stackoverflow.com/questions/50682518/replacing-av-read-frame-to-reduce-delay>
<https://trac.ffmpeg.org/ticket/3354>
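A sketch of the client-side packetizing, assuming a 12-byte frame meta
header (8-byte PTS, then 4-byte packet length, big-endian); the net and
buffer helpers are stand-ins for scrcpy's equivalents:

    #include <stdbool.h>
    #include <stdint.h>
    #include <sys/types.h>
    #include <libavcodec/avcodec.h>

    // stand-ins for scrcpy's helpers (names are illustrative)
    ssize_t net_recv_all(int socket, void *buf, size_t len);
    uint32_t buffer_read32be(const uint8_t *buf);
    uint64_t buffer_read64be(const uint8_t *buf);

    #define HEADER_SIZE 12

    static bool
    stream_recv_packet(int socket, AVPacket *packet) {
        uint8_t header[HEADER_SIZE];
        if (net_recv_all(socket, header, HEADER_SIZE) < HEADER_SIZE) {
            return false; // EOF or error
        }

        uint64_t pts = buffer_read64be(header);
        uint32_t len = buffer_read32be(&header[8]);

        if (av_new_packet(packet, len)) {
            return false;
        }
        if (net_recv_all(socket, packet->data, len) < (ssize_t) len) {
            av_packet_unref(packet);
            return false;
        }

        packet->pts = pts;
        // The packet is already complete: it may be passed to
        // av_parser_parse2() for metadata and then straight to the
        // decoder, without waiting for data from the next frame.
        return true;
    }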
Limit source code lines to 80 characters, and declare function return
types and modifiers on a separate line.
This avoids very long lines, and all function names are aligned.
(We do this on VLC, and I like it.)
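For example (hypothetical declarations):

    // 80 columns max; return type and modifiers on their own line, so
    // every function name starts at column 0
    static bool
    recorder_open(struct recorder *recorder, const char *filename);

    static void
    recorder_close(struct recorder *recorder);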