Some server parameters may depend on one another. For example,
audio_bit_rate is meaningless if audio is false.
But it would be inconsistent to disable some parameters based on such
dependency checks and not others, and handling all dependencies between
parameters would add too much complexity for no benefit.
So just pass individual parameters independently.
PR #3978 <https://github.com/Genymobile/scrcpy/pull/3978>
By default, SDL creates an OpenGL 2.1 context on macOS for an OpenGL
renderer. As a consequence, mipmapping is not supported.
Force the use of a core profile context to get a higher version.
Before:
INFO: Renderer: opengl
INFO: OpenGL version: 2.1 NVIDIA-14.0.32 355.11.11.10.10.143
WARN: Trilinear filtering disabled (OpenGL 3.0+ or ES 2.0+ required)
After:
INFO: Renderer: opengl
DEBUG: Creating OpenGL Core Profile context
INFO: OpenGL version: 4.1 NVIDIA-14.0.32 355.11.11.10.10.143
INFO: Trilinear filtering enabled
when running with:
scrcpy --verbosity=debug --render-driver=opengl
Note: since SDL_CreateRenderer() causes a fallback to OpenGL 2.1, the
profile and version attributes must be set, and the context created,
_after_ that call.
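A minimal sketch of the resulting order (not the exact scrcpy code):

    SDL_Renderer *renderer = SDL_CreateRenderer(window, -1, 0);
    // SDL_CreateRenderer() would otherwise fall back to OpenGL 2.1, so
    // request the core profile and create the context only afterwards.
    SDL_GL_SetAttribute(SDL_GL_CONTEXT_PROFILE_MASK,
                        SDL_GL_CONTEXT_PROFILE_CORE);
    SDL_GL_SetAttribute(SDL_GL_CONTEXT_MAJOR_VERSION, 3);
    SDL_GL_SetAttribute(SDL_GL_CONTEXT_MINOR_VERSION, 2);
    SDL_GLContext context = SDL_GL_CreateContext(window);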
PR #3895 <https://github.com/Genymobile/scrcpy/pull/3895>
Signed-off-by: Romain Vimont <rom@rom1v.com>
If a line did not end with '\r', then the final '\n' was replaced by
'\0' for parsing the current line. This '\0' was then mistakenly
considered as the end of the whole "ip route" output, so the remaining
lines were not parsed, causing "scrcpy --tcpip" to fail in some cases.
To fix the issue, read the final character of the current line before it
is (possibly) overwritten by '\0'.
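A minimal sketch of the pattern (hypothetical parsing loop, not the
actual parser):

    // Decide whether this is the last line *before* overwriting the '\n'.
    char *eol = strchr(line, '\n');
    bool is_last_line = !eol || eol[1] == '\0';
    if (eol) {
        *eol = '\0'; // terminate the current line for parsing
    }
    parse_line(line); // hypothetical helper
    // continue with the next line unless is_last_line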
The slope encodes the drift between the device clock and the computer
clock. Its real value is expected to be very close to 1.
To estimate it, just assume it is exactly 1.
Since the clock is used to estimate very close points in the future, the
error caused by clock drift is totally negligible, and in practice it is
way lower than the slope estimation error.
Therefore, only estimate the offset.
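A minimal sketch of the simplified estimation (names and smoothing
factor are illustrative):

    // With the slope fixed to 1, the clock reduces to a single offset:
    //     computer_time = device_time + offset
    int64_t instant_offset = now_computer - pts_device;
    clock->offset = (3 * clock->offset + instant_offset) / 4; // smoothed
    // A point slightly in the future is then estimated as:
    //     estimated_computer_time = device_pts + clock->offset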
On some systems, the SDL audio callback is not called frequently enough
(for example it requests 5ms of samples every 10ms), because the output
buffer is too small.
By default, we want to use a small value (5ms) to minimize latency and
buffer underrun, but if it does not work well, users need a way to
increase it.
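A minimal sketch of how such an option could map to the SDL output
buffer size (the option value and the callback name are placeholders):

    SDL_AudioSpec desired = {
        .freq = 48000,
        .format = AUDIO_F32,
        .channels = 2,
        // e.g. 5 ms at 48 kHz -> 240 samples; a larger value reduces
        // underruns at the cost of latency
        .samples = 48000 * audio_output_buffer_ms / 1000,
        .callback = sdl_audio_callback,
    };
    SDL_AudioDeviceID device =
        SDL_OpenAudioDevice(NULL, 0, &desired, NULL, 0);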
Refs #3793 <https://github.com/Genymobile/scrcpy/issues/3793>
An int was compared with an unsigned:
../app/src/audio_player.c:290:27: warning: comparison of integers of
different signs: 'int' and 'unsigned int' [-Wsign-compare]
if (abs(diff) < ap->sample_rate / 1000) {
~~~~~~~~~ ^ ~~~~~~~~~~~~~~~~~~~~~~
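One possible fix is to make both operands signed, for example:

    if (abs(diff) < (int) ap->sample_rate / 1000) {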
In C, a label can only be followed by a statement, not a declaration.
Code in `app/src/screen.c` violated this and led to a build error
with an error message similar to the one below:
../app/src/screen.c:821:13: error: expected expression
bool ok = sc_screen_init_size(screen);
^
/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/lib/clang/13.0.0/include/stdbool.h:15:14: note: expanded from macro 'bool'
#define bool _Bool
^
../app/src/screen.c:822:18: error: use of undeclared identifier 'ok'
if (!ok) {
^
2 errors generated.
This is fixed by introducing a new block (a compound statement, as is
already done in the next `case`), which is a statement.
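A sketch of the fix (the event name is illustrative):

    case EVENT_EXAMPLE: {
        // The braces introduce a compound statement (which is a
        // statement), so the declaration no longer directly follows
        // the label.
        bool ok = sc_screen_init_size(screen);
        if (!ok) {
            // handle the error
        }
        break;
    }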
Fixes #3785 <https://github.com/Genymobile/scrcpy/issues/3785>
PR #3787 <https://github.com/Genymobile/scrcpy/pull/3787>
Signed-off-by: Ruoyu Zhong <zhongruoyu@outlook.com>
Signed-off-by: Romain Vimont <rom@rom1v.com>
On buffer underflow, the average buffering must be updated, but it is
intended to be accessed only from the receiver thread.
Make the player and the receiver thread communicate the underflow via a
new field (ap->underflow).
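A minimal sketch of the idea (the helper is hypothetical; ap->underflow
would be an atomic counter from <stdatomic.h>):

    // In the SDL audio callback (SDL audio thread): only record the
    // underflow.
    atomic_fetch_add(&ap->underflow, missing_samples);

    // In the receiver thread: consume the recorded value and update the
    // average buffering from a single thread.
    uint32_t underflow = atomic_exchange(&ap->underflow, 0);
    if (underflow) {
        update_average_buffering(ap, underflow); // hypothetical helper
    }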
On initial connection, scrcpy sent some device metadata:
- the device name (to be used as window title)
- the initial video size (before any frame or even SPS/PPS)
But it is better to provide the initial video size as part of the
stream, so that it can be demuxed and exposed via AVCodecContext to
sinks.
This avoids passing an explicit "initial frame size" to the screen, the
recorder and the v4l2 sink.
Previously, the packet sink push() implementation just set the codec and
notified a wait condition. Then the recorder thread read the codec and
created the AVStream.
But this was racy: an AVPacket could be pushed before the creation of the
AVStream, causing its video_stream_index or audio_stream_index to be
initialized to -1.
Also, in the future, the AVStream initialization might need data
provided by the packet sink open(), so initialize it there (with a
mutex).
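A minimal sketch of the new open() flow (names simplified; the codec is
the one provided by the packet sink open()):

    // The AVStream is created immediately, under the mutex, so no packet
    // can be processed while the stream index is still -1.
    sc_mutex_lock(&recorder->mutex);
    AVStream *stream = avformat_new_stream(recorder->ctx, codec);
    if (!stream) {
        sc_mutex_unlock(&recorder->mutex);
        return false;
    }
    recorder->video_stream_index = stream->index;
    sc_mutex_unlock(&recorder->mutex);
    return true;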
The sc_file_pusher is lazy-initialized, but it was stopped and joined in
all cases (accessing uninitialized values).
Detected by poisoning the struct scrcpy instance with ASAN enabled.
All server logs were printed to stdout, while all client logs were
printed to stderr.
Instead, use stderr for warnings and errors, stdout for the others:
- stdout: verbose, debug, info
- stderr: warn, error
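A minimal sketch of the dispatch (level names and the string helper are
illustrative):

    // Warnings and errors go to stderr, everything else to stdout.
    FILE *out = (level == LOG_LEVEL_WARN || level == LOG_LEVEL_ERROR)
              ? stderr : stdout;
    fprintf(out, "%s: %s\n", level_to_string(level), message);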
Expose an option to add a buffering delay (in milliseconds) before
playing audio.
This is similar to the options --display-buffer and --v4l2-buffer for
video frames.
PR #3757 <https://github.com/Genymobile/scrcpy/pull/3757>
Play the decoded audio using SDL.
The audio player frame sink receives the audio frames, resamples them
and writes them to a byte buffer (introduced by this commit).
On SDL audio callback (from an internal SDL thread), copy samples from
this byte buffer to the SDL audio buffer.
The byte buffer is protected by SDL_LockAudioDevice(), but it has been
designed so that the producer and the consumer may write and read in
parallel, provided that they don't access the same slices of the ring
buffer.
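A minimal sketch of the consumer side (the byte buffer API is
hypothetical):

    // Runs on SDL's internal audio thread, with the audio device lock
    // held.
    static void SDLCALL
    sdl_audio_callback(void *userdata, Uint8 *stream, int len) {
        struct sc_audio_player *ap = userdata;
        size_t read = sc_bytebuf_read(ap->buf, stream, len);
        if (read < (size_t) len) {
            // Underflow: complete with silence.
            memset(stream + read, 0, len - read);
        }
    }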
PR #3757 <https://github.com/Genymobile/scrcpy/pull/3757>
Co-authored-by: Simon Chan <1330321+yume-chan@users.noreply.github.com>
A delay buffer delayed all the frames except the first one, to open the
scrcpy window immediately and get a picture.
Make this feature optional, so that the delay buffer might also be used
for audio (especially for simulating a high delay for debugging).
PR #3757 <https://github.com/Genymobile/scrcpy/pull/3757>
For clarity, the fields used only when a delay was set were wrapped in
an anonymous structure.
Now that the delay buffer has been extracted to a separate component,
the delay is necessarily set (it cannot be 0), so the fields are always
used.
PR #3757 <https://github.com/Genymobile/scrcpy/pull/3757>
The components needing delayed frames (sc_screen and sc_v4l2_sink)
managed a sc_video_buffer instance, which itself embedded a
sc_frame_buffer instance (to keep only the most recent frame).
In theory, these components should not be aware of delaying: they should
just receive AVFrames later, and only handle a sc_frame_buffer.
Therefore, refactor sc_delay_buffer as a frame sink (it consumes frames)
and a frame source (it produces frames, after some delay), and plug an
instance into the pipeline only when a delay is requested.
This also removes the need for a specific sc_video_buffer.
PR #3757 <https://github.com/Genymobile/scrcpy/pull/3757>
There was a frame sink trait, implemented by components able to receive
AVFrames, but each frame source had to manually send frames to sinks.
In order to mutualise sink management, add a frame source trait.
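A minimal sketch of the trait (simplified from the actual structures;
MAX_SINKS is illustrative):

    struct sc_frame_source {
        struct sc_frame_sink *sinks[MAX_SINKS];
        unsigned sink_count;
    };

    // Shared by all frame sources instead of being reimplemented by each.
    static bool
    sc_frame_source_sinks_push(struct sc_frame_source *source,
                               const AVFrame *frame) {
        for (unsigned i = 0; i < source->sink_count; ++i) {
            struct sc_frame_sink *sink = source->sinks[i];
            if (!sink->ops->push(sink, frame)) {
                return false;
            }
        }
        return true;
    }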
There was a packet sink trait, implemented by components able to
receive AVPackets, but each packet source had to manually send packets
to sinks.
In order to mutualise sink management, add a packet source trait.
A video buffer had 2 responsibilities:
- handle the frame delaying mechanism (queuing frames and pushing them
after the expected delay);
- keep only the most recent frame (using a sc_frame_buffer).
In order to be able to reuse only the frame delaying mechanism, extract
it to a separate component, sc_delay_buffer.
The video_buffer thread clears the queue once it is stopped, but new
frames might still be pushed asynchronously.
To avoid the problem, do not push any frame once the video_buffer is
stopped.
The packets queued for buffering were wrapped in a dynamically allocated
structure with a "next" field.
To avoid this additional layer of allocation and indirection, use a
VecDeque.
The packets queued for recording were wrapped in a dynamically allocated
structure with a "next" field.
To avoid this additional layer of allocation and indirection, use a
VecDeque.
Since in scrcpy a video packet passed to avcodec_send_packet() is always
a complete video frame, it is sufficient to call avcodec_receive_frame()
exactly once.
In practice, it also works for audio packets: the decoder produces
exactly 1 frame for 1 input packet.
In theory, it is an implementation detail though, so
avcodec_receive_frame() should be called in a loop.
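A minimal sketch of the API-correct loop (the sink push is hypothetical):

    int ret = avcodec_send_packet(codec_ctx, packet);
    if (ret < 0) {
        return false;
    }
    for (;;) {
        ret = avcodec_receive_frame(codec_ctx, frame);
        if (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF) {
            break; // no more frames for this packet
        }
        if (ret < 0) {
            return false;
        }
        push_frame_to_sinks(frame); // hypothetical helper
        av_frame_unref(frame);
    }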
By default, scrcpy mirrors only the video when audio capture fails on
the device. Add an option to force scrcpy to fail if audio is enabled
but does not work.
PR #3757 <https://github.com/Genymobile/scrcpy/pull/3757>
If there is exactly one producer, then it can assume that the remaining
space in the buffer will only increase until it writes something.
This assumption may allow the producer to write to the buffer (up to a
known safe size) without any synchronization mechanism, thus allowing
different parts of the buffer to be read and written in parallel.
The producer can then commit the write with a lock held, and update its
knowledge of the safe empty remaining space.
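A minimal sketch of the producer side (the ring buffer fields are
hypothetical; lock()/unlock() stand for whatever protects the buffer,
here the SDL audio device lock):

    // r->known_empty may only grow until this producer writes, so it is
    // a safe bound even without holding the lock.
    assert(len <= r->known_empty);

    // Copy into the ring without the lock (in two chunks if it wraps).
    size_t first = len < r->capacity - r->head ? len
                                               : r->capacity - r->head;
    memcpy(r->data + r->head, data, first);
    memcpy(r->data, data + first, len - first);

    // Commit and refresh the knowledge of the free space, with the lock.
    lock(r);
    r->head = (r->head + len) % r->capacity;
    r->filled += len;
    r->known_empty = r->capacity - r->filled;
    unlock(r);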
PR #3757 <https://github.com/Genymobile/scrcpy/pull/3757>
When audio capture fails on the device, scrcpy continues mirroring the
video stream. This makes it possible to enable audio by default and fall
back to video-only mirroring when it is not supported.
However, if audio configuration fails (for example because the user
explicitly selected an unknown audio encoder), this must be treated as
an error and scrcpy must exit.
PR #3757 <https://github.com/Genymobile/scrcpy/pull/3757>
If no bit-rate is passed, let the server use the default value (8Mbps).
This avoids defining a default value on both sides, and passing the
default bit-rate as an argument when starting the server.
PR #3757 <https://github.com/Genymobile/scrcpy/pull/3757>
By default, audio is enabled (--no-audio must be explicitly passed to
disable it).
However, some devices may not support audio capture (typically devices
below Android 11, or Android 11 when the shell application is not in the
foreground on start).
In that case, make the server notify the client to dynamically disable
audio forwarding so that it does not wait indefinitely for an audio
stream.
Also disable audio on unknown codec or missing decoder on the
client-side, for the same reasons.
PR #3757 <https://github.com/Genymobile/scrcpy/pull/3757>
For video streams (at least H.264 and H.265), the config packet
containing SPS/PPS must be prepended to the next packet (the following
keyframe).
For audio streams (at least OPUS), config packets must not be merged.
PR #3757 <https://github.com/Genymobile/scrcpy/pull/3757>
The client does not use the audio stream if there is no display and no
recording (i.e. only V4L2), so disable audio so that the device does not
attempt to capture it.
PR #3757 <https://github.com/Genymobile/scrcpy/pull/3757>
When audio is enabled, open a new socket to send the audio stream from
the device to the client.
PR #3757 <https://github.com/Genymobile/scrcpy/pull/3757>
Co-authored-by: Romain Vimont <rom@rom1v.com>
Signed-off-by: Romain Vimont <rom@rom1v.com>
Audio will be enabled by default (when supported). Add an option to
disable it.
PR #3757 <https://github.com/Genymobile/scrcpy/pull/3757>
Co-authored-by: Romain Vimont <rom@rom1v.com>
Signed-off-by: Romain Vimont <rom@rom1v.com>
The recorder opened the target file from the packet sink open()
callback, called by the demuxer. Only then was the recorder thread
started.
One golden rule for the recorder is to never block the demuxer for I/O,
because it would impact mirroring. This rule was respected when recording
packets, but not for the initial file opening.
Therefore, start the recorder thread from sc_recorder_init(), open the
file immediately from the recorder thread, then make it wait for the
stream to start (on packet sink open()).
Now that the recorder can report errors directly (rather than making the
demuxer call fail), it is possible to report a file opening error even
before the packet sink is opened.
The recorder has two initialization phases: one to initialize the
concrete recorder object, and one to open its packet_sink trait.
Initialize mutex and condvar as part of the object initialization.
If there were several packet_sink traits (spoiler: one for video, one
for audio), then the mutex and condvar would still be initialized only
once.
Stop scrcpy on recorder errors.
It was previously indirectly stopped by the demuxer, which failed to
push packets to a recorder in error. Report it directly instead:
- it avoids waiting for the next demuxer call;
- it will allow opening the target file from a separate thread and
stopping immediately on any I/O error.
Running scrcpy --tcpip on a device already connected via TCP/IP did not
initialize server->serial.
As a consequence, in debug mode, an assertion failed:
scrcpy: ../app/src/server.c:770: run_server: Assertion
`server->serial' failed.
In release mode, scrcpy failed with this error:
adb: -s requires an argument
Scrcpy does not use FFmpeg network features. Initialize the network
locally instead (this is only needed on Windows).
The include block has been moved to fix the following warning:
Please include winsock2.h before windows.h
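A minimal sketch of the local initialization:

    // <winsock2.h> must be included before <windows.h>
    // in a hypothetical net_init():
    #ifdef _WIN32
        WSADATA wsa;
        if (WSAStartup(MAKEWORD(2, 2), &wsa)) {
            return false; // network initialization failed
        }
    #endif
        return true;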
When a call to a packet or frame sink fails, do not log the error on the
caller side: either the "failure" is expected (explicitly stopped) or it
must be logged by the packet or frame sink implementation.
The PTS received from MediaCodec are expressed relative to an arbitrary
clock origin. We consider the PTS of the first frame to be 0, and the
PTS of every other frame is relative to this first PTS (note that the
PTS is only used for recording, it is ignored for mirroring).
For simplicity, this relative PTS was computed on the server-side.
To prepare support for multiple streams (video and audio), send the
packet with its original PTS, and handle the PTS offset on the
client-side (by the recorder).
Since we can't know in advance which stream will produce the first
packet with the lowest PTS (a packet received later on one stream may
have a PTS lower than a packet received earlier on another stream),
computing the PTS on the server-side would require unnecessary waiting.
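A minimal sketch of the client-side rebasing in the recorder (field
names are illustrative):

    // The first PTS ever received (across all streams) becomes the
    // origin; every packet is rebased on it before being written.
    if (recorder->pts_origin == AV_NOPTS_VALUE
            && packet->pts != AV_NOPTS_VALUE) {
        recorder->pts_origin = packet->pts;
    }
    packet->pts -= recorder->pts_origin;
    packet->dts = packet->pts;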
On click events, only the whole buttons state was passed to the device.
In addition, on ACTION_DOWN and ACTION_UP, pass the button associated
with the action.
Refs #3635 <https://github.com/Genymobile/scrcpy/issues/3635>
Co-authored-by: Romain Vimont <rom@rom1v.com>
Signed-off-by: Romain Vimont <rom@rom1v.com>
For the initial connection between the device and the computer, an adb
tunnel is established (with "adb reverse" or "adb forward").
The device-side of the tunnel is a local socket having the hard-coded
name "scrcpy". This may cause issues when several scrcpy instances are
started in a few seconds for the same device, since they will try to
bind the same name.
To avoid conflicts, make the client generate a random UID, and append
this UID to the local socket name ("scrcpy_01234567").
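A minimal sketch (the random helper is hypothetical):

    uint32_t uid = sc_rand_u32() & 0x7FFFFFFF; // hypothetical helper
    char sock_name[32];
    snprintf(sock_name, sizeof(sock_name), "scrcpy_%08x", uid);
    // The adb tunnel then binds "localabstract:scrcpy_xxxxxxxx" instead
    // of the fixed "localabstract:scrcpy".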