The video buffer is now an internal detail of the screen component.
Since the screen is plugged into the decoder via the frame sink trait,
the decoder no longer accesses the video buffer.
The video buffer took ownership of the producer frame (so that it could
swap frames quickly).
In order to support multiple sinks plugged to the decoder, the decoded
frame must not be consumed by the display video buffer.
Therefore, move the producer and consumer frames out of the video
buffer, and use FFmpeg AVFrame refcounting to share ownership while
avoiding copies.
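As an illustration, here is a minimal sketch of how AVFrame refcounting
can share a decoded frame between several sinks without copying pixel
data (the sink type and sink_push() are hypothetical; av_frame_alloc(),
av_frame_ref() and av_frame_free() are the real FFmpeg calls):

    #include <stdbool.h>
    #include <stddef.h>
    #include <libavutil/frame.h>

    struct sink;                                        // hypothetical
    bool sink_push(struct sink *sink, AVFrame *frame);  // takes ownership

    // Offer the decoded frame to each sink without copying pixel data:
    // av_frame_ref() only increments the refcount of the underlying
    // buffers; each sink later releases its reference with
    // av_frame_free().
    static bool
    push_to_sinks(struct sink **sinks, size_t count, const AVFrame *frame) {
        for (size_t i = 0; i < count; ++i) {
            AVFrame *ref = av_frame_alloc();
            if (!ref || av_frame_ref(ref, frame)) {
                av_frame_free(&ref);  // safe on NULL
                return false;
            }
            if (!sink_push(sinks[i], ref)) {
                return false;
            }
        }
        return true;
    }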
A skipped frame is detected when the producer offers a frame while the
current pending frame has not been consumed.
However, the producer (in practice the decoder) is not interested in the
fact that a frame has been skipped, only the consumer (the renderer) is.
Therefore, notify frame skips via a consumer callback. This makes it
possible to manage the skipped and rendered frame counters in the same
place, and to remove fps_counter from the decoder.
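A possible shape for this interface (the names are illustrative, not
the actual scrcpy API):

    struct video_buffer;  // opaque

    struct video_buffer_callbacks {
        // called when a new frame can be consumed
        void (*on_frame_available)(struct video_buffer *vb, void *userdata);
        // called when the pending frame is replaced before being
        // consumed; lets the consumer count skipped frames next to
        // rendered frames, without involving the decoder
        void (*on_frame_skipped)(struct video_buffer *vb, void *userdata);
    };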
The video buffer is a tool between a frame producer and a frame
consumer.
For now, it is used between a decoder and a renderer, but in the future
another instance might be used to swscale decoded frames.
After the struct screen is initialized, the window, the renderer and the
texture are necessarily valid, so there is no need to check them in
screen_destroy().
There were only two frames simultaneously:
- one used by the decoder;
- one used by the renderer.
When the decoder finished decoding a frame, it swapped it with the
rendering frame.
Adding a third frame provides several benefits:
- the decoder does not have to wait for the renderer to release the
mutex;
- it simplifies the video_buffer API;
- it makes the rendering frame valid until the next call to
video_buffer_take_rendering_frame(), which will be useful for
swscaling on window resize.
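Roughly, with three frames the two swaps become independent (a sketch
with assumed field names; the SDL mutex calls are real, and the
pending_frame_consumed bookkeeping is simplified for brevity):

    #include <stdbool.h>
    #include <SDL2/SDL.h>
    #include <libavutil/frame.h>

    struct video_buffer {
        SDL_mutex *mutex;
        AVFrame *producer_frame;   // written by the decoder
        AVFrame *pending_frame;    // last published frame
        AVFrame *consumer_frame;   // read by the renderer
        bool pending_frame_consumed;
    };

    // The producer publishes a decoded frame by swapping it with the
    // pending frame; the consumer takes a frame by swapping the
    // pending frame with its own. Neither side waits for the other.
    static void
    video_buffer_offer_decoded_frame(struct video_buffer *vb) {
        SDL_LockMutex(vb->mutex);
        AVFrame *tmp = vb->pending_frame;
        vb->pending_frame = vb->producer_frame;
        vb->producer_frame = tmp;
        vb->pending_frame_consumed = false;
        SDL_UnlockMutex(vb->mutex);
    }

    static const AVFrame *
    video_buffer_take_rendering_frame(struct video_buffer *vb) {
        SDL_LockMutex(vb->mutex);
        // (a real implementation would also check
        // pending_frame_consumed to avoid taking a stale frame)
        AVFrame *tmp = vb->consumer_frame;
        vb->consumer_frame = vb->pending_frame;
        vb->pending_frame = tmp;
        vb->pending_frame_consumed = true;
        SDL_UnlockMutex(vb->mutex);
        // remains valid until the next call
        return vb->consumer_frame;
    }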
Touch events were HiDPI-scaled twice:
- once because the position (provided as floats between 0 and 1) was
converted to pixels using the drawable size (not the window size)
- once due to screen_convert_to_frame_coords()
One possible fix could be to compute the position in pixels from the
window size instead, but this would unnecessarily round the event
position to the nearest window coordinates (instead of drawable
coordinates).
Instead, expose two separate functions to convert to frame coordinates
from either window or drawable coordinates.
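A sketch of the two conversions (the struct fields are assumed;
SDL_GetWindowSize() and SDL_GetRendererOutputSize() are real SDL
calls):

    #include <stdint.h>
    #include <SDL2/SDL.h>

    struct size { uint16_t width, height; };
    struct point { int32_t x, y; };

    struct screen {
        SDL_Window *window;
        SDL_Renderer *renderer;
        SDL_Rect rect;           // content area, in drawable coordinates
        struct size frame_size;  // decoded frame size
    };

    static struct point
    screen_convert_drawable_to_frame_coords(struct screen *screen,
                                            int32_t x, int32_t y) {
        struct point p = {
            .x = (x - screen->rect.x) * screen->frame_size.width
                    / screen->rect.w,
            .y = (y - screen->rect.y) * screen->frame_size.height
                    / screen->rect.h,
        };
        return p;
    }

    static struct point
    screen_convert_window_to_frame_coords(struct screen *screen,
                                          int32_t x, int32_t y) {
        int ww, wh, dw, dh;
        SDL_GetWindowSize(screen->window, &ww, &wh);
        SDL_GetRendererOutputSize(screen->renderer, &dw, &dh);
        // apply the HiDPI scale, then convert from drawable coordinates
        return screen_convert_drawable_to_frame_coords(screen,
                                                       x * dw / ww,
                                                       y * dh / wh);
    }

Touch events, whose normalized position is already multiplied by the
drawable size, then use the drawable variant directly, so they are no
longer scaled twice.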
Fixes #1536 <https://github.com/Genymobile/scrcpy/issues/1536>
Refs #15 <https://github.com/Genymobile/scrcpy/issues/15>
Refs e40532a376
The header scrcpy.h is intended to be the "public" API. It should not
depend on other internal headers.
Therefore, declare all required structs in this header and adapt
internal code.
Trilinear filtering can currently only be enabled for OpenGL renderers.
Do not print a warning if the renderer is not OpenGL, as it can confuse
users even though nothing is wrong.
On macOS with renderer "metal", HiDPI scaling may be incorrect on
initialization when several displays are connected.
Resetting the window size fixes the problem.
Refs #15 <https://github.com/Genymobile/scrcpy/issues/15>
Position and scale the content "manually" instead of relying on the
renderer "logical size".
This avoids possible rounding differences between the computed window
size and the content size, causing one row or column of black pixels on
the bottom or on the right.
This also avoids HiDPI scale issues, by computing the scaling manually.
This will also make it possible to draw items at their expected size on
the screen (unscaled).
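For illustration, the destination rectangle can be computed explicitly
in drawable coordinates (a sketch, not the actual implementation):

    #include <stdint.h>
    #include <SDL2/SDL.h>

    // Compute where the content goes inside the drawable area,
    // preserving the aspect ratio; the rounding is fully under our
    // control, so no stray row or column of black pixels can appear
    // from a logical-size mismatch.
    static SDL_Rect
    get_content_rect(int dw, int dh, int cw, int ch) {
        SDL_Rect rect;
        if ((int64_t) dw * ch > (int64_t) dh * cw) {
            // drawable wider than content: black borders on the sides
            rect.h = dh;
            rect.w = dh * cw / ch;
            rect.x = (dw - rect.w) / 2;
            rect.y = 0;
        } else {
            // drawable taller than content: black borders top/bottom
            rect.w = dw;
            rect.h = dw * ch / cw;
            rect.x = 0;
            rect.y = (dh - rect.h) / 2;
        }
        return rect;
    }

The frame is then rendered with SDL_RenderCopy(renderer, texture,
NULL, &rect) instead of relying on SDL_RenderSetLogicalSize().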
Fixes #15 <https://github.com/Genymobile/scrcpy/issues/15>
In maximized state (but not fullscreen), it was possible to resize to
fit the device screen (with Ctrl+x or double-clicking on black borders).
This caused problems on macOS with the "expand to fullscreen" feature,
which behaves like a fullscreen mode but is seen as maximized by SDL.
In that state, resizing to fit causes unexpected results.
To keep the behavior consistent on all platforms, just disable "resize
to fit" when the window is maximized.
On Windows, in maximized+fullscreen state, disabling fullscreen mode
unexpectedly triggers the "restored" then "maximized" events, leaving
the window in a weird state (maximized according to the events, but not
maximized visually).
Moreover, apply_pending_resize() asserts that fullscreen is disabled.
To avoid the issue, if fullscreen is set, just ignore the "restored"
event.
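In the window event handler, this amounts to something like the
following (the screen state fields and apply_pending_resize() are
assumed structure; the SDL event constants are real):

    #include <stdbool.h>
    #include <SDL2/SDL.h>

    struct screen {
        bool fullscreen;
        bool maximized;
    };

    static void apply_pending_resize(struct screen *screen);  // assumed

    static void
    handle_window_event(struct screen *screen,
                        const SDL_WindowEvent *event) {
        switch (event->event) {
            case SDL_WINDOWEVENT_MAXIMIZED:
                screen->maximized = true;
                break;
            case SDL_WINDOWEVENT_RESTORED:
                if (screen->fullscreen) {
                    // On Windows, "restored" is spuriously triggered
                    // when disabling fullscreen in maximized state;
                    // the window is not actually restored, so ignore it
                    break;
                }
                screen->maximized = false;
                apply_pending_resize(screen);  // asserts !fullscreen
                break;
        }
    }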
If the content size changes (due to rotation for example) while the
window is maximized or fullscreen, the resize must be applied once
fullscreen and maximized are disabled.
The previous strategy consisted in storing the windowed size, computing
the target size on rotation, and applying it on window restoration. But
tracking the windowed size (while ignoring the non-windowed size) was
tricky, due to unspecified order of SDL events (e.g. size changes can be
notified before "maximized" events), race conditions when reading window
flags, different behaviors on different platforms...
To simplify the whole resize management, store the old content size (the
frame size, possibly rotated) when it changes while the window is
maximized or fullscreen, so that the new optimal size can be computed on
window restoration.
The window dimensions are integers, so resizing to fit the content may
not be exact.
When computing the optimal size, this could alternately reduce the
width and the height by a few pixels, making the "optimal size"
unstable.
To avoid this problem, check whether the current size is already
optimal when keeping either the width or the height.
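The check can look like this (a sketch; the integer divisions
reproduce the rounding used by the optimal-size computation):

    #include <stdbool.h>
    #include <stdint.h>

    struct size { uint16_t width, height; };

    // The current size counts as optimal if recomputing one dimension
    // from the other (using the content aspect ratio and the same
    // integer rounding) gives back the current value, so repeated
    // calls do not oscillate by a few pixels.
    static bool
    is_optimal_size(struct size current, struct size content) {
        return current.height == (uint32_t) current.width * content.height
                                                          / content.width
            || current.width == (uint32_t) current.height * content.width
                                                          / content.height;
    }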
Move the window-to-frame coordinates conversion from the input manager
to the screen.
This will make it possible to apply more screen-related
transformations without impacting the input manager.
Add Ctrl+Left and Ctrl+Right shortcuts to rotate the display (the
content of the scrcpy window).
Contrary to --lock-video-orientation, the rotation has no impact on
recording, and can be changed dynamically (and immediately).
Fixes #218 <https://github.com/Genymobile/scrcpy/issues/218>
Now, get_window_size() returns the current window size (fullscreen or
not), while get_windowed_window_size() always returns the windowed size
(the size when fullscreen is disabled).
The SDL video subsystem is not necessary if we don't display the video.
Move the sdl_init_and_configure() function from screen.c to scrcpy.c,
because it is not only related to the screen display.
Limit source code to 80 chars, and declare function return types and
modifiers on a separate line.
This avoids very long lines, and keeps all function names aligned.
(We do this on VLC, and I like it.)
It is very convenient when playing a mobile game and watching a video
at the same time.
Tested on Linux Mint Cinnamon as well as Windows 10.
Signed-off-by: Yu-Chen Lin <npes87184@gmail.com>
SDL_CreateTexture() is called both during initialization and on frame
size change.
To avoid inconsistent changes to the argument values, factor the calls
into a single function create_texture().
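For example (a sketch; SDL_CreateTexture() and its flags are real, the
YV12 streaming format is the assumption here):

    #include <SDL2/SDL.h>

    // Single point of truth for the texture parameters, shared by the
    // initialization and the frame-size-change paths.
    static SDL_Texture *
    create_texture(SDL_Renderer *renderer, int width, int height) {
        return SDL_CreateTexture(renderer, SDL_PIXELFORMAT_YV12,
                                 SDL_TEXTUREACCESS_STREAMING,
                                 width, height);
    }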
The High DPI support is enabled by default, so that the renderer uses
the full resolution of High DPI screens.
However, there are still mouse coordinate problems on some macOS
systems with High DPI support (but not all), so expose a way to
disable it.
Use high DPI if available.
Note that on Mac OS X, setting this flag is not sufficient:
> On Apple's OS X you must set the NSHighResolutionCapable Info.plist
> property to YES, otherwise you will not receive a High DPI OpenGL
> display.
<https://wiki.libsdl.org/SDL_CreateWindow#flags>
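Concretely, the flag is requested at window creation (a sketch;
SDL_WINDOW_RESIZABLE stands in for whatever other flags are used):

    #include <stdint.h>
    #include <SDL2/SDL.h>

    // SDL_WINDOW_ALLOW_HIGHDPI and the other constants are real SDL
    // flags; the sketch just shows where the flag is requested.
    static SDL_Window *
    create_window(const char *title, int width, int height) {
        uint32_t flags = SDL_WINDOW_RESIZABLE | SDL_WINDOW_ALLOW_HIGHDPI;
        return SDL_CreateWindow(title, SDL_WINDOWPOS_UNDEFINED,
                                SDL_WINDOWPOS_UNDEFINED,
                                width, height, flags);
    }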
screen_render() should not be called on initialization:
1. it is useless, since the window is hidden until the first frame;
2. it writes an empty texture (probably green) to the renderer.
The SDL cleanup does not crash anymore on exit, probably since the
memory corruption caused by calling SDLNet_TCP_Close() too early has
been resolved.
On startup, the client has to:
1. listen on a port
2. push and start the server to the device
3. wait for the server to connect (accept)
4. read device name and size
5. initialize SDL
6. initialize the window and renderer
7. show the window
From the execution of the app_process command to start the server on the
device, to the execution of the java main method, it takes ~800ms. As a
consequence, step 3 also takes ~800ms on the client.
Once complete, the client initializes SDL, which takes ~500ms.
These two expensive actions are executed sequentially:
HOST DEVICE
listen on port | |
push/start the server |----------------->|| app_process loads the jar
accept the connection . ^ ||
. | ||
. | WASTE ||
. | OF ||
. | TIME ||
. | ||
. | ||
. v X execution of our java main
connection accepted |<-----------------| connect to the host
init SDL || |
|| ,----------------| send frames
|| |,---------------|
|| ||,--------------|
|| |||,-------------|
|| ||||,------------|
init window/renderer | |||||,-----------|
display frames |<++++++-----------|
(many frames skipped)
The rationale for step 3 occurring before step 5 is that initializing
SDL replaces the SIGTERM handler to receive the event in the event loop,
so pressing Ctrl+C during step 5 would not work (since it blocks the
event loop).
But this is not so important; let's parallelize the SDL initialization
with the app_process execution (we'll just add a timeout to the
connection):
HOST DEVICE
listen on port | |
push/start the server |----------------->|| app_process loads the jar
init SDL || ||
|| ||
|| ||
|| ||
|| ||
|| ||
accept the connection . ||
. X execution of our java main
connection accepted |<-----------------| connect to the host
init window/renderer | |
display frames |<-----------------| send frames
|<-----------------|
In addition, show the window only once the first frame is available to
avoid flickering (opening a black window for 100~200ms).
Note: the window and renderer are initialized after the connection is
accepted because they use the device information received from the
device.
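Since the server may not be listening back yet when the client starts
waiting, the accept must be bounded. With SDL_net (which the client
already uses), SDLNet_TCP_Accept() is non-blocking, so a polling loop
with a deadline works (a sketch; all calls are real SDL/SDL_net APIs):

    #include <stdint.h>
    #include <SDL2/SDL.h>
    #include <SDL2/SDL_net.h>

    // Poll the listening socket until the device connects back or the
    // timeout expires; returns NULL on timeout.
    static TCPsocket
    accept_with_timeout(TCPsocket server_socket, uint32_t timeout_ms) {
        uint32_t deadline = SDL_GetTicks() + timeout_ms;
        for (;;) {
            TCPsocket client = SDLNet_TCP_Accept(server_socket);
            if (client) {
                return client;
            }
            if (SDL_GetTicks() >= deadline) {
                return NULL;
            }
            SDL_Delay(100);  // poll every 100 ms
        }
    }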
Replace screen_update() by a higher-level screen_update_frame() handling
the whole frame update, so that scrcpy.c just calls it without managing
implementation details.
The video screen size on the client may differ from the real device
screen size (e.g. the video stream may be scaled down). As a
consequence, mouse events must be scaled to match the real device
coordinates.
For this purpose, make the client send the video screen size along with
the absolute pointer location, and the server scale the location to
match the real device size before injecting mouse events.
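The server-side code is Java; this C sketch only illustrates the
arithmetic applied on the device:

    #include <stdint.h>

    struct point { int32_t x, y; };
    struct size { uint16_t width, height; };

    // Map a position expressed in video-stream coordinates to real
    // device coordinates, using the video screen size sent along with
    // each event.
    static struct point
    scale_position(struct point p, struct size video, struct size device) {
        struct point scaled = {
            .x = (int64_t) p.x * device.width / video.width,
            .y = (int64_t) p.y * device.height / video.height,
        };
        return scaled;
    }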
To control the device from the computer:
- retrieve mouse and keyboard SDL events;
- convert them to Android events;
- serialize them;
- send them on the same socket used by the video stream (but in the
opposite direction);
- deserialize the events on the Android side;
- inject them using the InputManager.
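For example, serializing a mouse event into a byte buffer might look
like this (the layout and field names are illustrative assumptions,
not the actual scrcpy protocol):

    #include <stddef.h>
    #include <stdint.h>

    struct mouse_event {
        uint8_t action;         // e.g. down, up, move
        uint32_t buttons;       // button state bitmask
        uint16_t x, y;          // absolute position in the video stream
        uint16_t screen_width;  // video size, for server-side scaling
        uint16_t screen_height;
    };

    static void
    write16be(uint8_t *buf, uint16_t value) {
        buf[0] = value >> 8;
        buf[1] = value;
    }

    static void
    write32be(uint8_t *buf, uint32_t value) {
        buf[0] = value >> 24;
        buf[1] = value >> 16;
        buf[2] = value >> 8;
        buf[3] = value;
    }

    // Returns the number of bytes written (to be sent on the socket).
    static size_t
    mouse_event_serialize(const struct mouse_event *event, uint8_t *buf) {
        buf[0] = 2;  // hypothetical type tag for "mouse event"
        buf[1] = event->action;
        write32be(&buf[2], event->buttons);
        write16be(&buf[6], event->x);
        write16be(&buf[8], event->y);
        write16be(&buf[10], event->screen_width);
        write16be(&buf[12], event->screen_height);
        return 14;
    }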