Description
Currently, when screen sharing on iOS, video frames are sent from the Broadcast Extension to the main app encoded as JPEG. In addition, frames are dropped (at least in the example app) and scaled down. All of this puts pressure on the device’s CPU and memory while reducing the final video quality. In fact, the bottleneck, at least on older devices, seems to be the Unix socket: it simply cannot handle high throughput.
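For context, the JPEG path in the extension looks roughly like the sketch below. This is only illustrative and not the library’s actual code; `writeToSocket` and the class name are hypothetical placeholders for whatever pushes bytes through the Unix socket.

```swift
import CoreImage
import CoreMedia
import CoreGraphics

// Illustrative sketch of the current approach: scale each frame down, JPEG-encode it
// on the CPU, and push the resulting bytes through the Unix socket to the main app.
// `writeToSocket` is a hypothetical placeholder, not part of react-native-webrtc.
final class JPEGSampleUploader {
    private let context = CIContext()

    func send(_ sampleBuffer: CMSampleBuffer, writeToSocket: (Data) -> Void) {
        guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }

        // Downscale to reduce the per-frame payload.
        let scaled = CIImage(cvPixelBuffer: pixelBuffer)
            .transformed(by: CGAffineTransform(scaleX: 0.5, y: 0.5))

        // CPU-bound JPEG compression; this is where most of the CPU/memory pressure comes from.
        guard let jpeg = context.jpegRepresentation(of: scaled,
                                                    colorSpace: CGColorSpaceCreateDeviceRGB(),
                                                    options: [:]) else { return }

        writeToSocket(jpeg)
    }
}
```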
Luckily, there is a better approach that produces fewer bytes per frame, allowing 60 fps even on older devices: using a hardware-accelerated video codec. I’ve made some changes to my local version of react-native-webrtc and observed a performance increase. However, there are several caveats in this implementation (a rough sketch of the hardware-encoded path follows the list below):
1. It uses third-party libraries to encode and decode the frames.
2. I couldn’t find good Objective-C libraries, so the part of the code that receives frames was rewritten in Swift. Fortunately, adding Swift to an Objective-C pod is quite simple.
3. The implementation is not compatible with the previous version of the Broadcast Extension, so it is a breaking change, and apps using it would have to be rewritten.
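To make the idea concrete, here is a rough sketch of what a hardware-encoded path can look like, using Apple’s VideoToolbox directly purely for illustration. The local changes described above rely on third-party libraries instead, so the actual details differ, and the class and callback names here are hypothetical.

```swift
import VideoToolbox
import CoreMedia
import CoreVideo

// Hypothetical sketch of hardware-accelerated encoding in the Broadcast Extension.
// The real changes use third-party libraries; this only shows the general technique.
final class HardwareFrameEncoder {
    private var session: VTCompressionSession?

    func prepare(width: Int32, height: Int32) {
        // A nil output callback lets encoded frames be delivered via the per-frame handler below.
        let status = VTCompressionSessionCreate(allocator: nil,
                                                width: width,
                                                height: height,
                                                codecType: kCMVideoCodecType_H264,
                                                encoderSpecification: nil,
                                                imageBufferAttributes: nil,
                                                compressedDataAllocator: nil,
                                                outputCallback: nil,
                                                refcon: nil,
                                                compressionSessionOut: &session)
        guard status == noErr, let session = session else { return }
        // Real-time mode keeps encoding latency low enough for screen sharing.
        VTSessionSetProperty(session, key: kVTCompressionPropertyKey_RealTime, value: kCFBooleanTrue)
        VTCompressionSessionPrepareToEncodeFrames(session)
    }

    func encode(_ pixelBuffer: CVPixelBuffer,
                at time: CMTime,
                onEncodedFrame: @escaping (CMSampleBuffer) -> Void) {
        guard let session = session else { return }
        VTCompressionSessionEncodeFrame(session,
                                        imageBuffer: pixelBuffer,
                                        presentationTimeStamp: time,
                                        duration: .invalid,
                                        frameProperties: nil,
                                        infoFlagsOut: nil) { status, _, sampleBuffer in
            // Each encoded frame is typically far smaller than a JPEG of the same
            // resolution, so the Unix socket can keep up even at 60 fps.
            guard status == noErr, let sampleBuffer = sampleBuffer else { return }
            onEncodedFrame(sampleBuffer)
        }
    }
}
```

On the receiving side, the main app would decode these frames (for example with VTDecompressionSession or a third-party decoder) and feed the resulting pixel buffers into WebRTC, which is the part that was rewritten in Swift in my local version.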
Hence the question: does it make sense to include these improvements here, considering all the drawbacks? If yes, what would be the best way to handle this in terms of development and release (e.g., adjusting the example app, updating documentation, or waiting for the next major version to be released)?