Neural-Enhanced Live Streaming: Improving Live Video Ingest via Online Learning
Jaehong Kim, Youngmok Jung, Hyunho Yeo, Juncheol Ye, and Dongsu Han
In Proceedings of the Annual Conference of the ACM Special Interest Group on Data Communication on the Applications, Technologies, Architectures, and Protocols for Computer Communication (SIGCOMM), August 2020
Live video accounts for a significant volume of today's Internet video. Despite numerous efforts to enhance user quality of experience (QoE) on both the ingest and distribution sides of live video, a fundamental limitation remains: the streamer's upstream bandwidth and computational capacity cap the quality of experience of thousands of viewers. To overcome this limitation, we design LiveNAS, a new live video ingest framework that enhances the origin stream's quality by leveraging computation at ingest servers. Our ingest server applies neural super-resolution to the original stream while imposing minimal overhead on ingest clients. LiveNAS employs online learning to maximize the quality gain and dynamically adjusts its resource use to the real-time quality improvement. LiveNAS delivers high-quality live streams up to 4K resolution, outperforming WebRTC by 1.96 dB on average in Peak Signal-to-Noise Ratio (PSNR) on real video streams and network traces, which translates into a 12%-69% QoE improvement for live stream viewers.
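The quality gain above is reported in PSNR, the standard fidelity metric for comparing a reconstructed frame against its reference. As a minimal illustration (not code from the paper), the sketch below computes PSNR from mean squared error over flattened pixel values; the `psnr` function name and 8-bit `max_val` default are assumptions for the example.

```python
import math

def psnr(reference, distorted, max_val=255.0):
    """Peak Signal-to-Noise Ratio between two equal-length pixel sequences.

    PSNR = 10 * log10(max_val^2 / MSE), in decibels (dB).
    """
    mse = sum((r - d) ** 2 for r, d in zip(reference, distorted)) / len(reference)
    if mse == 0:
        return float("inf")  # identical frames: no distortion
    return 10.0 * math.log10(max_val ** 2 / mse)

# Example: every pixel of the distorted frame is off by 10 levels
ref = [100, 120, 140, 160]
dist = [110, 130, 150, 170]
print(round(psnr(ref, dist), 2))  # MSE = 100 -> 10*log10(255^2/100) ≈ 28.13 dB
```

On this scale, a 1.96 dB average gain is substantial: PSNR is logarithmic, so roughly +3 dB corresponds to halving the mean squared error.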