Internet video streaming has recently experienced tremendous growth, but delivery quality remains critically dependent on network bandwidth. To mitigate bandwidth limitations, most video is compressed, which introduces image artifacts, noise, and blur. Quality is further degraded by image upscaling, which is required to match the very high pixel densities of modern mobile displays. Many upscaling techniques have been developed, such as Lanczos resampling, but for over 20 years no fundamentally new methods had been introduced. This is now changing thanks to a new class of techniques known as deep learning super-resolution (DLSR). Despite their excellent quality, DLSR methods cannot be easily applied in real-world applications due to their heavy computational requirements. In this talk, we present our accurate and lightweight network for video super-resolution.
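As background for the classical baseline mentioned above, the sketch below shows the idea behind Lanczos resampling in one dimension: each output sample is a windowed-sinc weighted average of nearby input samples. This is an illustrative, self-contained implementation (the kernel size `a=3` and the 1-D setting are simplifying assumptions; production resamplers work on 2-D images and are heavily optimized), not the method presented in the talk.

```python
import math

def lanczos_kernel(x, a=3):
    """Lanczos windowed-sinc kernel with support (-a, a)."""
    if x == 0.0:
        return 1.0
    if abs(x) >= a:
        return 0.0
    px = math.pi * x
    return a * math.sin(px) * math.sin(px / a) / (px * px)

def lanczos_upscale_1d(signal, factor, a=3):
    """Upscale a 1-D signal by an integer `factor` via Lanczos resampling."""
    n = len(signal)
    out = []
    for i in range(n * factor):
        x = i / factor                      # position in source coordinates
        lo = math.floor(x) - a + 1
        hi = math.floor(x) + a
        acc, wsum = 0.0, 0.0
        for j in range(lo, hi + 1):
            w = lanczos_kernel(x - j, a)
            # Replicate border samples at the edges.
            s = signal[min(max(j, 0), n - 1)]
            acc += w * s
            wsum += w
        out.append(acc / wsum)              # normalize so flat regions stay flat
    return out
```

A DLSR network replaces this fixed, hand-designed kernel with filters learned from data, which is where the quality gain (and the computational cost) comes from.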