Hardware Acceleration for VR Video Streaming (ART/253CP)
    ART/253CP
    Platform
    01/04/2018 – 31/12/2019
    15,104

    Dr Michelle Mi Suen LEE

    1) Description of VR Video Stream (VRVS) optimization algorithms
    2) Performance simulation models in (a) Matlab and (b) C/C++
    3) System functional specification of the VRVS FPGA board
    4) Micro-architecture specification of the VRVS functional modules
    5) FPGA hardware implementation in a hardware description language (Verilog)
    6) Performance evaluation report
    7) VR video live streaming prototype system consisting of:
       a) Panoramic video camera (third party/custom built)
       b) Video streamer (third party)
       c) VRVS-compatible video viewing apps for
          i) PC-based HMDs
          ii) Smartphone-based HMDs
       d) VRVS modules
          i) VRVS algorithm implementation on PC
          ii) VRVS FPGA prototyping board [2 sets]
       * Input: VR camera resolution 7680×4320; frame rate 30 fps
       * Output: VR video streams for PC-based or smartphone-based HMDs with 2K or lower display resolution
       * Features: (1) automatic video stabilization; (2) automatic ROI detection of people and vehicles; (3) adaptive view streaming
       * Performance: (a) full-frame streaming: 33% fewer pixels than equirectangular mapping; (b) adaptive view streaming: 75% fewer pixels than equirectangular mapping
    8) Dual-lens 360° camera with IMU-assisted video stabilization for waterborne activities (deliverable for CS1)
       a) Design specification of "Dual-lens 360° cameras for waterborne activities"
       b) Dual-lens 360° camera for waterborne activities (qty: 10)
       c) FPGA-based hardware board for IMU-assisted 360° video stabilization (qty: 2)
    9) Software module for motion-based ROI detection for 360° video streaming (deliverable for CS2)
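To put the stated performance targets in perspective, the arithmetic below converts them into pixel rates. This is an illustrative sketch only, derived from the figures in the deliverables (7680×4320 input at 30 fps; 33% and 75% fewer pixels than equirectangular mapping); it is not part of the project's own implementation.

```python
# Illustrative arithmetic: pixel-rate savings implied by the stated targets.
SRC_W, SRC_H, FPS = 7680, 4320, 30  # VR camera input per the deliverables

# pixels/second for full equirectangular frames
equirect_rate = SRC_W * SRC_H * FPS

# stated targets: 33% fewer (full-frame) and 75% fewer (adaptive view)
full_frame_rate = equirect_rate * (1 - 0.33)
adaptive_rate = equirect_rate * (1 - 0.75)

print(f"equirectangular  : {equirect_rate / 1e6:.0f} Mpx/s")
print(f"full-frame (-33%): {full_frame_rate / 1e6:.0f} Mpx/s")
print(f"adaptive   (-75%): {adaptive_rate / 1e6:.0f} Mpx/s")
```

At roughly 995 Mpx/s for the raw equirectangular stream, the adaptive view target cuts the pixel rate to about a quarter before any video codec is applied, which is the headroom that makes live delivery over commodity bandwidth plausible.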


    Despite the excitement over virtual reality (VR) and the availability of dozens of high-resolution 360° VR video cameras, consumer adoption of VR video has been very slow. One of the major obstacles is the distribution of VR content, which requires much higher network bandwidth than is commercially available. In this project, novel algorithms and a processing architecture will be developed to optimize VR video streams for efficient live streaming. In addition, machine learning will be applied to identify Regions of Interest (ROIs), enabling intelligent video streaming. The proposed algorithms will be implemented on an FPGA platform, providing a hardware-accelerated solution for real-time optimization of VR video streams.
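One common way view-adaptive streaming reduces bandwidth is by transmitting only the region of the equirectangular frame that falls inside the viewer's current field of view. The sketch below is an assumption about how such a crop could be bounded; it is not the project's algorithm, and the function name and parameters are hypothetical.

```python
def equirect_crop_bounds(yaw_deg, pitch_deg, hfov_deg, vfov_deg, width, height):
    """Approximate pixel rectangle in an equirectangular image covering a
    viewer's field of view. A rough bounding box only; real viewport
    extraction reprojects per pixel. Angles are in degrees."""
    # longitude spans [-180, 180), latitude spans [-90, 90]
    lon_min = yaw_deg - hfov_deg / 2
    lon_max = yaw_deg + hfov_deg / 2
    lat_min = max(-90.0, pitch_deg - vfov_deg / 2)
    lat_max = min(90.0, pitch_deg + vfov_deg / 2)
    # map longitude/latitude to pixel coordinates (y grows downward)
    x0 = int((lon_min + 180.0) / 360.0 * width)
    x1 = int((lon_max + 180.0) / 360.0 * width)
    y0 = int((90.0 - lat_max) / 180.0 * height)
    y1 = int((90.0 - lat_min) / 180.0 * height)
    return x0, x1, y0, y1

# Looking straight ahead with a 90° x 90° field of view on an 8K frame:
x0, x1, y0, y1 = equirect_crop_bounds(0, 0, 90, 90, 7680, 4320)
crop_pixels = (x1 - x0) * (y1 - y0)
full_pixels = 7680 * 4320
print(f"crop covers {crop_pixels / full_pixels:.1%} of the full frame")
```

Under these assumptions a 90° viewport touches only about an eighth of the full frame, which is consistent in spirit with the project's stated 75% pixel reduction for adaptive view streaming (the exact figure depends on the mapping and any margin streamed around the viewport).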
    This project aims to produce a real-time solution for optimizing 360° VR video for efficient streaming. It would facilitate the wide-scale adoption of VR video, providing new growth opportunities for Hong Kong's digital marketing and media production industries. Augmented with automatic ROI detection, it would provide an effective solution for using 360° VR video in live monitoring of public spaces and critical infrastructure, enabling various smart city applications to secure and manage the city. In the longer term, it could enable immersive communication, allowing stronger connections between people at remote locations.