
Dr. Lu FANG
Associate Professor, Smart Imaging Lab, Tsinghua-Berkeley Shenzhen Institute
Homepage: www.luvision.net
Email: fanglu at sz.tsinghua.edu.cn
Research interests: Computational Photography and Visual Computing

The Smart Imaging Lab aims to integrate optics, quantum science, and information science to explore new computational photography theories and key computational techniques that break through deep-rooted assumptions and physical limitations in conventional imaging and sensing, such as rectilinear light propagation in geometrical optics, the assumption of infinite light speed, and the diffraction limit of optical imaging systems. We study the intrinsic representation of cross-dimensional visual data and reveal the redundancy and sparsity in such visual big data.

Project: Multi-Dimension Multi-Scale High-Resolution Computational Instrument
An interdisciplinary project spanning neuroscience, optics, computational imaging, biomedicine, and signal processing.

The development of imaging tools is increasingly crucial for biological study. For example, large-scale in vivo imaging of neural network activity, one of the major goals of the BRAIN Initiative, is fundamental to neuroscience. However, conventional microscopes cannot achieve high optical resolution and a large field-of-view (FoV) simultaneously. We broke through this bottleneck by developing a novel video-rate, sub-gigapixel macroscope with centimeter-scale FoV and sub-micron resolution, named the Real-time, Ultra-large-Scale imaging at High resolution (RUSH) macroscope. It is mainly composed of an objective with large FoV and high resolution, and a camera array for high-throughput image sensing. To the best of our knowledge, RUSH currently has the largest FoV and highest throughput of any macroscope at this resolution. With this novel macroscope, we demonstrated various applications including high-content pathological slide screening, high-content drug screening, and cortex-wide in vivo imaging of large-scale neural network activity, neurovascular coupling, and neuron-immune interactions.
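The tile-assembly step at the heart of such a camera-array design can be sketched in a few lines. The snippet below is a simplified illustration with hypothetical helper names, not the RUSH pipeline itself: it assembles overlapping sensor tiles into one large mosaic by letting each tile overwrite its neighbour's overlap margin, whereas a real system would also apply per-tile illumination correction and sub-pixel registration.

```python
import numpy as np

def assemble_mosaic(tiles, grid_shape, overlap):
    """Assemble a row-major list of equal-sized HxW tiles from a camera
    array into one mosaic, with `overlap` pixels shared between
    neighbouring tiles. Later tiles simply overwrite the overlap region
    of earlier ones (a crude stand-in for proper blending)."""
    rows, cols = grid_shape
    h, w = tiles[0].shape
    ch, cw = h - overlap, w - overlap  # stride between tile origins
    mosaic = np.zeros((rows * ch + overlap, cols * cw + overlap),
                      dtype=tiles[0].dtype)
    for r in range(rows):
        for c in range(cols):
            mosaic[r * ch : r * ch + h, c * cw : c * cw + w] = tiles[r * cols + c]
    return mosaic
```

For a 2x2 grid of 10x10 tiles with a 2-pixel overlap, the mosaic comes out as 18x18 rather than 20x20, since each shared margin is counted once.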

Project: Multiscale Camera Array for Gigapixel Videography
An interdisciplinary project spanning computational imaging, optical design, signal processing, and computer vision.

Traditionally, video systems have assumed that the resolution of the camera matches that of the display: HD video uses HD cameras and displays, 4K video uses 4K cameras and displays, and so on. The recent development of gigapixel and VR video systems has illustrated the potential of, and need for, systems in which the camera captures substantially more image information than the display can show. These systems use tiled multiscale image structures that let viewers interactively explore the captured image stream. Size, weight, power, and cost are the central challenges in gigapixel video. To this end, we present a method for gigapixel videography that is efficient in terms of budget, sensor bandwidth, and setup labor, using a novel multiscale camera array. Our capture system comprises a reference camera with a short-focus lens that captures a reference video with a comparatively large field-of-view, and an array of local-view cameras, each with a long-focus lens, that capture high-definition local-view videos. This design enables gigapixel videography by independently warping each local-view video onto the reference video, and allows a flexible, adaptive, and movable local-view camera arrangement.
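The core geometric operation, warping a local view into the reference frame, reduces to estimating a planar homography from cross-resolution feature matches. Below is a minimal numpy sketch of the standard direct linear transform (DLT) for this step; it is a generic textbook routine, not the paper's actual cross-resolution matching algorithm, and the function names are ours.

```python
import numpy as np

def estimate_homography(src, dst):
    """Estimate the 3x3 homography H mapping src -> dst via the direct
    linear transform. src, dst: (N, 2) point arrays, N >= 4. Each
    correspondence contributes two rows to the linear system A h = 0,
    solved as the null vector of A via SVD."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]  # fix the projective scale

def warp_points(H, pts):
    """Apply homography H to (N, 2) points in homogeneous coordinates."""
    p = np.hstack([pts, np.ones((len(pts), 1))]) @ H.T
    return p[:, :2] / p[:, 2:3]
```

In practice the matches come from feature detection across the resolution gap, and the estimate would be wrapped in RANSAC to reject outliers; with exact correspondences, the DLT recovers the homography directly.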


Project: Multi-View Perception and Reconstruction
An interdisciplinary project spanning computer vision, computer graphics, and robotics.

Free-viewpoint video (FVV), which offers users superior fidelity and a feeling of immersion when viewing visual media, has been extensively investigated in recent years. As an essential technique for realistic free-viewpoint video, multi-view perception and reconstruction aim to acquire 3D models of real-world objects. Potential applications range from constructing realistic object models for films, games, and design engineering, to the quantitative recovery of metric information for scientific and engineering data analysis. Aiming at the practical use of dense 3D reconstruction for both scenes and humans, we propose FlyCap, FlyFusion, and FlashFusion. FlyCap and FlyFusion enable intelligent viewpoint selection based on the immediate dynamic reconstruction result; their merit lies in their combined robustness, efficiency, and adaptability in producing fused, denoised 3D geometry and motion for a moving target interacting with non-rigid objects in a large space. FlashFusion achieves real-time, globally consistent, high-resolution (5 mm), large-scale dense 3D reconstruction under highly constrained computation, i.e., CPU-only computing on a portable device.
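Volumetric fusion systems of this kind typically maintain a truncated signed distance function (TSDF) per voxel, updated as a weighted running average over depth observations. The sketch below illustrates that generic update for a single column of voxels along one camera ray; it is a simplified textbook version, omitting the voxel hashing and mesh extraction that FlashFusion actually contributes.

```python
import numpy as np

def fuse_depth(tsdf, weights, voxel_z, observed_depth, trunc=0.1):
    """Integrate one depth observation into the voxels along a camera ray.

    tsdf, weights  : per-voxel running TSDF values and fusion weights
    voxel_z        : each voxel's depth along the ray
    observed_depth : depth of the observed surface on this ray
    """
    # Signed distance to the surface, truncated and normalised to [-1, 1].
    sdf = np.clip(observed_depth - voxel_z, -trunc, trunc) / trunc
    # Only update voxels in front of, or just behind, the surface;
    # voxels far behind it are occluded and left untouched.
    mask = (observed_depth - voxel_z) > -trunc
    new_w = weights + mask
    new_tsdf = np.where(mask,
                        (tsdf * weights + sdf) / np.maximum(new_w, 1),
                        tsdf)
    return new_tsdf, new_w
```

The reconstructed surface is then the zero crossing of the fused TSDF: after observing a surface at depth 0.5 along a ray, voxels in front carry positive values, the voxel at 0.5 carries zero, and occluded voxels keep weight zero.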

Selected Publications

Multiscale Gigapixel Video: A Cross Resolution Image Matching and Warping Approach, Int. Conf. on Computational Photography (ICCP), 2017

MILD: Multi-Index Hashing for Appearance-Based Loop Closure Detection, IEEE Int. Conf. on Multimedia & Expo (ICME), 2017 (Best Student Paper Award)

Beyond SIFT Using Binary Features in Loop Closure Detection, IEEE/RSJ Int. Conf. on Intelligent Robots and Systems (IROS), 2017

FlyCap: Markerless Motion Capture Using Multiple Autonomous Flying Cameras, IEEE Trans. on Visualization and Computer Graphics, 2017

SurfaceNet: An End-to-End 3D Neural Network for Multiview Stereopsis, Int. Conf. on Computer Vision (ICCV), 2017