Multimedia Laboratory @
Nanyang Technological University
Affiliated with S-Lab



MMLab@NTU was formed on 1 August 2018, with a research focus on computer vision and deep learning. Its sister lab is MMLab@CUHK. The lab has since grown to four faculty members and more than 40 members, including research fellows, research assistants, and PhD students.

Members of MMLab@NTU conduct research primarily in low-level vision, image and video understanding, creative content creation, and 3D scene understanding and reconstruction. Have a look at the overview of our research. All publications are listed here.

We are always looking for motivated PhD students, postdocs, and research assistants who share our interests. Check out the careers page and follow us on Twitter.

NeurIPS 2023

09/2023: The team has a total of 14 papers (including 4 spotlight papers and 2 papers in the Datasets and Benchmarks track) accepted to NeurIPS 2023.


ICCV 2023

07/2023: The team has a total of 20 papers accepted to ICCV 2023.


CVPR 2023

03/2023: The team has a total of 14 papers (including 3 highlights and 1 award candidate) accepted to CVPR 2023.



05/2023: We are organizing a one-day, complimentary, in-person seminar showcasing presentations and discussions centered on papers accepted at CVPR 2023.



Call for Papers

IJCV Special Issue: Mobile Intelligent Photography and Imaging

The aim of this special issue is to offer a timely collection of research achievements in mobile intelligent photography and imaging, aligned with the remarkable leap of computational photography and imaging on mobile platforms. Details here.

  • Paper submission deadline: Aug 1, 2023 (deadline passed)



OmniObject3D: Large-Vocabulary 3D Object Dataset for Realistic Perception, Reconstruction and Generation
T. Wu, J. Zhang, X. Fu, Y. Wang, J. Ren, L. Pan, W. Wu, L. Yang, J. Wang, C. Qian, D. Lin, Z. Liu
in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, 2023 (CVPR, Award Candidate)
[PDF] [arXiv] [Supplementary Material] [Project Page]

We propose OmniObject3D, a large-vocabulary 3D object dataset with massive high-quality real-scanned 3D objects, to facilitate the development of 3D perception, reconstruction, and generation in the real world. OmniObject3D comprises 6,000 scanned objects in 190 daily categories, sharing common classes with popular 2D datasets (e.g., ImageNet and LVIS), benefiting the pursuit of generalizable 3D representations.

F2-NeRF: Fast Neural Radiance Field Training with Free Camera Trajectories
P. Wang, Y. Liu, Z. Chen, L. Liu, Z. Liu, T. Komura, C. Theobalt, W. Wang
in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, 2023 (CVPR, Highlight)
[PDF] [arXiv] [Supplementary Material] [Project Page]

This paper presents a novel grid-based NeRF called F2-NeRF (Fast-Free-NeRF) for novel view synthesis, which supports arbitrary input camera trajectories and costs only a few minutes of training. The two existing widely used space-warping methods are designed only for forward-facing trajectories or 360-degree object-centric trajectories and cannot handle arbitrary trajectories. In this paper, we delve deep into the mechanism of space warping to handle unbounded scenes.

LaserMix for Semi-Supervised LiDAR Semantic Segmentation
L. Kong, J. Ren, L. Pan, Z. Liu
in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, 2023 (CVPR, Highlight)
[PDF] [arXiv] [Supplementary Material] [Project Page]

We study the underexplored semi-supervised learning problem in LiDAR segmentation. Our core idea is to leverage the strong spatial cues of LiDAR point clouds to better exploit unlabeled data. We propose LaserMix to mix laser beams from different LiDAR scans, and then encourage the model to make consistent and confident predictions before and after mixing.
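The mixing idea can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function name `laser_mix`, the use of inclination-angle bins as the mixing areas, and the partitioning details are assumptions made for exposition only.

```python
import numpy as np

def laser_mix(points_a, points_b, num_areas=4):
    """Mix two LiDAR scans by partitioning points into inclination-angle
    areas and alternately taking each area from scan A or scan B.

    points_a, points_b: (N, 3) arrays of (x, y, z) point coordinates.
    num_areas: number of angular partitions (hypothetical parameter).
    Returns two mixed point clouds."""
    def inclination(pts):
        # Inclination angle of each point w.r.t. the sensor's horizontal plane.
        return np.arctan2(pts[:, 2], np.linalg.norm(pts[:, :2], axis=1))

    inc_a, inc_b = inclination(points_a), inclination(points_b)
    lo = min(inc_a.min(), inc_b.min())
    hi = max(inc_a.max(), inc_b.max())
    edges = np.linspace(lo, hi, num_areas + 1)

    # Assign each point an area index in [0, num_areas).
    idx_a = np.clip(np.searchsorted(edges, inc_a, side="right") - 1, 0, num_areas - 1)
    idx_b = np.clip(np.searchsorted(edges, inc_b, side="right") - 1, 0, num_areas - 1)

    # Even-indexed areas come from one scan, odd-indexed from the other,
    # producing two complementary mixed scans.
    mixed_1 = np.concatenate([points_a[idx_a % 2 == 0], points_b[idx_b % 2 == 1]])
    mixed_2 = np.concatenate([points_b[idx_b % 2 == 0], points_a[idx_a % 2 == 1]])
    return mixed_1, mixed_2
```

In the semi-supervised setting described above, a segmentation model would then be trained to produce consistent predictions for each point before and after such mixing.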

Nighttime Smartphone Reflective Flare Removal using Optical Center Symmetry Prior
Y. Dai, Y. Luo, S. Zhou, C. Li, C. C. Loy
in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, 2023 (CVPR, Highlight)
[PDF] [arXiv] [Supplementary Material] [Project Page]

Reflective flare is a phenomenon that occurs when light reflects inside a lens, causing bright spots or a "ghosting effect" in photos. We propose an optical center symmetry prior, which suggests that the reflective flare and the light source are always symmetrical about the lens's optical center. This prior helps locate the reflective flare's proposal region more accurately and applies to most smartphone cameras.
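The geometry of the symmetry prior itself is simple point reflection. The sketch below is only an illustration of that geometric relation, not the paper's method; the helper name and coordinates are hypothetical.

```python
def mirror_about_center(source_xy, center_xy):
    """Predict a reflective-flare location as the point reflection of the
    light-source position about the lens's optical center (illustrative only)."""
    sx, sy = source_xy
    cx, cy = center_xy
    # Point reflection: the center is the midpoint of source and flare.
    return (2 * cx - sx, 2 * cy - sy)

# E.g., a light source at (100, 40) in a 1920x1080 image whose optical center
# is at (960, 540) would place the flare proposal at (1820, 1040).
flare = mirror_about_center((100, 40), (960, 540))
```

In practice, such a predicted location would only seed a proposal region for flare removal, since the true optical center must first be estimated for a given camera.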