MMLab@NTU
Multimedia Laboratory @ Nanyang Technological University
Affiliated with S-Lab
About MMLab@NTU
MMLab@NTU was formed on 1 August 2018, with a research focus on computer vision and deep learning. Its sister lab is MMLab@CUHK. The group now has four faculty members and more than 40 members, including research fellows, research assistants, and PhD students.
Members of MMLab@NTU conduct research primarily in low-level vision, image and video understanding, creative content creation, and 3D scene understanding and reconstruction. Have a look at the overview of our research. All publications are listed here.
We are always looking for motivated PhD students, postdocs, and research assistants who share our research interests. Check out the careers page and follow us on Twitter.
Welcome Xingang!
04/2023: Xingang Pan will soon join MMLab@NTU as an Assistant Professor.
CVPR 2023
03/2023: The team has a total of 14 papers (including 3 highlights and 1 award candidate) accepted to CVPR 2023.
ICLR 2023
01/2023: The team has a total of 5 papers (including 2 oral and 1 spotlight papers) accepted to ICLR 2023.
Pre-CVPR@NTU
05/2023: We are organizing a one-day, complimentary, in-person seminar showcasing presentations and discussions centered on papers accepted at CVPR 2023.
News and Highlights
- 05/2023: Chongyi Li, Fangzhou Hong, Henghui Ding, Jiahao Xie and Shuai Yang are selected as outstanding reviewers of CVPR 2023. Congrats!
- 04/2023: Zhaoxi Chen is awarded Meta PhD Fellowship Finalist 2023. Congrats!
- 02/2023: Call for Papers: IJCV Special Issue on Mobile Intelligent Photography and Imaging. Full paper submission deadline is August 1st, 2023.
- 11/2022: Yuming Jiang and Jiawei Ren are awarded the very competitive and prestigious Google PhD Fellowship 2022 under the area “Machine Perception, Speech Technology and Computer Vision”. Congrats!
- 10/2022: Chongyi Li, Shuai Yang and Kaiyang Zhou are selected as outstanding reviewers of ECCV 2022. Congrats!
- 09/2022: Call for Papers: IJCV Special Issue on The Promises and Dangers of Large Vision Models. Full paper submission deadline is March 1st, 2023.
- 09/2022: We launch a new initiative, The AI Talks, inviting active researchers from all over the globe to share their latest research in AI, machine learning, computer vision, and more. Subscribe to the newsletter here.
- 07/2022: The team has a total of 18 papers (including 3 oral papers) accepted to ECCV 2022.
CVPR 2023
Second MIPI Workshop
The second workshop on Mobile Intelligent Photography and Imaging (MIPI) will be held in conjunction with CVPR 2023 (Sunday, June 18). We are organizing several challenge tracks and also calling for workshop papers.
- Paper submission deadline: Feb 12, 2023
- Challenge start date: Dec 25, 2022
- Challenge end date: Feb 20, 2023
Recent Projects
OmniObject3D: Large-Vocabulary 3D Object Dataset for Realistic Perception, Reconstruction and Generation
T. Wu, J. Zhang, X. Fu, Y. Wang, J. Ren, L. Pan, W. Wu, L. Yang, J. Wang, C. Qian, D. Lin, Z. Liu
in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, 2023 (CVPR, Award Candidate)
[PDF]
[arXiv]
[Supplementary Material]
[Project Page]
We propose OmniObject3D, a large-vocabulary 3D object dataset containing a large number of high-quality, real-scanned 3D objects to facilitate the development of 3D perception, reconstruction, and generation in the real world. OmniObject3D comprises 6,000 scanned objects in 190 daily categories that share common classes with popular 2D datasets (e.g., ImageNet and LVIS), benefiting the pursuit of generalizable 3D representations.
F2-NeRF: Fast Neural Radiance Field Training with Free Camera Trajectories
P. Wang, Y. Liu, Z. Chen, L. Liu, Z. Liu, T. Komura, C. Theobalt, W. Wang
in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, 2023 (CVPR, Highlight)
[PDF]
[arXiv]
[Supplementary Material]
[Project Page]
This paper presents a novel grid-based NeRF called F2-NeRF (Fast-Free-NeRF) for novel view synthesis, which supports arbitrary input camera trajectories and takes only a few minutes to train. The two existing widely used space-warping methods are designed only for forward-facing trajectories or 360-degree object-centric trajectories and cannot handle arbitrary trajectories. In this paper, we delve deep into the mechanism of space warping to handle unbounded scenes.
LaserMix for Semi-Supervised LiDAR Semantic Segmentation
L. Kong, J. Ren, L. Pan, Z. Liu
in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, 2023 (CVPR, Highlight)
[PDF]
[arXiv]
[Supplementary Material]
[Project Page]
We study the underexplored semi-supervised learning problem in LiDAR segmentation. Our core idea is to leverage the strong spatial cues of LiDAR point clouds to better exploit unlabeled data. We propose LaserMix to mix laser beams from different LiDAR scans, and then encourage the model to make consistent and confident predictions before and after mixing.
Nighttime Smartphone Reflective Flare Removal using Optical Center Symmetry Prior
Y. Dai, Y. Luo, S. Zhou, C. Li, C. C. Loy
in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, 2023 (CVPR, Highlight)
[PDF]
[arXiv]
[Supplementary Material]
[Project Page]
Reflective flare is a phenomenon that occurs when light reflects inside lenses, causing bright spots or a “ghosting effect” in photos. We propose an optical center symmetry prior, which states that the reflective flare and the light source are always symmetric about the lens’s optical center. This prior helps locate the reflective flare’s proposal region more accurately and can be applied to most smartphone cameras.