Chen Change Loy

Nanyang Associate Professor, SCSE, NTU
Director, MMLab@NTU
Co-Associate Director, S-Lab

Blk N4-02c-108, Nanyang Avenue
School of Computer Science and Engineering
Nanyang Technological University
Singapore 639798

Chen Change Loy is a Nanyang Associate Professor with the School of Computer Science and Engineering, Nanyang Technological University, Singapore. He is also an Adjunct Associate Professor at The Chinese University of Hong Kong. He is the Lab Director of MMLab@NTU and Co-associate Director of S-Lab. He received his Ph.D. (2010) in Computer Science from Queen Mary University of London. Prior to joining NTU, he served as a Research Assistant Professor at the MMLab of The Chinese University of Hong Kong from 2013 to 2018.

His research interests include computer vision and deep learning with a focus on image/video restoration and enhancement, generative tasks, and representation learning. He serves as an Associate Editor of the International Journal of Computer Vision (IJCV) and IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI). He also serves/served as an Area Chair of top conferences such as ICCV 2021, CVPR 2021, CVPR 2019, and ECCV 2018. He is a senior member of IEEE.

Full Biography


MMLab@NTU was formed by Chen Change Loy on 1 August 2018, with a research focus on computer vision and deep learning. Its sister lab is MMLab@CUHK. The group now has four faculty members and more than 35 members, including research fellows, research assistants, and PhD students. Members of MMLab@NTU conduct research primarily in low-level vision, image and video understanding, creative content creation, and 3D scene understanding and reconstruction.
Visit MMLab@NTU Homepage



Path-Restore: Learning Network Path Selection for Image Restoration
K. Yu, X. Wang, C. Dong, X. Tang, C. C. Loy
IEEE Transactions on Pattern Analysis and Machine Intelligence, 2021 (TPAMI)
[DOI] [arXiv] [Project Page]

We observe that some corrupted image regions are inherently easier to restore than others, since distortion and content vary within an image. To this end, we propose Path-Restore, a multi-path CNN with a pathfinder that can dynamically select an appropriate route for each image region. We train the pathfinder using reinforcement learning with a difficulty-regulated reward. This reward reflects the restoration performance, the computational complexity, and the difficulty of restoring each region.
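The idea of a difficulty-regulated reward can be illustrated with a minimal sketch. The function name, the linear form, and the `cost_penalty` parameter below are illustrative assumptions, not the paper's exact formulation:

```python
def difficulty_regulated_reward(psnr_gain, path_cost, difficulty, cost_penalty=0.1):
    """Toy sketch of a difficulty-regulated reward for path selection.

    psnr_gain:  quality improvement achieved on a region.
    path_cost:  computational cost of the chosen path.
    difficulty: in [0, 1]; easy regions (low difficulty) have their
                quality reward discounted, discouraging the pathfinder
                from spending computation where little gain is possible.
    """
    return difficulty * psnr_gain - cost_penalty * path_cost
```

Under this sketch, a hard region rewards quality gains almost in full, while an easy region mostly incurs the cost penalty, steering the pathfinder toward short paths there.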

FASA: Feature Augmentation and Sampling Adaptation for Long-Tailed Instance Segmentation
Y. Zang, C. Huang, C. C. Loy
in Proceedings of IEEE/CVF International Conference on Computer Vision, 2021 (ICCV)
[PDF] [arXiv] [Supplementary Material] [Project Page]

We propose a simple yet effective method, Feature Augmentation and Sampling Adaptation (FASA), that addresses the data scarcity issue by augmenting the feature space especially for rare classes. FASA is a fast, generic method that can be easily plugged into standard or long-tailed segmentation frameworks, with consistent performance gains and little added cost.
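The core of feature-space augmentation for rare classes can be sketched as sampling virtual features from class-wise Gaussian statistics. This is a simplified illustration with hypothetical names; the actual FASA method maintains running class statistics and adapts the sampling rate online:

```python
import numpy as np

def augment_rare_class_features(features, n_virtual, seed=None):
    """Sample virtual features for one rare class from its Gaussian statistics.

    features: (N, D) array of observed features for the class.
    Returns an (n_virtual, D) array of virtual features drawn from
    a Gaussian with the class's per-dimension mean and std.
    """
    rng = np.random.default_rng(seed)
    mean = features.mean(axis=0)
    std = features.std(axis=0)
    # Virtual features: class mean perturbed by class-wise noise.
    return mean + std * rng.standard_normal((n_virtual, features.shape[1]))
```

Because the virtual features live in feature space rather than image space, they can be generated cheaply during training and fed to the classification head alongside real features.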

ReconfigISP: Reconfigurable Camera Image Processing Pipeline
K. Yu, Z. Li, Y. Peng, C. C. Loy, J. Gu
in Proceedings of IEEE/CVF International Conference on Computer Vision, 2021 (ICCV)
[PDF] [arXiv] [Supplementary Material] [Project Page]

We propose a novel Reconfigurable ISP (ReconfigISP) whose architecture and parameters can be automatically tailored to specific data and tasks. In particular, we implement several ISP modules, and enable back-propagation for each module by training a differentiable proxy, hence allowing us to leverage the popular differentiable neural architecture search and effectively search for the optimal ISP architecture. A proxy tuning mechanism is adopted to maintain the accuracy of proxy networks in all cases.

Focal Frequency Loss for Image Reconstruction and Synthesis
L. Jiang, B. Dai, W. Wu, C. C. Loy
in Proceedings of IEEE/CVF International Conference on Computer Vision, 2021 (ICCV)
[PDF] [arXiv] [Project Page]

We show that narrowing gaps in the frequency domain can further improve image reconstruction and synthesis quality. We propose a novel focal frequency loss, which allows a model to adaptively focus on frequency components that are hard to synthesize by down-weighting the easy ones. This objective function is complementary to existing spatial losses, helping to prevent the loss of important frequency information caused by the inherent bias of neural networks.
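A minimal single-channel sketch of such a frequency-domain loss is shown below. The function name, the `alpha` knob, and the normalization are illustrative assumptions; see the paper for the exact weighting:

```python
import numpy as np

def focal_frequency_loss(real, fake, alpha=1.0):
    """Toy focal frequency loss between two 2D float arrays.

    Spectra are compared with a per-frequency weight that emphasizes
    components with large errors, down-weighting frequencies the model
    already synthesizes well. alpha scales how sharply hard frequencies
    are emphasized.
    """
    f_real = np.fft.fft2(real, norm="ortho")
    f_fake = np.fft.fft2(fake, norm="ortho")
    # Squared distance between spectra at every frequency (u, v).
    dist = np.abs(f_real - f_fake) ** 2
    # Focal weighting: harder frequencies (larger error) get larger weights.
    weight = np.abs(f_real - f_fake) ** alpha
    weight = weight / (weight.max() + 1e-8)  # normalize to [0, 1]
    return float(np.mean(weight * dist))
```

For identical inputs the loss is zero; as the spectra diverge, the weight map concentrates the penalty on the frequencies with the largest residual error.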

3D Human Texture Estimation from a Single Image with Transformers
X. Xu, C. C. Loy
in Proceedings of IEEE/CVF International Conference on Computer Vision, 2021 (ICCV, Oral)
[PDF] [arXiv] [Project Page]

We propose Texformer, a Transformer-based method that estimates high-quality 3D human texture from a single image, allowing efficient information interchange between the image space and the UV texture space.

Talk-to-Edit: Fine-Grained Facial Editing via Dialog
Y. Jiang, Z. Huang, X. Pan, C. C. Loy, Z. Liu
in Proceedings of IEEE/CVF International Conference on Computer Vision, 2021 (ICCV)
[PDF] [arXiv] [Project Page]

We propose Talk-to-Edit, an interactive facial editing framework that performs fine-grained attribute manipulation through dialog between the user and the system. Unlike previous works that regard editing as traversing straight lines in the latent space, we formulate fine-grained editing as finding a curved trajectory that respects the fine-grained attribute landscape of the semantic field. The curvature at each step is location-specific, determined by the input image as well as the user's language requests.


Paper Links

ICCV 2021 - The 3rd Workshop on

Sensing, Understanding and Synthesizing Humans

In our workshop this year, we are organizing two exciting challenges.

In MVP Point Cloud Challenge, you can compete with others using your point cloud completion and registration methods based on the newly proposed MVP dataset, a high-quality multi-view partial point cloud dataset. It contains over 100,000 high-quality scans of partial 3D shapes rendered from 26 uniformly distributed camera poses for each 3D CAD model.

In the ForgeryNet: Face Forgery Analysis Challenge, you will benchmark your anti-deepfake methods on ForgeryNet, the largest face forgery dataset.

Both challenges close on September 19, 2021. Great prizes are up for grabs!

I am looking for motivated PhD students, postdocs, research associates, and project officers who are interested in image processing, computer vision, and deep learning.