Hanyu Zhou   周寒宇

Ph.D.

1037 Luoyu Road,
Huazhong University of Science and Technology (HUST),
Wuhan, China, 430074

Email: hyzhou@hust.edu.cn

Biography

I will soon begin my postdoctoral research fellowship at the National University of Singapore (NUS), working closely with Prof. Gim Hee Lee. I received my Ph.D. from Huazhong University of Science and Technology (HUST) in 2024, advised by Prof. Luxin Yan. Before that, I received my B.Eng. from Central South University (CSU) in 2019. I am currently working on motion perception and 3D vision in adverse environments. If you have an exciting project for collaboration, please email me!

Interests

Motion Estimation, Event Camera, Domain Adaptation, Multimodal Learning.

Research

Scene motion perception remains extremely challenging in adverse conditions, such as adverse weather and nighttime scenes. Motion estimation and motion segmentation are two typical tasks of scene motion perception. In adverse conditions, degradation factors damage discriminative visual features, causing invalid motion feature matching and limiting the performance of these tasks. Dr. Zhou has constructed a multimodal platform to collect data and designed efficient machine learning algorithms to train state-of-the-art deep models, achieving scene motion perception under adverse conditions. Specifically, his four representative research projects are:

1. Constructing a multimodal perception system and a large-scale multimodal dataset. Considering the scarcity of all-day and all-weather motion datasets, Dr. Zhou constructs an RGB-Event-LiDAR-IMU multimodal perception system with spatiotemporal alignment, and builds a large-scale multimodal dataset covering various times (e.g., daytime and nighttime) and weather conditions (e.g., rain, fog, and snow). The research outputs are coming soon.


2. Developing a general domain adaptation framework for 2D adverse optical flow. Dr. Zhou formulates adverse optical flow as a domain adaptation task, and proposes a cumulative adaptation framework for adverse-weather optical flow and a common space-guided domain adaptation framework for nighttime optical flow, thus transferring motion knowledge from the clean domain to the degraded domain. The research outputs have been published in TPAMI 2024, ICLR 2024, CVPR 2023, and AAAI 2023.

3. Developing a novel multimodal fusion framework for 3D metric scene flow. To extend 2D relative optical flow to 3D metric scene flow, Dr. Zhou further proposes an RGB-Event-LiDAR multimodal fusion framework, which exploits the homogeneous nature among the modalities for complementary fusion, achieving accurate metric motion estimation in all-day and all-weather scenes. The research outputs have been published in CVPR 2024.

4. Developing an event-based motion segmentation method for high-speed moving objects. Since traditional frame-based cameras cannot handle the motion segmentation of high-speed moving objects, Dr. Zhou introduces the event camera and inertial measurement unit (IMU), and proposes an Event-IMU based spatiotemporal reasoning method for moving object detection, enabling the decoupling of high-speed independently moving objects from the background under ego-motion conditions. The research outputs have been published in ICRA 2024.


Publications (Google Scholar)

(#: Co-First Author; *: Corresponding Author)
Adverse Weather Optical Flow: Cumulative Homogeneous-Heterogeneous Adaptation.

Hanyu Zhou, Yi Chang, Zhiwei Shi, Wending Yan, Gang Chen, Yonghong Tian, Luxin Yan.

IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 2024.

[PDF] [arXiv]

Exploring the Common Appearance-Boundary Adaptation for Nighttime Optical Flow.

Hanyu Zhou, Yi Chang, Haoyue Liu, Wending Yan, Yuxing Duan, Zhiwei Shi, Luxin Yan.

International Conference on Learning Representations (ICLR), Spotlight, 2024.

[PDF] [arXiv]

Bring Event into RGB and LiDAR: Hierarchical Visual-Motion Fusion for Scene Flow.

Hanyu Zhou, Yi Chang, Zhiwei Shi, Luxin Yan.

IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2024.

[PDF] [arXiv]

Seeing Motion During Nighttime with Event Camera.

Haoyue Liu, Shihan Peng, Lin Zhu, Yi Chang, Hanyu Zhou, Luxin Yan.

IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2024.

[PDF] [arXiv] [Code]

JSTR: Joint Spatio-Temporal Reasoning for Event-Based Moving Object Detection.

Hanyu Zhou, Zhiwei Shi, Hao Dong, Shihan Peng, Yi Chang, Luxin Yan.

IEEE International Conference on Robotics and Automation (ICRA), 2024.

[PDF] [arXiv]

Unsupervised Cumulative Domain Adaptation for Foggy Scene Optical Flow.

Hanyu Zhou, Yi Chang, Wending Yan, Luxin Yan.

IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2023.

[PDF] [arXiv] [Code]

Unsupervised Hierarchical Domain Adaptation for Adverse Weather Optical Flow.

Hanyu Zhou, Yi Chang, Gang Chen, Luxin Yan.

AAAI Conference on Artificial Intelligence (AAAI), 2023.

[PDF] [arXiv] [Code]

Closing the Loop: Joint Rain Generation and Removal via Disentangled Image Translation.

Yuntong Ye, Yi Chang, Hanyu Zhou, Luxin Yan.

IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2021.

[PDF] [arXiv]

Preprint Papers

CoSEC: A Coaxial Stereo Event Camera Dataset for Autonomous Driving.

Shihan Peng#, Hanyu Zhou#, Hao Dong, Zhiwei Shi, Haoyue Liu, Yuxing Duan, Yi Chang, Luxin Yan.

[arXiv]

NER-Net+: Seeing Motion at Nighttime with an Event Camera.

Haoyue Liu, Jinghan Xu, Shihan Peng, Yi Chang, Hanyu Zhou, Yuxing Duan, Luxin Yan.

IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), Under review, 2024.

Awards


  • 2024.06, 1st Place of "Atmospheric Turbulence Mitigation" in CVPR 2024 7th UG2+ Challenge.
  • 2023.11, Third Prize of "Robust Depth Estimation" in International Algorithm Case Competition.
  • 2019.09-2023.09, Ph.D. Scholarships.
  • 2018.04, Meritorious Winner in MCM/ICM.

Academic Services

  • Journal Reviewer: TIP, TCSVT, TMM, MTAP.
  • Conference Reviewer: CVPR'23-25, AAAI'23-25, ECCV'24, ICRA'24, ICLR'25.