Hanyu Zhou   周寒宇

Ph.D. Student

1037 Luoyu Road,
Huazhong University of Science and Technology (HUST),
Wuhan, China, 430074

Email: hyzhou@hust.edu.cn


Biography

I am currently a Ph.D. student at Huazhong University of Science and Technology (HUST). Before that, I received my B.Eng. degree from Central South University (CSU) in 2019. I am now looking for a postdoctoral position; if you have an excellent project or an open position that matches my research philosophy, please email me!

Interests

Motion Estimation, Event Camera, Domain Adaptation, Multimodal Learning.

News

Research

Scene motion perception remains extremely challenging in adverse conditions, such as adverse weather and nighttime scenes. Motion estimation and motion segmentation are two typical tasks of scene motion perception. In adverse conditions, degradation factors damage discriminative visual features, causing invalid motion feature matching and limiting the performance of these tasks. Dr. Zhou constructs a multimodal platform to collect data and designs efficient machine learning algorithms to train state-of-the-art deep models, thereby achieving scene motion perception under adverse conditions. Specifically, his four representative research projects are:

1. Constructing a multimodal perception system and a large-scale multimodal dataset. Considering the scarcity of all-day and all-weather motion datasets, Dr. Zhou constructs an RGB-Event-LiDAR-IMU multimodal perception system with spatiotemporal alignment, and builds a large-scale multimodal dataset covering various times of day (e.g., daytime and nighttime) and weather conditions (e.g., rain, fog, and snow). The research outputs are coming soon.


2. Developing a general domain adaptation framework for 2D adverse optical flow. Dr. Zhou formulates adverse-condition optical flow as a domain adaptation task, and proposes a cumulative adaptation framework for adverse weather optical flow and a common space-guided domain adaptation framework for nighttime optical flow, thus transferring motion knowledge from the clean domain to the degraded domain. The research outputs have been published in ICLR 2024, CVPR 2023, and AAAI 2023.

3. Developing a novel multimodal fusion framework for 3D metric scene flow. To extend 2D relative optical flow to 3D metric scene flow, Dr. Zhou further proposes an RGB-Event-LiDAR multimodal fusion framework, which exploits the homogeneous nature shared across modalities for complementary fusion, achieving accurate metric motion estimation in all-day and all-weather scenes. The research outputs have been published in CVPR 2024.

4. Developing an event-based motion segmentation method for high-speed moving objects. Since traditional frame-based cameras cannot handle the motion segmentation of high-speed moving objects, Dr. Zhou introduces the event camera and the Inertial Measurement Unit (IMU), and proposes an Event-IMU-based spatiotemporal reasoning method for moving object detection, enabling the decoupling of high-speed independently moving objects from the background under ego-motion conditions. The research outputs have been published in ICRA 2024.


Publications (Google Scholar)

Exploring the Common Appearance-Boundary Adaptation for Nighttime Optical Flow.

Hanyu Zhou, Yi Chang, Haoyue Liu, Wending Yan, Yuxing Duan, Zhiwei Shi, Luxin Yan.

International Conference on Learning Representations (ICLR), Spotlight, 2024.

[PDF] [arXiv]

Bring Event into RGB and LiDAR: Hierarchical Visual-Motion Fusion for Scene Flow.

Hanyu Zhou, Yi Chang, Zhiwei Shi, Luxin Yan.

IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2024.

[arXiv]

Seeing Motion During Nighttime with Event Camera.

Haoyue Liu, Shihan Peng, Lin Zhu, Yi Chang, Hanyu Zhou, Luxin Yan.

IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2024.

[arXiv] [Code]

JSTR: Joint Spatio-Temporal Reasoning for Event-Based Moving Object Detection.

Hanyu Zhou, Zhiwei Shi, Hao Dong, Shihan Peng, Yi Chang, Luxin Yan.

IEEE International Conference on Robotics and Automation (ICRA), 2024.

[arXiv]

Unsupervised Cumulative Domain Adaptation for Foggy Scene Optical Flow.

Hanyu Zhou, Yi Chang, Wending Yan, Luxin Yan.

IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2023.

[PDF] [arXiv] [Code]

Unsupervised Hierarchical Domain Adaptation for Adverse Weather Optical Flow.

Hanyu Zhou, Yi Chang, Gang Chen, Luxin Yan.

AAAI Conference on Artificial Intelligence (AAAI), 2023.

[PDF] [arXiv] [Code]

Closing the Loop: Joint Rain Generation and Removal via Disentangled Image Translation.

Yuntong Ye, Yi Chang, Hanyu Zhou, Luxin Yan.

IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2021.

[PDF] [arXiv]

Preprint Papers

Adverse Weather Optical Flow: Cumulative Homogeneous-Heterogeneous Adaptation.

Hanyu Zhou, Yi Chang, Zhiwei Shi, Wending Yan, Gang Chen, Luxin Yan, Yonghong Tian.

IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), under review, 2024.

Awards


  • 2023.11, Third Prize for "Robust Depth Estimation" in the International Algorithm Case Competition.
  • 2019.09-2023.09, Ph.D. Scholarships.
  • 2018.04, Meritorious Winner in MCM/ICM.
Academic Services

  • Journal Reviewer: TIP, TCSVT, MTAP.
  • Conference Reviewer: CVPR'23-24, AAAI'23-24, ECCV'24, ICRA'24.