1037 Luoyu Road,
Huazhong University of Science and Technology (HUST),
Wuhan, China, 430074
Email: hyzhou@hust.edu.cn
Biography
I will soon start as a postdoctoral research fellow at the National University of Singapore (NUS), working closely with Prof. Gim Hee Lee.
I received my Ph.D. from Huazhong University of Science and Technology (HUST) in 2024, advised by Prof. Luxin Yan. Before that, I received my B.Eng. from Central South University (CSU) in 2019.
I am now working on motion perception and 3D vision in adverse environments. If you have an exciting project for collaboration, please email me!
2024.10, I passed my Ph.D. defense and became a Doctor of Engineering.
2024.09, Our adverse-weather optical flow paper is accepted to TPAMI'24.
2024.06, We won 1st place in the 'Text Recognition through Atmospheric Turbulence' track of the CVPR'24 7th UG2+ Challenge.
2024.06, We won 1st place in the 'Coded Target Restoration through Atmospheric Turbulence' track of the CVPR'24 7th UG2+ Challenge.
2024.02, Our multimodal-fusion scene flow method VisMoFlow is accepted to CVPR'24.
2024.02, Our nighttime event reconstruction method NER-Net is accepted to CVPR'24.
2024.01, Our JSTR event-based moving object detection method is accepted to ICRA'24.
2024.01, Our ABDA-Flow nighttime optical flow paper is accepted to ICLR'24 (Spotlight).
2023.02, Our UCDA-Flow foggy-scene optical flow paper is accepted to CVPR'23.
2022.11, Our HMBA-FlowNet adverse-weather optical flow paper is accepted to AAAI'23.
2021.01, Our JRGR derain paper is accepted to CVPR'21.
Research
Scene motion perception remains extremely challenging in adverse conditions, such as adverse weather and nighttime scenes.
Motion estimation and motion segmentation are two typical tasks of scene motion perception. In adverse conditions,
degradation factors corrupt discriminative visual features, causing invalid motion feature matching and limiting the performance of these tasks.
Dr. Zhou constructs a multimodal platform to collect data,
and designs efficient machine learning algorithms to train state-of-the-art deep models,
thereby achieving scene motion perception under adverse conditions. Specifically, his four representative research projects are:
1. Constructing a multimodal perception system and a large-scale multimodal dataset.
Considering the scarcity of all-day and all-weather motion datasets,
Dr. Zhou constructs an RGB-Event-LiDAR-IMU multimodal perception system with spatiotemporal alignment,
and builds a large-scale multimodal dataset covering various times of day (e.g., daytime and nighttime) and weather conditions (e.g., rain, fog, and snow).
The research outputs are coming soon.
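To give a flavor of what spatiotemporal alignment involves, here is a minimal sketch of timestamp-based synchronization across heterogeneous sensor streams; all names, rates, and windows are illustrative assumptions, not details of the actual system.

```python
# A minimal sketch (not the actual system code) of timestamp-based
# synchronization across RGB, LiDAR, event, and IMU streams.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical streams: RGB at 30 Hz, LiDAR at 10 Hz, IMU at 200 Hz,
# events arriving asynchronously; each entry is a timestamp in seconds.
rgb_ts   = np.arange(0.0, 2.0, 1 / 30)
lidar_ts = np.arange(0.0, 2.0, 1 / 10)
event_ts = np.sort(rng.uniform(0.0, 2.0, 5000))   # async event stream
imu_ts   = np.arange(0.0, 2.0, 1 / 200)

def nearest(reference, query):
    """Index of the closest reference timestamp for each query timestamp."""
    idx = np.searchsorted(reference, query)
    idx = np.clip(idx, 1, len(reference) - 1)
    left, right = reference[idx - 1], reference[idx]
    return np.where(query - left < right - query, idx - 1, idx)

# Anchor each LiDAR sweep to its nearest RGB frame, and gather the
# events that fall inside a window around the sweep time.
rgb_idx = nearest(rgb_ts, lidar_ts)
for t, i in zip(lidar_ts[:3], rgb_idx[:3]):
    lo, hi = np.searchsorted(event_ts, [t - 0.05, t + 0.05])
    print(f"lidar@{t:.2f}s -> rgb@{rgb_ts[i]:.2f}s, {hi - lo} events in ±50 ms")
```

In practice, hardware triggering and per-sensor extrinsic calibration are also needed; the nearest-timestamp pairing above only illustrates the temporal half of the alignment problem.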
2. Developing a general domain adaptation framework for 2D optical flow in adverse conditions.
Dr. Zhou formulates adverse-condition optical flow as a domain adaptation task, proposing a cumulative adaptation framework for adverse-weather optical flow
and a common space-guided domain adaptation framework for nighttime optical flow, thus transferring motion knowledge from the clean domain to the degraded domain.
The research outputs have been published in TPAMI 2024, ICLR 2024, CVPR 2023, and AAAI 2023.
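As a generic illustration of clean-to-degraded knowledge transfer (a toy sketch, not the published frameworks), the PyTorch snippet below supervises a flow network on the clean domain and adds a consistency term that aligns its predictions on degraded counterparts of the same frames; the network, losses, and degradation model are all simplified assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyFlowNet(nn.Module):
    """Toy stand-in for a real optical-flow backbone."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(6, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 2, 3, padding=1),  # 2-channel flow (u, v)
        )
    def forward(self, img1, img2):
        return self.net(torch.cat([img1, img2], dim=1))

model = TinyFlowNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)

# Fake batch: a clean frame pair with ground-truth flow, plus a degraded
# (here crudely darkened) version of the same pair.
clean1, clean2 = torch.rand(2, 4, 3, 64, 64)
gt_flow = torch.randn(4, 2, 64, 64)
deg1, deg2 = clean1 * 0.3 + 0.1, clean2 * 0.3 + 0.1

flow_clean = model(clean1, clean2)
flow_deg = model(deg1, deg2)

# Supervision on the clean domain + cross-domain consistency: motion is
# the same in both pairs, so the degraded prediction should match it.
loss = F.l1_loss(flow_clean, gt_flow) \
     + 0.5 * F.l1_loss(flow_deg, flow_clean.detach())
opt.zero_grad(); loss.backward(); opt.step()
print(f"total loss: {loss.item():.4f}")
```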
3. Developing a novel multimodal fusion framework for 3D metric scene flow. To extend 2D relative optical flow to 3D metric scene flow,
Dr. Zhou further proposes an RGB-Event-LiDAR multimodal fusion framework,
which exploits the homogeneous nature shared across modalities for complementary fusion,
achieving accurate metric motion estimation in all-day and all-weather scenes.
The research outputs have been published in CVPR 2024.
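A rough sketch of the fusion idea follows: encode each modality separately, concatenate the features, and regress per-pixel metric 3D motion. The architecture, channel counts, and event/LiDAR representations are hypothetical stand-ins, not the published VisMoFlow design.

```python
import torch
import torch.nn as nn

class ModalityEncoder(nn.Module):
    """One small conv encoder per input modality."""
    def __init__(self, in_ch):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(in_ch, 16, 3, padding=1), nn.ReLU())
    def forward(self, x):
        return self.conv(x)

class FusionSceneFlow(nn.Module):
    def __init__(self):
        super().__init__()
        self.rgb = ModalityEncoder(3)   # RGB image
        self.evt = ModalityEncoder(5)   # event voxel grid (5 time bins)
        self.lid = ModalityEncoder(1)   # projected LiDAR depth map
        self.head = nn.Conv2d(48, 3, 3, padding=1)  # (vx, vy, vz), metric
    def forward(self, rgb, events, depth):
        feats = torch.cat([self.rgb(rgb), self.evt(events), self.lid(depth)], 1)
        return self.head(feats)

model = FusionSceneFlow()
rgb = torch.rand(1, 3, 64, 64)
events = torch.rand(1, 5, 64, 64)
depth = torch.rand(1, 1, 64, 64)
print(model(rgb, events, depth).shape)  # torch.Size([1, 3, 64, 64])
```

The design intuition is that LiDAR anchors the metric scale, events preserve motion cues at night and in high-speed scenes, and RGB contributes dense appearance, so their features complement one another after fusion.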
4. Developing an event-based motion segmentation method for high-speed moving objects.
Since traditional frame-based cameras cannot handle the motion segmentation of high-speed moving objects,
Dr. Zhou introduces an event camera and an Inertial Measurement Unit (IMU),
and proposes an Event-IMU based spatiotemporal reasoning method for moving object detection,
enabling the decoupling of high-speed independently moving objects from the background under ego-motion conditions.
The research outputs have been published in ICRA 2024.
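The snippet below sketches the classic IMU-based motion-compensation intuition behind such methods (simplified assumptions throughout, not the JSTR implementation): warping each event back to a reference time with the gyro rate makes background events collapse into sharp edges, while independently moving objects leave residuals that can be segmented.

```python
import numpy as np

H = W = 128
omega_z = 2.0                  # assumed gyro reading: yaw rate (rad/s)

rng = np.random.default_rng(1)
n = 2000
x = rng.uniform(0, W, n)
y = rng.uniform(0, H, n)
t = rng.uniform(0, 0.01, n)    # 10 ms slice of events

# Small-angle rotational flow at pixel (x, y) for rotation about Z:
# u = -omega_z * (y - cy), v = omega_z * (x - cx)
cx, cy = W / 2, H / 2
u = -omega_z * (y - cy)
v = omega_z * (x - cx)

# Warp events back to t = 0; background events align, movers do not.
xw = np.clip(x - u * t, 0, W - 1).astype(int)
yw = np.clip(y - v * t, 0, H - 1).astype(int)
iwe = np.zeros((H, W))
np.add.at(iwe, (yw, xw), 1)    # image of warped events
print("event image contrast (variance):", iwe.var().round(4))
```

After compensation, pixels whose event density stays high despite the ego-motion warp are candidates for independently moving objects; the published method adds spatiotemporal reasoning on top of this basic idea.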