Sanqing Qu (瞿三清)

Hi there! Welcome to my homepage. I am currently a Ph.D. student in the Intelligent Sensing, Perception and Computing (ISPC) Group led by Prof. Guang Chen at Tongji University, Shanghai, China. Before that, I received my bachelor's degree in Automotive Engineering from Tongji University in 2020.

My research interests include autonomous driving, transfer learning, and video analysis.

Email  /  CV  /  Google Scholar  /  Github

profile photo
News

  • 2024.03 : Our work (GLC++), a substantial extension to GLC, is released!
  • 2024.02 : Our work (LEAD) on source-free universal domain adaptation is accepted by CVPR-2024!
  • 2024.02 : Our work (MAP) on source-free model intellectual property protection is accepted by CVPR-2024!
  • 2023.02 : Our work (GLC) on source-free universal domain adaptation is accepted by CVPR-2023!
  • 2023.02 : Our work (MAD) on single-domain generalization is accepted by CVPR-2023!
  • 2022.07 : Our work (BMD) on source-free domain adaptation is accepted by ECCV-2022!
  • 2022.03 : One paper is accepted by IEEE T-Cyber (IF=19.118)!
  • 2021.03 : Our work (ACM-Net) on weakly-supervised temporal action localization is released!
Education

  • 2020.09 ~ Present : Ph.D. student in Automotive Engineering, Tongji University.
  • 2015.09 ~ 2020.07 : Undergraduate student in Automotive Engineering, Tongji University.

Selected Publications

    * indicates equal contribution

    GLC++: Source-Free Universal Domain Adaptation through Global-Local Clustering and Contrastive Affinity Learning
    Sanqing Qu, Tianpei Zou, Florian Röhrbein, Cewu Lu, Guang Chen, Dacheng Tao, Changjun Jiang
    arXiv preprint, 2024
    [arXiv] [Code]

    Although the simple global and local clustering (GLC) technique achieves commendable performance in separating "known" and "unknown" data, its reliance on pseudo-labeling supervision, in particular the uniform encoding of all "unknown" data, limits its capacity to discriminate among different "unknown" categories. To alleviate this, we promote GLC to GLC++ by developing a new contrastive affinity learning strategy, sidestepping the need for a specialized source model structure. Remarkably, in the most challenging open-partial-set scenarios on VisDA, GLC++ boosts the H-score from 73.1% to 75.0%, and it improves the novel-category clustering accuracy of GLC by 4.3% in open-set scenarios on Office-Home. Furthermore, the introduced contrastive learning strategy not only enhances GLC but also significantly benefits existing methods, e.g., OVANet and UMAD.

    LEAD: Learning Decomposition for Source-free Universal Domain Adaptation
    Sanqing Qu, Tianpei Zou, Lianghua He, Florian Röhrbein, Alois Knoll, Guang Chen, Changjun Jiang
    IEEE / CVF Computer Vision and Pattern Recognition Conference (CVPR), 2024
    [arXiv] [Code]

    Universal Domain Adaptation (UniDA) targets knowledge transfer in the presence of both covariate and label shifts. Recently, Source-free Universal Domain Adaptation (SF-UniDA) has emerged to achieve UniDA without access to source data, which tends to be more practical under data protection policies. The main challenge lies in determining whether covariate-shifted samples belong to target-private unknown categories. Existing methods tackle this either through hand-crafted thresholding or through time-consuming iterative clustering strategies. In this paper, we propose a new idea of LEArning Decomposition (LEAD), which decouples features into source-known and -unknown components to identify target-private data. This solution offers an elegant view for identifying target-private unknown data without tedious threshold tuning or unstable iterative clustering. Remarkably, in the OPDA scenario on VisDA, LEAD attains an H-score of 76.6%, surpassing our GLC by 3.5%. Besides, LEAD is complementary to most existing SF-UniDA methods; for instance, in the OPDA scenario on Office-Home, it improves the H-score of UMAD from 70.1% to 78.0%.

    MAP: MAsk-Pruning for Source-Free Model Intellectual Property Protection
    Boyang Peng*, Sanqing Qu*, Yong Wu, Tianpei Zou, Lianghua He, Alois Knoll, Guang Chen, Changjun Jiang
    IEEE / CVF Computer Vision and Pattern Recognition Conference (CVPR), 2024
    [arXiv] [Code]

    Deep learning has achieved remarkable progress in various applications, heightening the importance of safeguarding the intellectual property (IP) of well-trained models. IP protection entails not only authorizing usage but also ensuring that models are deployed only in authorized data domains, i.e., making models exclusive to certain target domains. Previous methods require concurrent access to source training data and target unauthorized data when performing IP protection, making them risky and inefficient for decentralized private data. In this paper, we target a practical setting where only a well-trained source model is available and investigate how to realize IP protection. To achieve this, we propose a novel MAsk Pruning (MAP) framework. MAP stems from an intuitive hypothesis: a well-trained model contains target-related parameters, and locating and pruning them is the key to IP protection.

    Upcycling Models under Domain and Category Shift
    Sanqing Qu*, Tianpei Zou*, Florian Röhrbein, Cewu Lu, Guang Chen, Dacheng Tao, Changjun Jiang
    IEEE / CVF Computer Vision and Pattern Recognition Conference (CVPR), 2023
    [arXiv] [PDF] [Code] [Slides] [Poster] [Video]

    Deep neural networks (DNNs) often perform poorly in the presence of domain shift and category shift. To address this, in this paper we explore Source-free Universal Domain Adaptation (SF-UniDA). SF-UniDA is appealing in that universal model adaptation can be achieved on the basis of a standard pre-trained closed-set model alone, i.e., without source raw data or a dedicated model architecture. To this end, we develop a generic global and local clustering technique (GLC). GLC is equipped with an innovative one-vs-all global pseudo-labeling strategy to separate "known" and "unknown" data samples under various category shifts. Remarkably, in the most challenging open-partial-set DA scenario, GLC outperforms UMAD by 14.8% on the VisDA benchmark.

    Modality-Agnostic Debiasing for Single Domain Generalization
    Sanqing Qu, Yingwei Pan, Guang Chen, Ting Yao, Changjun Jiang, Tao Mei
    IEEE / CVF Computer Vision and Pattern Recognition Conference (CVPR), 2023
    [arXiv] [PDF] [Slides] [Poster] [Video]

    Existing single-DG techniques commonly devise various data-augmentation algorithms and remould the multi-source domain generalization methodology to learn domain-generalized (semantic) features. Nevertheless, these methods are typically modality-specific and thus applicable to only a single modality (e.g., images). In contrast, we target a versatile Modality-Agnostic Debiasing (MAD) framework for single-DG that enables generalization across different modalities. We have evaluated the effectiveness and superiority of MAD for single-DG with extensive empirical evidence on a series of tasks, including recognition on 1D texts, 2D images, and 3D point clouds, as well as semantic segmentation on 2D images.

    BMD: A General Class-balanced Multicentric Dynamic Prototype Strategy for Source-free Domain Adaptation
    Sanqing Qu, Guang Chen, Jing Zhang, Zhijun Li, Wei He, Dacheng Tao
    European Conference on Computer Vision (ECCV), 2022
    [arXiv] [Code] [Video]

    In this paper, we design a general prototype-based pseudo-labeling strategy. It is model-agnostic and can be applied to existing self-training-based SFDA methods.

    Neuromorphic Vision-based Fall Localization in Event Streams with Temporal–spatial Attention Weighted Network
    Guang Chen*, Sanqing Qu*, Zhijun Li, Haitao Zhu, Jiaxuan Dong, Min Liu, Jörg Conradt
    IEEE Transactions on Cybernetics (T-Cyber), 2022
    [IEEE]  

    In this paper, we propose a bio-inspired, event-camera-based framework for the temporal localization of falls. Specifically, we design an event-density-based action proposal generation scheme and introduce a temporal-spatial attention mechanism for action modeling.

    ACM-Net: Action Context Modeling Network for Weakly-supervised Temporal Action Localization
    Sanqing Qu, Guang Chen, Zhijun Li, Lijun Zhang, Fan Lu, Alois Knoll
    arXiv preprint, 2021
    [arXiv] [Code]

    In this paper, we propose an action-context modeling network termed ACM-Net, which integrates a three-branch attention module to simultaneously measure the likelihood of each temporal point being an action instance, its context, or non-action background.

Honors and Awards

  • 2022, 2021 : The Outstanding Doctoral Student Scholarship of Tongji University
  • 2020 : The Shanghai Outstanding Graduate
  • 2020 : The Second Prize of National Graduate Student Mathematical Modeling Contest
  • 2019 : The BaoGang Scholarship (宝钢教育奖)
  • 2018 : Ranked 4th in the 2018 Carolo-Cup, a German student autonomous-driving competition

Website Template


    © Sanqing Qu | Last updated: Mar 22, 2024