Hi, I am Hongduan Tian, a second-year Ph.D. student in the Trustworthy Machine Learning and Reasoning (TMLR) Group at the Department of Computer Science, Hong Kong Baptist University, advised by Dr. Bo Han and Dr. Feng Liu. Before that, I received my master's degree from Nanjing University of Information Science and Technology (NUIST), where I was fortunately supervised by Prof. Xiao-Tong Yuan and Prof. Qingshan Liu.

Currently, my research interests mainly include few-shot/meta-learning, transfer learning, model/knowledge editing, and LLM agents.

Please feel free to email me about research, collaborations, or just a casual chat.

📣 News

  • $\frak{2024.09}$: Our paper “Mind the gap between prototypes and images in cross-domain finetuning” is accepted by NeurIPS 2024.
  • $\frak{2024.05}$: Our paper “MOKD: Cross-domain finetuning for few-shot classification via maximizing optimized kernel dependence” is accepted by ICML 2024.

📖 Education

  • 2023.09 - present, Hong Kong Baptist University (HKBU), Ph.D. in Computer Science.
  • 2018.09 - 2021.06, Nanjing University of Information Science and Technology (NUIST), M.E. in Control Engineering.
  • 2014.09 - 2018.06, Nanjing University of Information Science and Technology (NUIST), B.E. in Automation.

📝 Publications

✉️ Corresponding author.


[NeurIPS 2024] Mind the Gap Between Prototypes and Images in Cross-domain Finetuning.
[paper] [code] [slides] [poster] [CN-video] [EN-video]
Hongduan Tian, Feng Liu, Zhanke Zhou, Tongliang Liu, Chengqi Zhang, Bo Han✉️.

Quick Introduction: In cross-domain few-shot classification (CFC), recent works mainly adapt a simple transformation head on top of a frozen pre-trained backbone with few labeled data, projecting embeddings into a task-specific metric space where classification is performed by measuring similarities between image instance and prototype representations. Technically, such a framework implicitly assumes that prototype and image instance embeddings share the same representation transformation. In this paper, however, we find that a gap, resembling the modality gap, naturally exists between the prototype and image instance embeddings extracted from the frozen pre-trained backbone, and that simply applying the same transformation during the adaptation phase constrains the exploration of optimal representation distributions and shrinks the gap between prototype and image representations.

To solve this problem, we propose a simple yet effective method, contrastive prototype-image adaptation (CoPA), which adapts different transformations for prototypes and images, similarly to CLIP, by treating prototypes as text prompts.

Extensive experiments on Meta-Dataset demonstrate that CoPA achieves state-of-the-art performance more efficiently. Further analyses also indicate that CoPA learns better representation clusters, enlarges the gap, and achieves the minimum validation loss at the enlarged gap.
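
As a rough illustration of the idea (a minimal sketch, not the official implementation: the single linear heads and the names `image_head`/`proto_head` are simplifying assumptions), CoPA can be thought of as replacing the shared adaptation head with two separate ones, trained with a CLIP-style contrastive objective between image instances and class prototypes:

```python
import torch
import torch.nn.functional as F

class CoPAStyleHeads(torch.nn.Module):
    """Separate transformations for image instances and class prototypes."""
    def __init__(self, dim: int):
        super().__init__()
        self.image_head = torch.nn.Linear(dim, dim)  # transforms image embeddings
        self.proto_head = torch.nn.Linear(dim, dim)  # transforms prototype embeddings

    def forward(self, img_feats: torch.Tensor, protos: torch.Tensor):
        z_img = F.normalize(self.image_head(img_feats), dim=-1)
        z_pro = F.normalize(self.proto_head(protos), dim=-1)
        return z_img, z_pro

def contrastive_adaptation_loss(z_img, z_pro, labels, temperature=0.07):
    # CLIP-style matching: each image instance should be most similar to
    # the (separately transformed) prototype of its own class.
    logits = z_img @ z_pro.t() / temperature  # [n_images, n_classes]
    return F.cross_entropy(logits, labels)
```

Here the prototypes play the role of CLIP's text prompts, so the two embedding sets are free to settle into different distributions rather than being forced through one shared transformation.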

[ICML 2024] MOKD: Cross-domain Finetuning for Few-shot Classification via Maximizing Optimized Kernel Dependence.
[paper] [code] [slides] [poster] [CN-video] [EN-video]
Hongduan Tian, Feng Liu, Tongliang Liu, Bo Du, Yiu-ming Cheung, Bo Han✉️.

Quick Introduction: In cross-domain few-shot classification, the nearest centroid classifier (NCC) learns representations that form a metric space where few-shot classification can be performed by measuring the similarities between samples and the prototype of each class. The intuition behind NCC is that each sample is pulled toward the centroid of its own class while being pushed away from the centroids of other classes. However, in this paper, we find that NCC-learned representations of samples from different classes can still exhibit high similarities.

To address this problem, we propose a bi-level optimization framework, maximizing optimized kernel dependence (MOKD), to learn a set of class-specific representations that match the cluster structures indicated by the labeled data of the given task. Specifically, MOKD first optimizes the kernel adopted in the Hilbert-Schmidt independence criterion (HSIC) to obtain the optimized kernel HSIC (opt-HSIC), which captures the dependence more precisely. Then, an optimization problem over the opt-HSIC is solved to simultaneously maximize the dependence between representations and labels and minimize the dependence among all samples.

Extensive experiments on Meta-Dataset demonstrate that MOKD can not only achieve better generalization performance on unseen domains in most cases but also learn better data representation clusters.
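
For reference, the standard (biased) HSIC estimator that MOKD builds on can be written in a few lines; below is a minimal sketch, where the Gaussian-kernel bandwidth `sigma` stands in for the kernel parameters optimized in the inner level, and `gamma` is an illustrative trade-off weight rather than the paper's exact formulation:

```python
import torch

def gaussian_kernel(x: torch.Tensor, sigma: float = 1.0) -> torch.Tensor:
    # Pairwise Gaussian (RBF) kernel matrix over the rows of x.
    d2 = torch.cdist(x, x).pow(2)
    return torch.exp(-d2 / (2 * sigma ** 2))

def hsic(K: torch.Tensor, L: torch.Tensor) -> torch.Tensor:
    # Biased HSIC estimator: trace(K H L H) / (n - 1)^2,
    # where H = I - (1/n) 11^T is the centering matrix.
    n = K.shape[0]
    H = torch.eye(n, device=K.device) - torch.ones(n, n, device=K.device) / n
    return torch.trace(K @ H @ L @ H) / (n - 1) ** 2

def mokd_style_objective(z, y_onehot, sigma=1.0, gamma=0.1):
    # Maximize dependence between representations and labels while
    # minimizing dependence among all sample representations.
    Kz = gaussian_kernel(z, sigma)
    Ky = y_onehot.float() @ y_onehot.float().t()  # linear kernel on one-hot labels
    return hsic(Kz, Ky) - gamma * hsic(Kz, Kz)
```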

[ECCV 2020] Meta-Learning with Network Pruning.
[paper] [code]
Hongduan Tian✉️, Bo Liu, Xiao-Tong Yuan✉️, Qingshan Liu.

Quick Introduction: Meta-learning is a powerful paradigm for few-shot learning. Despite remarkable success in many applications, existing optimization-based meta-learning models with over-parameterized neural networks have been shown to overfit on training tasks.

To remedy this deficiency, we propose a network-pruning-based meta-learning approach that reduces overfitting by explicitly controlling the capacity of the network. A uniform concentration analysis reveals the benefit of the capacity constraint for reducing the generalization gap of the proposed meta-learner. We implement our approach on top of Reptile, combined with two network pruning routines: Dense-Sparse-Dense (DSD) and Iterative Hard Thresholding (IHT).

Extensive experimental results on benchmark datasets with different over-parameterized deep networks demonstrate that our method not only effectively alleviates meta-overfitting but also, in many cases, improves the overall generalization performance on few-shot classification tasks.
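
As a sketch of how IHT-style pruning slots into a Reptile outer loop (under simplifying assumptions: a fixed sparsity level, a hypothetical `task.loss` API, and none of the DSD re-densification or scheduling details), one outer iteration might look like:

```python
import copy
import torch

def hard_threshold(p: torch.Tensor, sparsity: float) -> torch.Tensor:
    # Keep the largest-magnitude (1 - sparsity) fraction of weights; zero the rest.
    k = max(1, int(p.numel() * (1.0 - sparsity)))
    thresh = p.abs().flatten().topk(k).values[-1]
    return p * (p.abs() >= thresh).float()

def reptile_with_iht(model, tasks, inner_steps=5, inner_lr=1e-2,
                     meta_lr=0.1, sparsity=0.5):
    for task in tasks:
        # Inner loop: adapt a copy of the meta-parameters to the task.
        fast = copy.deepcopy(model)
        opt = torch.optim.SGD(fast.parameters(), lr=inner_lr)
        for _ in range(inner_steps):
            loss = task.loss(fast)  # hypothetical per-task loss API
            opt.zero_grad()
            loss.backward()
            opt.step()
        # Outer loop: Reptile update toward adapted weights, then prune.
        with torch.no_grad():
            for p, q in zip(model.parameters(), fast.parameters()):
                p += meta_lr * (q - p)                 # Reptile meta-update
                p.copy_(hard_threshold(p, sparsity))   # IHT capacity control
```

The pruning step after each meta-update is what caps the effective network capacity, which is the quantity the concentration analysis ties to the generalization gap.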

🎖 Awards

  • 2024.11, Research Performance Award, HKBU CS Department.
  • 2024.10, NeurIPS Scholar Award.

💻 Services

  • Conference Reviewer for ICML (22-24), NeurIPS (22-24), ICLR (22-25), AISTATS (25).
  • Journal Reviewer for TPAMI, TNNLS, TMLR, NEUNET.

🏫 Teaching

  • 2024 Fall, TA for COMP7180: Quantitative Methods for Data Analytics and Artificial Intelligence.
  • 2024 Spring, TA for COMP7940: Cloud Computing.

📖 Academic Experiences

  • 2023.09 - present, Ph.D. student @HKBU-TMLR Group, advised by Dr. Bo Han.
  • 2022.07 - 2023.05, Research intern @HKBU-TMLR Group, advised by Dr. Bo Han and Dr. Feng Liu.

🏢 Industrial Experiences

  • 2022.07 - present, Research Intern @NVIDIA NVAITC, hosted by Charles Cheung.
  • 2024.06 - 2024.08, Research Intern @WeChat.
  • 2023.07 - 2023.08, Research Intern @Alibaba.
  • 2021.07 - 2022.07, Algorithm Engineer @ZTE Nanjing Research and Development Center.