<template>
  <div class="container mt-5">
    <div class="post">
      <header class="post-header">
        <h1 class="post-title">Research</h1>
        <p class="post-description"></p>
      </header>
      <article>
        <div style="text-align: center">
          <img src="../assets/img/research.png" style="width: 85%" alt="Research overview" />
        </div>
        <p>
          Our long-term research goal is to build robust models for modern AI,
          such as pre-trained models and large models. We develop new theory,
          algorithms, applications, and open-source libraries to achieve this
          goal. Currently, we are particularly interested in the robustness of
          large language models (LLMs).
        </p>
        <p>
          Our research covers the following topics, each with selected
          publications: [<a href="#" target="_blank" rel="noopener noreferrer"
            >View by year</a
          >]
        </p>
        <h5 id="new-large-models">New: large models</h5>
        <ul>
          <li>
            <strong>[arXiv’23]</strong>
            <a href="#" target="_blank" rel="noopener noreferrer"
              >PromptBench: Towards Evaluating the Robustness of Large Language
              Models on Adversarial Prompts</a
            >. Kaijie Zhu, Jindong Wang, Jiaheng Zhou, Zichen Wang, Hao Chen,
            Yidong Wang, Linyi Yang, Wei Ye, Neil Zhenqiang Gong, Yue Zhang,
            Xing Xie. [<a href="#" target="_blank" rel="noopener noreferrer"
              >code</a
            >]
          </li>
          <li>
            <strong>[arXiv’23]</strong>
            <a href="#" target="_blank" rel="noopener noreferrer"
              >PandaLM: An Automatic Evaluation Benchmark for LLM Instruction
              Tuning Optimization</a
            >. Yidong Wang, Zhuohao Yu, Zhengran Zeng, Linyi Yang, Cunxiang
            Wang, Hao Chen, Chaoya Jiang, Rui Xie, Jindong Wang, Xing Xie, Wei
            Ye, Shikun Zhang, Yue Zhang. [<a
              href="#"
              target="_blank"
              rel="noopener noreferrer"
              >code</a
            >]
          </li>
          <li>
            <strong>[ICLR’23 large model workshop]</strong>
            <a href="#" target="_blank" rel="noopener noreferrer"
              >On the Robustness of ChatGPT: An Adversarial and
              Out-of-distribution Perspective</a
            >. Jindong Wang, Xixu Hu, Wenxin Hou, Hao Chen, Runkai Zheng, Yidong
            Wang, Linyi Yang, Haojun Huang, Wei Ye, Xiubo Geng, Binxin Jiao, Yue
            Zhang, and Xing Xie.
          </li>
          <li>
            <strong>[arXiv’23]</strong>
            <a href="#" target="_blank" rel="noopener noreferrer"
              >Exploring Vision-Language Models for Imbalanced Learning</a
            >. Yidong Wang, Zhuohao Yu, Jindong Wang, Qiang Heng, Hao Chen, Wei
            Ye, Rui Xie, Xing Xie, Shikun Zhang. [<a
              href="#"
              target="_blank"
              rel="noopener noreferrer"
              >code</a
            >]
          </li>
        </ul>
        <h5
          id="out-of-distribution-domain-generalization-and-adaptation-for-distribution-shift"
        >
          Out-of-distribution (Domain) generalization and adaptation for
          distribution shift
        </h5>
        <ul>
          <li>
            <strong>[ICLR’23]</strong>
            <a href="#" target="_blank" rel="noopener noreferrer"
              >Out-of-distribution Representation Learning for Time Series
              Classification</a
            >. Wang Lu, Jindong Wang, Xinwei Sun, Yiqiang Chen, and Xing Xie.
          </li>
          <li>
            <strong>[KDD’23]</strong>
            <a href="#" target="_blank" rel="noopener noreferrer"
              >Domain-Specific Risk Minimization for Out-of-Distribution
              Generalization</a
            >. YiFan Zhang, Jindong Wang, Jian Liang, Zhang Zhang, Baosheng Yu,
            Liang Wang, Xing Xie, and Dacheng Tao.
          </li>
          <li>
            <strong>[KDD’23]</strong>
            <a href="#"
              >Generalizable Low-Resource Activity Recognition with Diverse and
              Discriminative Representation Learning</a
            >. Xin Qin, Jindong Wang, Shuo Ma, Wang Lu, Yongchun Zhu, Xing Xie,
            Yiqiang Chen.
          </li>
          <li>
            <strong>[ACL’23 Findings]</strong>
            <a href="#" target="_blank" rel="noopener noreferrer"
              >GLUE-X: Evaluating Natural Language Understanding Models from an
              Out-of-distribution Generalization Perspective</a
            >. Linyi Yang, Shuibai Zhang, Libo Qin, Yafu Li, Yidong Wang,
            Hanmeng Liu, Jindong Wang, Xing Xie, Yue Zhang.
          </li>
          <li>
            <strong>[TKDE’22]</strong>
            <a href="#" target="_blank" rel="noopener noreferrer"
              >Generalizing to Unseen Domains: A Survey on Domain
              Generalization</a
            >. Jindong Wang, Cuiling Lan, Chang Liu, Yidong Ouyang, Tao Qin,
            Wang Lu, Yiqiang Chen, Wenjun Zeng, and Philip S. Yu.
          </li>
          <li>
            <strong>[TMLR’22]</strong>
            <a href="#" target="_blank" rel="noopener noreferrer"
              >Domain-invariant Feature Exploration for Domain Generalization</a
            >. Wang Lu, Jindong Wang, Haoliang Li, Yiqiang Chen, and Xing Xie.
          </li>
          <li>
            <strong>[UbiComp’22]</strong>
            <a href="#" target="_blank" rel="noopener noreferrer"
              >Semantic-Discriminative Mixup for Generalizable Sensor-based
              Cross-domain Activity Recognition</a
            >. Wang Lu, Jindong Wang, Yiqiang Chen, Sinno Pan, Chunyu Hu, and
            Xin Qin.
          </li>
          <li>
            <strong>[NeurIPS’21]</strong>
            <a href="#" target="_blank" rel="noopener noreferrer"
              >Learning causal semantic representation for out-of-distribution
              prediction</a
            >. Chang Liu, Xinwei Sun, Jindong Wang, Haoyue Tang, Tao Li, Tao
            Qin, Wei Chen, and Tie-Yan Liu.
          </li>
          <li>
            <strong>[CIKM’21]</strong>
            <a href="#" target="_blank" rel="noopener noreferrer"
              >AdaRNN: Adaptive learning and forecasting of time series</a
            >. Yuntao Du, Jindong Wang, Wenjie Feng, Sinno Pan, Tao Qin, Renjun
            Xu, and Chongjun Wang.
          </li>
          <li>
            <strong>[TNNLS’20, 300+ citations]</strong>
            <a href="#" target="_blank" rel="noopener noreferrer"
              >Deep subdomain adaptation network for image classification</a
            >. Yongchun Zhu, Fuzhen Zhuang, Jindong Wang, Guolin Ke, Jingwu
            Chen, Jiang Bian, Hui Xiong, and Qing He.
          </li>
          <li>
            <strong>[ACMMM’18, 400+ citations]</strong>
            <a href="#" target="_blank" rel="noopener noreferrer"
              >Visual domain adaptation with manifold embedded distribution
              alignment</a
            >. Jindong Wang, Wenjie Feng, Yiqiang Chen, Han Yu, Meiyu Huang, and
            Philip S Yu.
          </li>
          <li>
            <strong>[ICDM’17, 400+ citations]</strong>
            <a href="#" target="_blank" rel="noopener noreferrer"
              >Balanced distribution adaptation for transfer learning</a
            >. Jindong Wang, Yiqiang Chen, Shuji Hao, Wenjie Feng, and Zhiqi
            Shen.
          </li>
          <li>
            Open-source:
            <ul>
              <li>
                <a href="#" target="_blank" rel="noopener noreferrer"
                  >Transfer learning</a
                >
                <a href="#" target="_blank" rel="noopener noreferrer"
                  ><img
                    src="https://img.shields.io/github/stars/jindongwang/transferlearning?style=social"
                    alt="Transfer learning repo"
                /></a>
              </li>
              <li>
                robustlearn: A unified repo for robust machine learning, such as
                OOD and adversarial robustness:
                <a href="#" target="_blank" rel="noopener noreferrer"
                  >robustlearn</a
                >
                <a href="#" target="_blank" rel="noopener noreferrer"
                  ><img
                    src="https://img.shields.io/github/stars/microsoft/robustlearn?style=social"
                    alt="robustlearn"
                /></a>
              </li>
              <li>
                PandaLM:
                <a href="#" target="_blank" rel="noopener noreferrer"
                  >PandaLM</a
                >
                <a href="#" target="_blank" rel="noopener noreferrer"
                  ><img
                    src="https://img.shields.io/github/stars/WeOpenML/PandaLM?style=social"
                    alt="robustlearn"
                /></a>
              </li>
            </ul>
          </li>
        </ul>
        <h5 id="semi-supervised-learning-for-low-resource-learning">
          Semi-supervised learning for low-resource learning
        </h5>
        <ul>
          <li>
            <strong>[ICLR’23]</strong>
            <a href="#" target="_blank" rel="noopener noreferrer"
              >FreeMatch: Self-adaptive Thresholding for Semi-supervised
              Learning</a
            >. Yidong Wang, Hao Chen, Qiang Heng, Wenxin Hou, Yue Fan, Zhen Wu,
            Jindong Wang, Marios Savvides, Takahiro Shinozaki, Bhiksha Raj,
            Bernt Schiele, and Xing Xie.
          </li>
          <li>
            <strong>[ICLR’23]</strong>
            <a href="#" target="_blank" rel="noopener noreferrer"
              >SoftMatch: Addressing the Quantity-Quality Tradeoff in
              Semi-supervised Learning</a
            >. Hao Chen, Ran Tao, Yue Fan, Yidong Wang, Jindong Wang, Bernt
            Schiele, Xing Xie, Bhiksha Raj, and Marios Savvides.
          </li>
          <li>
            <strong>[NeurIPS’22]</strong>
            <a href="#" target="_blank" rel="noopener noreferrer"
              >USB: A Unified Semi-supervised Learning Benchmark</a
            >. Yidong Wang, Hao Chen, Yue Fan, Wang Sun, Ran Tao, Wenxin Hou,
            Renjie Wang, Linyi Yang, Zhi Zhou, Lan-Zhe Guo, Heli Qi, Zhen Wu,
            Yu-Feng Li, Satoshi Nakamura, Wei Ye, Marios Savvides, Bhiksha Raj,
            Takahiro Shinozaki, Bernt Schiele, Jindong Wang, Xing Xie, and Yue
            Zhang.
          </li>
          <li>
            <strong>[TASLP’22]</strong>
            <a href="#" target="_blank" rel="noopener noreferrer"
              >Exploiting Adapters for Cross-lingual Low-resource Speech
              Recognition</a
            >. Wenxin Hou, Han Zhu, Yidong Wang, Jindong Wang, Tao Qin, Renjun
            Xu, and Takahiro Shinozaki.
          </li>
          <li>
            <strong>[NeurIPS’21, 200+ citations]</strong>
            <a href="#" target="_blank" rel="noopener noreferrer"
              >Flexmatch: Boosting semi-supervised learning with curriculum
              pseudo labeling</a
            >. Bowen Zhang, Yidong Wang, Wenxin Hou, Hao Wu, Jindong Wang,
            Manabu Okumura, and Takahiro Shinozaki.
          </li>
          <li>
            Open-source:
            <ul>
              <li>
                USB: A unified semi-supervised learning toolbox for CV, NLP, and
                Audio:
                <a href="#" target="_blank" rel="noopener noreferrer">USB</a>
                <a href="#" target="_blank" rel="noopener noreferrer"
                  ><img
                    src="https://img.shields.io/github/stars/microsoft/semi-supervised-learning?style=social"
                    alt="USB"
                /></a>
              </li>
              <li>
                A unified PyTorch-based semi-supervised learning library: <a
                  href="#"
                  target="_blank"
                  rel="noopener noreferrer"
                  >TorchSSL</a
                >
                <a href="#" target="_blank" rel="noopener noreferrer"
                  ><img
                    src="https://img.shields.io/github/stars/torchssl/torchssl?style=social"
                    alt="SSL repo"
                /></a>
              </li>
            </ul>
          </li>
        </ul>
        <h5 id="safe-transfer-learning-for-security">
          Safe transfer learning for security
        </h5>
        <ul>
          <li>
            <strong>[ICSE’22]</strong>
            <a href="#" target="_blank" rel="noopener noreferrer"
              >ReMoS: Reducing Defect Inheritance in Transfer Learning via
              Relevant Model Slicing</a
            >. Ziqi Zhang, Yuanchun Li, Jindong Wang, Bingyan Liu, Ding Li,
            Xiangqun Chen, Yao Guo, and Yunxin Liu.
          </li>
          <li>
            <strong>[IEEE TBD’22]</strong>
            <a href="#" target="_blank" rel="noopener noreferrer"
              >Personalized Federated Learning with Adaptive Batchnorm for
              Healthcare</a
            >. Wang Lu, Jindong Wang, Yiqiang Chen, Xin Qin, Renjun Xu,
            Dimitrios Dimitriadis, and Tao Qin.
          </li>
          <li>
            <strong>[TKDE’22]</strong>
            <a href="#" target="_blank" rel="noopener noreferrer"
              >Unsupervised deep anomaly detection for multi-sensor time-series
              signals</a
            >. Yuxin Zhang, Yiqiang Chen, Jindong Wang, and Zhiwen Pan.
          </li>
          <li>
            <strong>[IntSys’22, 400+ citations]</strong>
            <a href="#" target="_blank" rel="noopener noreferrer"
              >Fedhealth: A federated transfer learning framework for wearable
              healthcare</a
            >. Yiqiang Chen, Xin Qin, Jindong Wang, Chaohui Yu, and Wen Gao.
          </li>
          <li>
            Open-source:
            <ul>
              <li>
                PersonalizedFL: a personalized federated learning library:
                <a href="#" target="_blank" rel="noopener noreferrer"
                  >PersonalizedFL</a
                >
                <a href="#" target="_blank" rel="noopener noreferrer"
                  ><img
                    src="https://img.shields.io/github/stars/microsoft/personalizedfl?style=social"
                    alt="PersonalizedFL"
                /></a>
              </li>
              <li>
                robustlearn: A unified repo for robust machine learning, such as
                OOD and adversarial robustness:
                <a href="#" target="_blank" rel="noopener noreferrer"
                  >robustlearn</a
                >
                <a href="#" target="_blank" rel="noopener noreferrer"
                  ><img
                    src="https://img.shields.io/github/stars/microsoft/robustlearn?style=social"
                    alt="robustlearn"
                /></a>
              </li>
            </ul>
          </li>
        </ul>
        <h5 id="imbalanced-learning-for-long-tailed-tasks">
          Imbalanced learning for long-tailed tasks
        </h5>
        <ul>
          <li>
            <strong>[arXiv’23]</strong>
            <a href="#" target="_blank" rel="noopener noreferrer"
              >Exploring Vision-Language Models for Imbalanced Learning</a
            >. Yidong Wang, Zhuohao Yu, Jindong Wang, Qiang Heng, Hao Chen, Wei
            Ye, Rui Xie, Xing Xie, Shikun Zhang.
          </li>
          <li>
            <strong>[ACML’22]</strong>
            <a href="#" target="_blank" rel="noopener noreferrer"
              >Margin Calibration for Long-Tailed Visual Recognition</a
            >. Yidong Wang, Bowen Zhang, Wenxin Hou, Zhen Wu, Jindong Wang, and
            Takahiro Shinozaki.
          </li>
          <li>
            Open-source:
            <ul>
              <li>
                Imbalance-VLM: a library for imbalanced learning in
                vision-language models. [<a
                  href="#"
                  target="_blank"
                  rel="noopener noreferrer"
                  >Imbalance-VLM</a
                >]
              </li>
            </ul>
          </li>
        </ul>
        <h5 id="miscellaneous">Miscellaneous</h5>
        <ol>
          <li>
            An easy-to-use speech recognition toolkit based on ESPnet:
            <a href="#" target="_blank" rel="noopener noreferrer">EasyESPNet</a>
          </li>
          <li>
            Leading the transfer learning tutorial (<em>迁移学习简明手册</em>,
            "A Concise Handbook of Transfer Learning") on GitHub:
            <a href="#" target="_blank" rel="noopener noreferrer">Tutorial</a>
          </li>
          <li>
            I also lead other popular research projects:
            <a href="#" target="_blank" rel="noopener noreferrer"
              >Machine learning</a
            >,
            <a href="#" target="_blank" rel="noopener noreferrer"
              >Activity recognition</a
            >
          </li>
          <li>
            I founded a software studio, <em>Pivot Studio</em>, which built
            many applications from 2010 to 2014:
            <img src="../assets/img/logo.png" width="100" alt="Pivot Studio logo" />
            <a href="#" target="_blank" rel="noopener noreferrer"
              >Our applications</a
            >
          </li>
        </ol>
      </article>
    </div>
  </div>
</template>

<script>
export default {};
</script>

<style>
</style>