Short Bio

I am a senior research scientist at Google DeepMind, working on large language models (LLMs). I completed my Ph.D. at the University of Illinois Urbana-Champaign (UIUC), advised by Prof. Jiawei Han.

What's new!

June 2025 - The Gemini 2.5 technical report is now publicly available. My contributions center on enhancing Gemini's tool-use capabilities.

Jan. 2025 - Two papers on Reward Model Training and Multi-turn Iterative Preference Optimization are accepted to ICLR 2025.

Jan. 2025 - One paper on LLM-enhanced Hierarchical Text Classification is accepted to WebConf 2025.

Jan. 2025 - One paper on Listwise Preference Optimization is accepted to NAACL 2025.

Oct. 2024 - One paper on Multilingual Fine-grained News Headline Hallucination Detection is accepted to EMNLP 2024 (Findings).

May 2024 - Two papers on (1) Explanation-enhanced LLM In-context Learning and (2) Text Preference Prediction are accepted to ACL 2024 (Main).

May 2024 - One paper on LLM Distillation is accepted to ACL 2024 (Findings).

May 2024 - One paper on Knowledge Distillation with Perturbed Loss is accepted to KDD 2024.

May 2024 - One paper on LLM-based Text Ranking is accepted to NAACL 2024.

Sept. 2023 - One paper on LLM-based Attributed Training Data Generation is accepted to the NeurIPS 2023 Datasets and Benchmarks Track.

Area of Interests

My primary research interests include:

  • Large Language Models
  • Reinforcement Learning
  • Multi-agent Systems

I have also worked on:

  • Taxonomy / Knowledge Graph Construction
  • Semantic Search/Ranking
  • Topic Modeling

Contact

Email: mickeysjm[at]gmail.com