👨‍💻 About Me
I'm an incoming Ph.D. student in Computer Science at Nanyang Technological University (NTU), fortunate to be advised by Prof. Wenya Wang. Prior to this, I received my B.Eng. in Software Engineering from Tianjin University (TJU) and my M.Eng. in Computer Science from Tsinghua University (THU).
Along the way, I've had the privilege of working with some amazing researchers and mentors. I was a research intern at the University of Illinois Urbana-Champaign (UIUC) with Prof. Jiaxuan You, which turned out to be one of the most rewarding and enjoyable experiences I've had. I also interned as a research assistant at The Hong Kong University of Science and Technology (HKUST-GZ), where I had the pleasure of collaborating with Prof. Chengwei Qin. In addition, I spent over a year as a research intern at Microsoft Research Asia (MSRA).
Research Interests: My early research mainly focused on Graph Data Mining and AI4Sec, and you may find some of my representative work here: TFE-GNN@WWW'23, TGSL@CIKM'23, and MH-Net@AAAI'25.
Currently, I'm focusing on Large Language Models (LLMs) and their synergy with Graphs. Some of my recent works include GoR@ACL'25 Main, Router-R1, and FusionBench.
💬 Feel free to reach out via wazhz14 [AT] gmail [DOT] com. I'm always open to discussion and collaboration!
🔥 News
[2025.07] 🎉🎉 Introducing FusionBench: a systematic framework for fusing LLM capabilities from routing data across query, thought, and model levels.
[2025.06] 🎉🎉 Released Router-R1: an RL-driven, multi-round LLM router that balances accuracy, cost, and efficiency.
[2025.05] 🎉🎉 One paper (GoR) was accepted to the ACL 2025 main conference. See you in Vienna!
[2024.12] 🎉🎉 One paper (MH-Net) was accepted to AAAI 2025.
📝 Selected Publications [Full List]
(* denotes equal contribution)
Preprints
Fusing LLM Capabilities with Routing Data
Tao Feng*, Haozhen Zhang*, Zijie Lei, Pengrui Han, Mostofa Patwary, Mohammad Shoeybi, Bryan Catanzaro, and Jiaxuan You
arXiv:2507.10540, 2025
[arXiv]   [Code]
Router-R1: Teaching LLMs Multi-Round Routing and Aggregation via Reinforcement Learning
Haozhen Zhang, Tao Feng, and Jiaxuan You
arXiv:2506.09033, 2025
[arXiv]   [Code]
Conference Papers
Graph of Records: Boosting Retrieval Augmented Generation for Long-context Summarization with Graphs
Haozhen Zhang, Tao Feng, and Jiaxuan You
The 63rd Annual Meeting of the Association for Computational Linguistics (ACL Main), 2025
[arXiv]   [Code]
Revolutionizing Encrypted Traffic Classification with MH-Net: A Multi-View Heterogeneous Graph Model
Haozhen Zhang*, Haodong Yue*, Xi Xiao, Le Yu, Qing Li, Zhen Ling, and Ye Zhang
The 39th Annual AAAI Conference on Artificial Intelligence (AAAI), 2025
[arXiv]   [Code]
Time-aware Graph Structure Learning via Sequence Prediction on Temporal Graphs
Haozhen Zhang, Xueting Han, Xi Xiao, and Jing Bai
International Conference on Information and Knowledge Management (CIKM), 2023
[arXiv]   [Code]
TFE-GNN: A Temporal Fusion Encoder Using Graph Neural Networks for Fine-grained Encrypted Traffic Classification
Haozhen Zhang, Le Yu, Xi Xiao, Qing Li, Francesco Mercaldo, Xiapu Luo, and Qixu Liu
The Web Conference (WWW), 2023
[arXiv]   [Code]
📖 Education
Present: Ph.D. Student in Computer Science, Nanyang Technological University (NTU).
M.Eng. in Computer Science, Tsinghua University (THU).
B.Eng. in Software Engineering, Tianjin University (TJU).
💻 Professional Experience
2025.05 - 2025.08, The Hong Kong University of Science and Technology (HKUST-GZ) - Research Assistant.
2024.06 - 2025.06, University of Illinois Urbana-Champaign (UIUC) - Affiliated Researcher.
2022.11 - 2024.02, Microsoft Research Asia (MSRA) - Research Intern.
🤝 Professional Service
Reviewer:
AAAI 2026
NeurIPS 2025
KDD 2024, ICLR 2024
🎯 Miscellaneous
I really enjoy playing badminton 🏸! I was on either the university team or the departmental team during both undergrad and grad school.
I also trained in table tennis 🏓 as a kid, though it's been quite a few years since I last picked up a paddle…
✉️ Contact
Email: wazhz14 [AT] gmail [DOT] com
Feel free to reach out; I'm always open to discussion and collaboration!