Research
I study efficient and flexible sequence models.
* indicates equal contribution.
|
Dream 7B: Scalable Diffusion Language Models
Jiacheng Ye*,
Zhihui Xie*,
Lin Zheng*,
Jiahui Gao*,
Zirui Wu,
Xin Jiang,
Zhenguo Li,
Lingpeng Kong
2025
blog /
code
|
EvaByte: Efficient Byte-level Language Models at Scale
Lin Zheng,
Xueliang Zhao,
Guangtao Wang,
Chen Wu,
David Dong,
Angela Wang,
Mingran Wang,
Yun Du,
Haige Bo,
Amol Sharma,
Bo Li,
Kejie Zhang,
Changran Hu,
Urmish Thakker,
Lingpeng Kong
2025
blog /
code
|
Beyond Autoregression: Discrete Diffusion for Complex Reasoning and Planning
Jiacheng Ye,
Jiahui Gao,
Shansan Gong,
Lin Zheng,
Xin Jiang,
Zhenguo Li,
Lingpeng Kong
ICLR, 2025
paper /
code
|
Scaling Diffusion Language Models via Adaptation from Autoregressive Models
Shansan Gong*,
Shivam Agarwal*,
Yizhe Zhang,
Jiacheng Ye,
Lin Zheng,
Mukai Li,
Chenxin An,
Peilin Zhao,
Wei Bi,
Hao Peng,
Jiawei Han,
Lingpeng Kong
ICLR, 2025
paper /
code
|
Diffusion of Thoughts: Chain-of-Thought Reasoning in Diffusion Language Models
Jiacheng Ye*,
Shansan Gong*,
Liheng Chen*,
Lin Zheng,
Jiahui Gao,
Han Shi,
Chuan Wu,
Zhenguo Li,
Wei Bi,
Lingpeng Kong
NeurIPS, 2024
paper /
code
|
Self-Infilling Code Generation
Lin Zheng,
Jianbo Yuan,
Zhi Zhang,
Hongxia Yang,
Lingpeng Kong
ICML, 2024
paper /
code
|
A Reparameterized Discrete Diffusion Model for Text Generation
Lin Zheng,
Jianbo Yuan,
Lei Yu,
Lingpeng Kong
COLM, 2024
paper /
code
|
Retrieved Sequence Augmentation for Protein Representation Learning
Chang Ma,
Haiteng Zhao,
Lin Zheng,
Jiayi Xin,
Qintong Li,
Lijun Wu,
Zhihong Deng,
Yang Lu,
Qi Liu,
Lingpeng Kong
arXiv preprint, 2023
paper /
code
|
Attentive Multi-Layer Perceptron for Non-autoregressive Generation
Shuyang Jiang,
Jun Zhang,
Jiangtao Feng,
Lin Zheng,
Lingpeng Kong
ECML/PKDD, 2023
paper /
code
|
Linear Attention via Orthogonal Memory
Jun Zhang,
Shuyang Jiang,
Jiangtao Feng,
Lin Zheng,
Lingpeng Kong
arXiv preprint, 2023
paper
|
CAB: Comprehensive Attention Benchmarking on Long Sequence Modeling
Jun Zhang,
Shuyang Jiang,
Jiangtao Feng,
Lin Zheng,
Lingpeng Kong
ICML, 2023
paper /
code
|
Efficient Attention via Control Variates
Lin Zheng,
Jianbo Yuan,
Chong Wang,
Lingpeng Kong
ICLR, 2023 (oral)
paper /
code
|
Linear Complexity Randomized Self-attention Mechanism
Lin Zheng,
Chong Wang,
Lingpeng Kong
ICML, 2022
paper /
code
|
Ripple Attention for Visual Perception with Sub-quadratic Complexity
Lin Zheng,
Huijie Pan,
Lingpeng Kong
ICML, 2022
paper
|
Cascaded Head-colliding Attention
Lin Zheng,
Zhiyong Wu,
Lingpeng Kong
ACL, 2021
paper /
code
|
Generative Semantic Hashing Enhanced via Boltzmann Machines
Lin Zheng,
Qinliang Su,
Dinghan Shen,
Changyou Chen
ACL, 2020
paper /
code
|
Teaching
Spring 2022 and Spring 2023: TA for COMP3314B Machine Learning
|
Service
Reviewer: ACL 2021, NAACL 2021, ICML 2022-2024, NeurIPS 2022-2023, etc.