PhD in Computer Science
Rice University, 2020
Position: Exploring the Robustness of Pipeline-Parallelism-Based Decentralized Training (Conference paper)
Serving Deep Learning Models from Relational Databases (Conference paper)
Auto-Differentiation of Relational Computations for Very Large Scale Machine Learning (Conference paper)
CocktailSGD: Fine-tuning Foundation Models over 500Mbps Networks (Conference paper)
Deja Vu: Contextual Sparsity for Efficient LLMs at Inference Time (Conference paper)
High-throughput Generative Inference of Large Language Models with a Single GPU (Conference paper)
Decentralized Training of Foundation Models in Heterogeneous Environments (Conference paper)
Distributed Learning of Fully Connected Neural Networks using Independent Subnet Training (Conference paper)
Efficient flow scheduling in distributed deep learning training with echelon formation (Conference paper)
Fine-tuning Language Models over Slow Networks using Activation Quantization with Guarantees (Conference paper)
In-Database Machine Learning with CorgiPile: Stochastic Gradient Descent without Full Data Shuffle (Conference paper)
Automatic Optimization of Matrix Implementations for Distributed Machine Learning and Linear Algebra (Conference paper)
BAGUA: Scaling up Distributed Learning with System Relaxations (Conference paper)
Lachesis: automatic partitioning for UDF-centric analytics (Conference paper)
Tensor relational algebra for distributed machine learning system design (Conference paper)
PlinyCompute: A platform for high-performance, distributed, data-intensive tool development (Conference paper)
Effective video retargeting with jittery assessment (Article)
COMP6211J | Advanced Large-Scale Machine Learning Systems for Foundation Models
COMP4971A | Independent Work
COMP4901Y | Large-Scale Machine Learning for Foundation Models
DING, Fangyu (Computer Science and Engineering)
HE, Guangxin (Computer Science and Engineering)
LI, Chenyue (Computer Science and Engineering)
PENG, You (Computer Science and Engineering)
QIU, Zipeng (Computer Science and Engineering)
TAO, Wangcheng (Computer Science and Engineering)
WANG, Jiashu (Computer Science and Engineering)
ZHOU, Yukun (Computer Science and Engineering, co-supervision)
BAI, Tianyi (Computer Science and Engineering)
YAN, Ran (Computer Science and Engineering)
KIM, Hyeonjae (Computer Science and Engineering)