Xiaohan Chen's Homepage
Publications
Hyperparameter Tuning is All You Need for LISTA
Learned Iterative Shrinkage-Thresholding Algorithm (LISTA) introduces the concept of unfolding an iterative algorithm and trains it …
*X. Chen, *J. Liu, Z. Wang, W. Yin
The Elastic Lottery Ticket Hypothesis
Lottery Ticket Hypothesis (LTH) raises keen attention to identifying sparse trainable subnetworks, or winning tickets, of training, …
X. Chen, Y. Cheng, S. Wang, Z. Gan, J. Liu, Z. Wang
Preprint · PDF · Code
Sparse Training via Boosting Pruning Plasticity with Neuroregeneration
Works on the lottery ticket hypothesis (LTH) and single-shot network pruning (SNIP) have recently drawn considerable attention to …
S. Liu, T. Chen, X. Chen, Z. Atashgahi, L. Yin, H. Kou, L. Shen, M. Pechenizkiy, Z. Wang, D. M. Mocanu
Sanity Checks for Lottery Tickets: Does Your Winning Ticket Really Win the Jackpot?
There have been long-standing controversies and inconsistencies over the experiment setup and criteria for identifying the …
X. Ma, G. Yuan, X. Shen, T. Chen, X. Chen, X. Chen, N. Liu, M. Qin, S. Liu, Z. Wang, Y. Wang
EarlyBERT: Efficient BERT Training via Early-bird Lottery Tickets
Heavily overparameterized language models such as BERT, XLNet and T5 have achieved impressive success in many NLP tasks. However, their …
X. Chen, Y. Cheng, S. Wang, Z. Gan, Z. Wang, J. Liu
Preprint · PDF · Code · Slides
Learning to Optimize: A Primer and A Benchmark
Learning to optimize (L2O) is an emerging approach that leverages machine learning to develop optimization methods, aiming at reducing …
T. Chen, X. Chen, W. Chen, H. Heaton, J. Liu, Z. Wang, W. Yin
Preprint · PDF · Code
A Design Space Study for LISTA and Beyond
In recent years, great success has been witnessed in building problem-specific deep networks from unrolling iterative algorithms, for …
*T. Meng, *X. Chen, Y. Jiang, Z. Wang
Project
Learning A Minimax Optimizer: A Pilot Study
Solving continuous minimax optimization is of extensive practical interest, yet notoriously unstable and difficult. This paper …
*J. Shen, *X. Chen, H. Heaton, T. Chen, J. Liu, Z. Wang, Y. Lin
Project
SmartDeal: Re-Modeling Deep Network Weights for Efficient Inference and Training
The record-breaking performance of deep neural networks (DNNs) comes with heavy parameterization, leading to external dynamic …
*X. Chen, *Y. Zhao, Y. Wang, P. Xu, H. You, C. Li, Y. Fu, Y. Lin, Z. Wang
ShiftAddNet: A Hardware-Inspired Deep Network
Multiplication (e.g., convolution) is arguably a cornerstone of modern deep neural networks (DNNs). However, intensive multiplications …
H. You, X. Chen, Y. Zhang, C. Li, S. Li, Z. Liu, Z. Wang, Y. Lin
MATE: Plugging in Model Awareness to Task Embedding for Meta Learning
Meta-learning improves generalization of machine learning models when faced with previously unseen tasks by leveraging experiences from …
X. Chen, Z. Wang, S. Tang, K. Muandet
Safeguarded Learned Convex Optimization
Many applications require repeatedly solving a certain type of optimization problem, each time with new (but similar) data. Data-driven …
H. Heaton, X. Chen, Z. Wang, W. Yin
Preprint
Uncertainty Quantification for Deep Context-Aware Mobile Activity Recognition and Unknown Context Discovery
Activity recognition in wearable computing faces two key challenges: i) activity characteristics may be context-dependent and change …
Z. Huo, A. Pakbin, X. Chen, N. Hurley, Y. Yuan, X. Qian, Z. Wang, S. Huang, B. Mortazavi
SmartExchange: Trading Higher-cost Memory Storage/Access for Lower-cost Computation
We present SmartExchange, a hardware-algorithm co-design framework to trade higher cost memory storage/access for lower cost …
*X. Chen, *Y. Zhao, Y. Wang, C. Li, Y. Xie, Z. Wang, Y. Lin
Drawing Early-Bird Tickets: Toward More Efficient Training of Deep Networks
(Frankle & Carbin, 2019) shows that there exist winning tickets (small but critical subnetworks) for dense, randomly initialized …
H. You, C. Li, P. Xu, Y. Fu, X. Chen, Y. Lin, Z. Wang, R. Baraniuk
Preprint · PDF · Project
E2-Train: Energy-Efficient Deep Network Training with Data-, Model-, and Algorithm-Level Saving
Convolutional neural networks (CNNs) have been increasingly deployed to Internet of Things (IoT) devices. Hence, many efforts have been …
*X. Chen, *Z. Jiang, *Y. Wang, P. Xu, Y. Zhao, Y. Lin, Z. Wang
Preprint · PDF · Code · Project
Plug-and-Play Methods Provably Converge with Properly Trained Denoisers
E. Ryu, J. Liu, S. Wang, X. Chen, Z. Wang, W. Yin
Preprint · PDF · Code · Project · Slides · Video
ALISTA: Analytic Weights Are As Good As Learned Weights in LISTA
Deep neural networks based on unfolding an iterative algorithm, for example, LISTA (learned iterative shrinkage thresholding …
*X. Chen, *J. Liu, Z. Wang, W. Yin
PDF · Code · Project · Slides
Theoretical Linear Convergence of Unfolded ISTA and its Practical Weights and Thresholds
In recent years, unfolding iterative algorithms as neural networks has become an empirical success in solving sparse recovery problems. …
*X. Chen, *J. Liu, Z. Wang, W. Yin
Preprint · PDF · Code · Poster · Slides · Video
Can We Gain More from Orthogonality Regularizations in Training Deep Networks?
N. Bansal, X. Chen, Z. Wang
Preprint · PDF · Code · Poster · Video