Asynchronous Federated Unlearning

Regulatory policies such as the General Data Protection Regulation (GDPR) make it essential to provide users with the right to erasure over their own private data, even if such data has been used to train a neural network model.
Read more →

Gradient Leakage in Production Federated Learning

As an emerging distributed machine learning paradigm, federated learning (FL) allows clients to train machine learning models collaboratively on private data without transmitting that data to the server. Though federated learning is celebrated as a privacy-preserving paradigm for training machine learning models, sharing gradients with the server may allow the reconstruction of raw private data, such as images and text, used in the training process.
Read more →
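To make the leakage concrete, here is a minimal sketch in an assumed toy setting (a single-sample linear model with a bias term, not any specific production system): because the bias gradient equals the loss residual, the server can divide the weight gradient by it and recover the client's raw input exactly.

```python
import numpy as np

# Toy gradient-leakage sketch (assumed setting, for illustration only).
# Model: prediction = w @ x + b, squared loss on one private sample.
# The client shares (grad_w, grad_b) = (r * x, r) with residual
# r = w @ x + b - y, so the server can compute x = grad_w / grad_b.

rng = np.random.default_rng(0)
x_private = rng.normal(size=4)        # client's raw data point
y_private = 1.5                       # client's private label
w, b = rng.normal(size=4), 0.1        # current global model parameters

r = w @ x_private + b - y_private     # loss residual on the sample
grad_w, grad_b = r * x_private, r     # gradients the client shares

x_reconstructed = grad_w / grad_b     # server recovers the raw input
print(np.allclose(x_reconstructed, x_private))  # True
```

Real attacks on deeper models (e.g. gradient-matching optimization) are approximate rather than exact, but the same information channel is at work.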

Privacy and Fairness in Model Partitioning

Deep learning sits at the forefront of ongoing advances across a variety of learning tasks. Despite its superior accuracy in benign environments, deep learning suffers from adversarial vulnerability and privacy leakage in adversarial environments.
Read more →

Distributed Inference of Deep Learning Models

Deep learning models are typically deployed at remote cloud servers and require users to upload local data for inference, incurring considerable time overhead for transferring large volumes of data over the Internet.
Read more →
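One common remedy is to partition the model between the device and the server. The sketch below assumes a hypothetical two-layer network: the device runs the first layer locally and uploads only the intermediate activation, which can be far smaller than the raw input.

```python
import numpy as np

# Partitioned-inference sketch (assumed two-layer model, for
# illustration only): early layers run on-device, later layers on
# the server, and only the compact activation crosses the network.

rng = np.random.default_rng(1)
W1 = rng.normal(size=(2048, 64))   # device-side layer: 2048 -> 64
W2 = rng.normal(size=(64, 10))     # server-side layer: 64 -> 10

x = rng.normal(size=2048)          # raw local input (2048 floats)
h = np.maximum(W1.T @ x, 0.0)      # on-device ReLU activation (64 floats)
logits = W2.T @ h                  # on-server final prediction

print(h.size < x.size)             # True: 64 floats uploaded, not 2048
```

Choosing where to split trades on-device compute against upload volume; here the activation is 32x smaller than the raw input.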

Multi-Modal Federated Learning on Non-IID Data

Existing work in federated learning has focused on uni-modal tasks, where training involves a single modality, such as images or text. As a result, the global model is uni-modal: it contains a modality-specific neural network structure and takes samples from that specific modality as its input for training.
Read more →

Multi-Resource Scheduling

Queueing algorithms determine the order in which packets in various independent flows are processed, and serve as a fundamental mechanism for allocating resources in a network appliance. Traditional queueing algorithms make scheduling decisions in network switches that simply forward packets to their next hops, and link bandwidth is the only resource being allocated.
Read more →
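The classic single-resource baseline these algorithms build on can be sketched with finish-time tags, as in weighted fair queueing. The class below is a simplified illustration (the virtual clock here just tracks the last dequeued tag, not the full WFQ virtual time), with hypothetical flow names and packet sizes.

```python
import heapq

# Simplified fair-queueing sketch: each packet gets a finish tag
# start + size / weight, and packets are served in tag order, so no
# flow can monopolize the link by sending large packets.
class FairQueue:
    def __init__(self):
        self.heap = []      # (finish_tag, flow, size), served in tag order
        self.finish = {}    # last finish tag assigned per flow
        self.vtime = 0.0    # simplified virtual clock

    def enqueue(self, flow, size, weight=1.0):
        start = max(self.vtime, self.finish.get(flow, 0.0))
        tag = start + size / weight
        self.finish[flow] = tag
        heapq.heappush(self.heap, (tag, flow, size))

    def dequeue(self):
        tag, flow, size = heapq.heappop(self.heap)
        self.vtime = tag
        return flow, size

q = FairQueue()
q.enqueue("a", size=100)   # one large packet from flow "a"
q.enqueue("b", size=40)    # two small packets from flow "b"
q.enqueue("b", size=40)
order = [q.dequeue()[0] for _ in range(3)]
print(order)  # ['b', 'b', 'a']: the small-packet flow is not starved
```

Multi-resource scheduling generalizes this picture: when a middlebox spends CPU and memory bandwidth as well as link bandwidth per packet, a single size-based tag no longer captures a packet's cost.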

Bandwidth Allocation in Datacenter Networks

Web service providers like Google and Facebook have built large-scale datacenters to host many computationally intensive applications, ranging from PageRank to machine learning. In order to efficiently process large volumes of data, these applications typically embrace data parallel frameworks, such as MapReduce.
Read more →