Upcoming Events
Past Events
2022-03-31 Talk on "Wireless Federated Learning" at 2021 IEEE SPS Cycle 2 School on Networked Federated Learning, online event.
2021-09-27 Tutorial on "Wireless Federated Learning" at IEEE SPAWC 2021, online event.
2021-09-19 Special Session on "Neural Network Compression and Compact Deep Features: From Methods to Standards" at IEEE ICIP 2021, online event.
2020-12-17 Talk on "Recent Advances in Federated Learning for Communication" at the ITU AI/ML in 5G Challenge, online event.
2020-12-11 Tutorial on "Distributed Deep Learning: Concepts, Methods & Applications in Wireless Networks" at IEEE GLOBECOM 2020 in Taipei, Taiwan.
2020-12-05 Talk on "DeepCABAC: A Universal Compression Algorithm for Deep Neural Networks" at the Workshop on Energy Efficient Machine Learning and Cognitive Computing, online event.
2020-08-21 Talk on "A Universal Compression Algorithm for Deep Neural Networks" at the AI for Good Global Summit 2020 in Geneva, Switzerland.
2020-06-15 Workshop on "Efficient Deep Learning for Computer Vision" at IEEE CVPR 2020 in Seattle, USA.
2020-05-05 Special Session on "Distributed Machine Learning on Wireless Networks" at IEEE ICASSP 2020 in Barcelona, Spain.
2020-05-04 Tutorial on "Distributed and Efficient Deep Learning" at IEEE ICASSP 2020 in Barcelona, Spain.
2019-10-08 Talk at ITU Workshop on "The future of media" in Geneva, Switzerland.
2018-04-25 Talk at ITU Workshop on "Impact of AI on ICT Infrastructures" in Xi'an, China.

This webpage collects publications and software produced as part of a project at Fraunhofer HHI on developing new methods for federated and efficient deep learning.

Why Neural Network Compression?

State-of-the-art machine learning models such as deep neural networks deliver excellent results in practice. However, training and executing these models requires extensive computational resources, so they are often impractical on platforms with limited storage, computational power and energy budgets, e.g., smartphones, embedded systems or IoT devices. Our research addresses this problem and focuses on techniques for reducing the complexity and increasing the execution efficiency of deep neural networks.
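As a toy illustration of one common compression technique, the following sketch uniformly quantizes a weight tensor to a small number of levels; it is a minimal assumption-based example, not the project's DeepCABAC codec, and all function names and the 4-bit setting are illustrative.

```python
import numpy as np

def uniform_quantize(weights, num_bits=4):
    """Quantize a float weight tensor to 2**num_bits uniform levels.

    Returns integer indices plus (scale, offset) needed to reconstruct
    approximate float weights. Illustrative sketch only.
    """
    levels = 2 ** num_bits
    w_min, w_max = weights.min(), weights.max()
    scale = (w_max - w_min) / (levels - 1)
    indices = np.round((weights - w_min) / scale).astype(np.uint8)
    return indices, scale, w_min

def dequantize(indices, scale, w_min):
    """Map quantization indices back to approximate float weights."""
    return indices.astype(np.float32) * scale + w_min

# Example: a 4-bit representation needs 8x less storage than float32.
w = np.random.randn(1000).astype(np.float32)
idx, scale, w_min = uniform_quantize(w, num_bits=4)
w_hat = dequantize(idx, scale, w_min)
print("max reconstruction error:", np.abs(w - w_hat).max())
```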

Why Federated Learning?

Large deep neural networks are trained on huge data corpora, so distributed training schemes are becoming increasingly relevant. A major issue in distributed training is the limited communication bandwidth between contributing nodes and, more generally, the prohibitive communication cost. In our research we investigate new methods for reducing the communication cost of distributed training. These include communication delay, gradient sparsification and optimal weight-update encoding (a sketch of gradient sparsification follows below). Our results show that the upstream communication can be reduced by more than four orders of magnitude without significantly harming the convergence speed.
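A minimal sketch of top-k gradient sparsification with a local error accumulator, one of the upstream-compression ideas mentioned above; the function names and the 0.1% keep ratio are illustrative assumptions, not the project's actual implementation.

```python
import numpy as np

def sparsify_with_error_feedback(gradient, residual, keep_ratio=0.001):
    """Keep only the largest-magnitude gradient entries (e.g. top 0.1%).

    Entries that are not transmitted are accumulated in a local residual
    and added back before the next round, so no update is permanently
    lost. Illustrative sketch only.
    """
    accumulated = gradient + residual            # add back previously dropped mass
    k = max(1, int(keep_ratio * accumulated.size))
    threshold = np.sort(np.abs(accumulated))[-k]
    mask = np.abs(accumulated) >= threshold
    sparse_update = np.where(mask, accumulated, 0.0)  # what gets sent upstream
    new_residual = accumulated - sparse_update        # what stays on the device
    return sparse_update, new_residual

# One round on a single client: only ~0.1% of entries are non-zero.
grad = np.random.randn(100_000).astype(np.float32)
residual = np.zeros_like(grad)
update, residual = sparsify_with_error_feedback(grad, residual)
print("non-zero fraction:", np.count_nonzero(update) / update.size)
```

The sparse update can then be encoded compactly (e.g. as index/value pairs), which is where most of the upstream savings come from.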

Software

Tutorials

Publications

Efficient Deep Learning

Federated Learning

Contributions to Standardization