Federated Gradient Averaging for Multi-Site Training with Momentum-Based Optimizers

Samuel W. Remedios*, John A. Butman, Bennett A. Landman, Dzung L. Pham

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

13 Scopus citations

Abstract

Multi-site training methods for artificial neural networks are of particular interest to the medical machine learning community, primarily due to the difficulty of data sharing between institutions. However, contemporary multi-site techniques such as weight averaging and cyclic weight transfer make theoretical sacrifices to simplify implementation. In this paper, we implement federated gradient averaging (FGA), a variant of federated learning without data transfer that is mathematically equivalent to single-site training with centralized data. We evaluate two scenarios: a simulated multi-site dataset for handwritten digit classification with MNIST and a real multi-site dataset for head CT hemorrhage segmentation. We compare federated gradient averaging to single-site training, federated weight averaging (FWA), and cyclic weight transfer. In the MNIST task, we show that training with FGA results in a weight set equivalent to that of centralized single-site training. In the hemorrhage segmentation task, we show that FGA achieves superior results on average to both FWA and cyclic weight transfer due to its ability to leverage momentum-based optimization.
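The abstract's central claim can be illustrated with a minimal sketch: if each site computes a gradient on its local data and a server averages those gradients (weighted by site size) before taking a single momentum step, the resulting update is identical to momentum SGD on the pooled, centralized data. The code below is an illustrative toy (linear least squares with NumPy), not the paper's implementation; names like `fga_round` are assumptions for this sketch.

```python
# Toy sketch of federated gradient averaging (FGA): per-site gradients are
# averaged by the server, which applies one SGD-with-momentum update.
# Illustrative only; function and variable names are not from the paper.
import numpy as np

def grad_mse_linear(w, X, y):
    """Gradient of mean-squared error for the linear model y ~ X @ w."""
    return 2.0 * X.T @ (X @ w - y) / len(y)

def fga_round(w, velocity, sites, lr=0.1, momentum=0.9):
    """One communication round: average per-site gradients weighted by
    site size, then take a single momentum step on the server."""
    n_total = sum(len(X) for X, _ in sites)
    g = sum(len(X) / n_total * grad_mse_linear(w, X, y) for X, y in sites)
    velocity = momentum * velocity + g
    return w - lr * velocity, velocity

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 3))
y = X @ np.array([1.0, -2.0, 0.5])
# Split the pooled data into three "sites"; no raw data leaves a site.
sites = [(X[i:i + 20], y[i:i + 20]) for i in range(0, 60, 20)]

w_fed = np.zeros(3); v_fed = np.zeros(3)   # federated model
w_cen = np.zeros(3); v_cen = np.zeros(3)   # centralized baseline
for _ in range(50):
    w_fed, v_fed = fga_round(w_fed, v_fed, sites)
    # Centralized baseline: the same momentum update on the pooled gradient.
    g = grad_mse_linear(w_cen, X, y)
    v_cen = 0.9 * v_cen + g
    w_cen = w_cen - 0.1 * v_cen

print(np.allclose(w_fed, w_cen))  # prints True: updates coincide
```

Because the full gradient (not the weights) is what is averaged, the server can maintain a single momentum buffer, which is exactly the property that weight averaging and cyclic weight transfer give up.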

Original language: English
Title of host publication: Domain Adaptation and Representation Transfer, and Distributed and Collaborative Learning - 2nd MICCAI Workshop, DART 2020, and 1st MICCAI Workshop, DCL 2020, Held in Conjunction with MICCAI 2020, Proceedings
Editors: Shadi Albarqouni, Spyridon Bakas, Konstantinos Kamnitsas, M. Jorge Cardoso, Bennett Landman, Wenqi Li, Fausto Milletari, Nicola Rieke, Holger Roth, Daguang Xu, Ziyue Xu
Publisher: Springer Science and Business Media Deutschland GmbH
Pages: 170-180
Number of pages: 11
ISBN (Print): 9783030605476
DOIs
State: Published - 2020
Event: 2nd MICCAI Workshop on Domain Adaptation and Representation Transfer, DART 2020, and the 1st MICCAI Workshop on Distributed and Collaborative Learning, DCL 2020, held in conjunction with the Medical Image Computing and Computer Assisted Intervention, MICCAI 2020 - Lima, Peru
Duration: 4 Oct 2020 – 8 Oct 2020

Publication series

Name: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Volume: 12444 LNCS
ISSN (Print): 0302-9743
ISSN (Electronic): 1611-3349

Conference

Conference: 2nd MICCAI Workshop on Domain Adaptation and Representation Transfer, DART 2020, and the 1st MICCAI Workshop on Distributed and Collaborative Learning, DCL 2020, held in conjunction with the Medical Image Computing and Computer Assisted Intervention, MICCAI 2020
Country/Territory: Peru
City: Lima
Period: 4/10/20 – 8/10/20

Keywords

  • Deep learning
  • Federated learning
  • Multi-site
