[4Yin2-46] Gradient-Based Communication Network Optimization for Fully Decentralized Learning
Keywords: Distributed Learning, Hyperparameter Optimization, Stochastic Gradient Descent, Federated Learning
We propose a gradient-based communication network optimization algorithm for fully decentralized learning.
Our algorithm tracks the gradients of the network edge weights throughout training in a fully decentralized manner.
We applied the proposed algorithm to convergence acceleration and evaluated its performance through simulation experiments.
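As the abstract gives no implementation details, the following is only a minimal sketch of the general idea: decentralized SGD on a small ring of nodes, where each edge weight of the mixing matrix is itself updated by a one-step gradient of the post-mixing local loss. The quadratic losses, ring topology, step sizes, and update rule below are all illustrative assumptions, not the authors' method.

# Minimal sketch (illustrative assumptions throughout, not the paper's algorithm):
# decentralized SGD with gradient-based updates of the mixing-matrix edge weights.
import numpy as np

rng = np.random.default_rng(0)
n_nodes, dim = 4, 3
targets = rng.normal(size=(n_nodes, dim))  # node i's local loss: f_i(x) = 0.5*||x - targets[i]||^2
x = np.zeros((n_nodes, dim))               # per-node model parameters

# Assumed ring topology: each node mixes with itself and two neighbors.
W = np.zeros((n_nodes, n_nodes))
for i in range(n_nodes):
    for j in (i, (i - 1) % n_nodes, (i + 1) % n_nodes):
        W[i, j] = 1.0 / 3.0

lr_model, lr_edge = 0.1, 0.01              # assumed step sizes
for step in range(200):
    grads = x - targets                    # local gradients of f_i
    x_half = x - lr_model * grads          # local SGD step
    x_new = W @ x_half                     # gossip mixing with neighbors

    # One-step gradient of node i's post-mixing loss w.r.t. edge weight W[i, j]:
    # d f_i(sum_j W[i,j] x_half[j]) / d W[i,j] = <grad f_i(x_new[i]), x_half[j]>
    post_grads = x_new - targets
    edge_grad = post_grads @ x_half.T      # edge_grad[i, j]
    W -= lr_edge * edge_grad * (W > 0)     # update only existing edges
    W = np.clip(W, 0.0, None)
    W /= W.sum(axis=1, keepdims=True)      # re-project rows to the simplex

    x = x_new

print("final consensus error:", np.linalg.norm(x - x.mean(axis=0)))

Each node needs only its own post-mixing gradient and its neighbors' half-step parameters, so this style of edge-weight update stays fully decentralized, consistent with the abstract's claim.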