
Data sharing-based approach for Federated Learning tasks on Edge Devices

Federated Learning (FL) allows edge devices to collaboratively train a global machine learning model. In this paradigm, the data is kept on the devices themselves, and a server is responsible for aggregating the parameters of the local models. However, the aggregated model may struggle to converge when the device data is non-independent and identically distributed (non-IID), that is, when it follows a heterogeneous distribution. Hence, this work uses a Human Activity Recognition (HAR) application to evaluate FL convergence under different data distributions in comparison with a centralized model. The FL models are evaluated in both distributed and centralized settings, using accuracy as a general performance metric and the F1-score as a per-activity performance metric. To address the FL problem with non-IID data, two strategies were simulated that aim to reduce the asymmetry of the device data distribution through private and public data sharing. The evaluation of these strategies presents results indicating improved performance of the global model in non-IID scenarios.
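The server-side aggregation and the public data-sharing strategy described above can be sketched as follows. This is a minimal illustration, assuming a FedAvg-style weighted average of local model parameters; the function names `fedavg` and `share_public_subset` are hypothetical and not taken from this repository's code.

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """Aggregate local models into a global model (FedAvg-style sketch).

    client_weights: one list of np.ndarray per client (one array per layer).
    client_sizes:   number of local training samples per client, used as
                    the aggregation weight.
    """
    total = sum(client_sizes)
    n_layers = len(client_weights[0])
    aggregated = []
    for layer in range(n_layers):
        # Weighted sum of this layer's parameters across all clients.
        layer_avg = sum(
            (size / total) * weights[layer]
            for weights, size in zip(client_weights, client_sizes)
        )
        aggregated.append(layer_avg)
    return aggregated

def share_public_subset(public_x, public_y, fraction, rng):
    """Draw a random fraction of a public dataset to append to each
    client's local data, reducing non-IID skew."""
    n = int(len(public_x) * fraction)
    idx = rng.choice(len(public_x), size=n, replace=False)
    return public_x[idx], public_y[idx]
```

In this sketch, each client would append the shared public subset to its local (non-IID) training data before a round of local training, and the server would then call `fedavg` on the returned parameters to form the next global model.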

About

This work presents FL challenges based on the simulation of a HAR application. As a solution, two data sharing strategies were analyzed to reduce the asymmetry of the device data distribution.
