

What Is Federated Learning?
One concern with training traditional machine learning models is the large amount of data involved, especially when that data is sensitive personal information. So, how can we keep users' privacy top-of-mind while still training ML models quickly and accurately?
The answer: federated learning.
Federated learning involves training an ML model on user data without transferring that data to cloud-based servers. Also known as collaborative learning, federated learning trains an algorithm across many decentralized edge devices that each hold local data, without exchanging those datasets. As a result, it addresses the privacy concerns mentioned above while still training models locally (on-device) quickly and accurately.
This method allows multiple users to create a centralized, robust, and precise ML model without sharing private user data. Consequently, it helps address crucial issues such as data confidentiality, data access rights, data protection, and access to heterogeneous data.
One prime example of federated learning is detecting and measuring credit risk for financial institutions. Finance is a highly regulated industry dealing with lots of sensitive customer information, and federated learning techniques allow financial organizations to analyze credit scores and prevent fraudulent activity without directly accessing any of that private data.
How Does Federated Learning Work?
The process has quite a few moving parts. Here’s a quick overview of how it works:
- A base machine learning model is created on the central server, either pre-trained on publicly available data or not trained at all.
- Participating user devices download the base model, creating a local copy of the centralized ML model on each device.
- These devices then train their local models on local datasets.
- After training, the devices send their training results (model updates, not raw data) back to the central server. The users' data never leaves the device, and the updates themselves are typically encrypted in transit.
- On the cloud server, the training results are aggregated, and the centralized machine learning model is updated.
- The users' devices then download the updated model, which now reflects what was learned from everyone's data.
The ultimate goal of federated learning is precision without risking user privacy. Bear in mind that this process can go through many rounds before the model reaches the desired level of accuracy.
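To make the round structure concrete, here's a minimal sketch of federated averaging (FedAvg), the classic aggregation scheme, in plain NumPy. Everything here is simplified for illustration: the "model" is just a weight vector, local_train fakes training with a step toward the local data mean, and a real deployment would add encryption, device scheduling, and secure aggregation.

```python
import numpy as np

# Toy sketch of federated averaging (FedAvg).
# The "model" is simplified to a flat NumPy weight vector.

def local_train(global_weights, local_data, lr=0.1, epochs=1):
    """Train a local copy of the model on one device's data.
    Here we fake it with a tiny gradient step toward the local mean."""
    weights = global_weights.copy()
    for _ in range(epochs):
        gradient = weights - local_data.mean(axis=0)  # stand-in for a real gradient
        weights -= lr * gradient
    return weights

def federated_round(global_weights, client_datasets):
    """Run one round: broadcast, train locally, aggregate."""
    updates, sizes = [], []
    for local_data in client_datasets:          # each device trains on its own data
        updates.append(local_train(global_weights, local_data))
        sizes.append(len(local_data))           # weight clients by dataset size
    total = sum(sizes)
    # Server-side aggregation: weighted average of the client models.
    return sum(w * (n / total) for w, n in zip(updates, sizes))

# Simulated local datasets that never leave each "device"
clients = [np.random.randn(50, 4) + i for i in range(3)]
weights = np.zeros(4)
for round_num in range(10):                     # multiple rounds until convergence
    weights = federated_round(weights, clients)
print(weights)
```

Note that the server only ever receives model weights and dataset sizes; the simulated raw data stays with each client throughout.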
Key Benefits of Federated Learning
Federated learning trains models on decentralized data. The resulting models learn from a wide variety of data and can make low-latency, on-device predictions, all without compromising the privacy of the users who helped train them. Let's take a look at some more advantages:
1. It’s collaborative in nature.
Federated learning allows devices such as mobile phones to learn a shared prediction model together. This approach keeps the training data on the device rather than requiring it to be uploaded and stored on a central server.
2. It saves time.
Organizations can team up to solve sticking points with traditional ML models. For example, highly regulated entities like hospitals can train a potentially life-saving ML model in a collaborative effort while maintaining patient privacy and developing results more quickly. There’s no need to spend time collecting and aggregating data from diverse sources each time.
3. It’s secure.
Federated learning leverages secure aggregation to keep client updates private. As a result, the server can’t determine the value or source of the individual model updates that the users provide. This reduces the likelihood of inference and data attribution attacks.
Because personal data remains local, federated learning offers security benefits to organizations with strict privacy regulations, like financial institutions and hospitals. It also eases the burden of aggregating data on a central, external server, making that data less susceptible to breaches.
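To illustrate the intuition behind secure aggregation, here's a toy sketch of pairwise masking, one common building block: each pair of clients shares a random mask that one adds and the other subtracts, so the masks cancel in the sum and the server learns only the aggregate. This is a simplified illustration, not a production protocol; real systems derive masks from cryptographic key agreement and handle dropouts securely.

```python
import numpy as np

rng = np.random.default_rng(0)
n_clients, dim = 4, 3

# Each client's private model update (the server should never see these).
true_updates = [rng.normal(size=dim) for _ in range(n_clients)]

# Pairwise masks: clients i < j share a random vector; i adds it, j subtracts it.
# In a real protocol these come from a shared-key agreement, not from the server.
masks = {(i, j): rng.normal(size=dim)
         for i in range(n_clients) for j in range(i + 1, n_clients)}

def masked_update(client, update):
    """What the client actually sends: its update plus/minus pairwise masks."""
    out = update.copy()
    for (i, j), m in masks.items():
        if client == i:
            out += m
        elif client == j:
            out -= m
    return out

# The server sees only masked updates, which individually look like noise...
sent = [masked_update(c, u) for c, u in enumerate(true_updates)]
# ...but the masks cancel pairwise, so the sum equals the true aggregate.
aggregate = sum(sent)
assert np.allclose(aggregate, sum(true_updates))
print(aggregate / n_clients)  # the averaged update the server can safely use
```

Individual masked updates reveal essentially nothing on their own; only the sum is meaningful, which is exactly the property the server needs for aggregation.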
4. It involves more diverse data.
Federated learning lends itself to more data diversity, as the centralized model is continuously learning from different organizations and populations, rather than one dataset with a potentially skewed demographic. This results in a more representative and inclusive model.
For instance, in the healthcare industry, federated learning algorithms can be trained across several hospitals located in different geographical areas. The resulting models are built from data on patients who vary in age, ethnicity, gender, and physical attributes, among other factors, making them more well-rounded.
5. It yields real-time predictions.
With federated learning, predictions happen in real time, on the device itself. This eliminates the lag introduced by sending raw data to a central server and waiting for the results to be returned to the device. And because the model is stored on the device, predictions work even without an internet connection.
6. It’s hands-off and non-invasive.
Federated learning is designed not to drain the battery of participating devices: a device typically takes part in training only when it isn't in active use, such as while a phone is charging, idle, or in Do Not Disturb mode.
Common Challenges of Federated Learning
While federated learning opens the door to more collaborative machine learning (with user privacy at its core), it also comes with its own unique set of challenges, including:
1. It’s not communication-efficient.
Communication is a serious bottleneck in federated learning networks, where the data created on each device stays local.
To train a model on device-generated data efficiently, there's a need for techniques that reduce the total number of communication rounds and that send small model updates in each round, rather than the full model or raw dataset.
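One simple technique in this direction is top-k sparsification, where each device sends only the largest-magnitude entries of its model update. The sketch below is illustrative; real systems typically combine sparsification with quantization and error feedback.

```python
import numpy as np

def sparsify_top_k(update, k):
    """Keep only the k largest-magnitude entries of a model update.
    The device sends (indices, values) instead of the dense vector."""
    idx = np.argsort(np.abs(update))[-k:]       # positions of the k biggest entries
    return idx, update[idx]

def densify(indices, values, dim):
    """Server side: rebuild a (mostly zero) dense update."""
    dense = np.zeros(dim)
    dense[indices] = values
    return dense

update = np.random.randn(100_000)               # a full update: 100k floats
idx, vals = sparsify_top_k(update, k=1_000)     # send ~1% of the entries
restored = densify(idx, vals, update.size)

# Roughly a 50x reduction in bytes sent (indices + values vs. the full vector).
sent_bytes = idx.nbytes + vals.nbytes
print(f"{update.nbytes} bytes -> {sent_bytes} bytes")
```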
2. There are heterogeneous systems involved.
The user devices participating in the training process may differ significantly in storage, computational ability, power supply, and network connectivity. For example, some edge devices have connectivity or energy limitations.
Therefore, the approach must be fault tolerant: user devices may drop out before finishing a training round, so the system should anticipate low participation, withstand heterogeneous hardware, and remain robust when participants disappear mid-round, as sketched below.
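As a rough illustration, the server-side logic can aggregate whichever updates actually arrive and skip the round if too few clients respond. The function names and quorum policy here are assumptions made for the sketch, not a standard API.

```python
import random
import numpy as np

def aggregate_with_dropouts(global_weights, client_updates, min_clients=3):
    """Aggregate only the updates that actually arrived this round.

    client_updates: dict mapping client id -> weight vector, containing
    only the clients that finished training (the others dropped out)."""
    if len(client_updates) < min_clients:
        # Too few participants: skip the round, keep the current model.
        return global_weights, False
    stacked = np.stack(list(client_updates.values()))
    return stacked.mean(axis=0), True

# Simulate a round where each of 10 selected devices has a 40% dropout chance.
selected = range(10)
arrived = {c: np.random.randn(4) for c in selected if random.random() > 0.4}
weights, applied = aggregate_with_dropouts(np.zeros(4), arrived)
print(f"{len(arrived)} of 10 clients responded; round applied: {applied}")
```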
3. There are limitations on data cleaning & labeling.
While the privacy aspect of federated learning is a major plus, it also means that data engineers don't have access to the raw user data. They can't clean it to identify missing values, remove irrelevant features, or determine which data points the model should actually be trained on.
As contributors to federated learning aren't expected to label their data either, it's often best suited to tasks where the outcome can be derived from user behavior (such as predicting the next word a user types), or to unsupervised algorithms, rather than tasks that depend on rigidly labeled training data.
The Future of Federated Learning
Federated learning offers a great opportunity for ML models to retain their accuracy without risking user privacy. Its applications, from everyday instances of better predictive text to more critical use cases of improving hospital data, are just scratching the surface.
Intel has been working on implementing federated learning in the medical imaging space, while the EU recently funded a paper on the potential for federated learning to assist with drug discovery visualization. In the next few years, we’re likely to see federated learning establish a major competitive, technological, and scientific advantage for organizations implementing it.
If you’re interested in learning more about how machine learning projects can directly impact your organization, check out our Human’s Guide to Machine Learning. Inside, we detail how to get an ML project off the ground so your data science teams can make real business impact.