Federated Learning with an Ensemble of Models Example

This is an example of training an ensemble of models in a federated manner. Ensembles of models are often leveraged in prediction tasks involving biomedical images to account for the large variation in the input distribution.

In this demo, an ensemble of 3 models is trained on a federated variant of the MNIST dataset that is heterogeneous. The FL server expects three clients to be spun up (i.e. it will wait until three clients report in before starting training). Each client has a modified version of the MNIST dataset. This modification subsamples a certain number of examples from the original MNIST training and validation sets in order to synthetically induce local variations in the statistical properties of the clients' training/validation data. In theory, the models should be able to perform well on their local data while learning from other clients' data that has different statistical properties. The proportion of labels at each client is determined by a Dirichlet distribution across the classes. The lower the beta parameter for each class, the higher the degree of label heterogeneity.
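As a rough illustration of how a Dirichlet distribution induces label heterogeneity (a sketch of the general technique, not this repository's exact implementation; the function name and parameters below are hypothetical):

import numpy as np

def dirichlet_partition(labels: np.ndarray, n_clients: int, beta: float, seed: int = 0):
    # For each class, draw the proportions assigned to each client from
    # Dirichlet(beta, ..., beta). A smaller beta concentrates each class on
    # fewer clients, i.e. a higher degree of label heterogeneity.
    rng = np.random.default_rng(seed)
    client_indices = [[] for _ in range(n_clients)]
    for c in np.unique(labels):
        class_idx = rng.permutation(np.flatnonzero(labels == c))
        proportions = rng.dirichlet(np.full(n_clients, beta))
        cut_points = (np.cumsum(proportions)[:-1] * len(class_idx)).astype(int)
        for i, split in enumerate(np.split(class_idx, cut_points)):
            client_indices[i].extend(split.tolist())
    return client_indices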

The server performs some custom metrics aggregation and uses Federated Averaging (FedAvg) as its server-side optimization strategy.
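As a loose sketch of what such a server setup can look like with the Flower framework that FL4Health builds on (illustrative only; the aggregation function, address, and strategy options below are assumptions, not this example's actual code):

import flwr as fl

def weighted_accuracy(metrics):
    # metrics is a list of (num_examples, metrics_dict) tuples reported by clients;
    # aggregate client accuracies weighted by the number of local examples.
    total_examples = sum(num for num, _ in metrics)
    accuracy = sum(num * m["accuracy"] for num, m in metrics) / total_examples
    return {"accuracy": accuracy}

strategy = fl.server.strategy.FedAvg(
    min_fit_clients=3,  # wait for all three clients before training starts
    min_available_clients=3,
    evaluate_metrics_aggregation_fn=weighted_accuracy,
)

fl.server.start_server(
    server_address="0.0.0.0:8080",
    config=fl.server.ServerConfig(num_rounds=5),
    strategy=strategy,
)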

Running the Example

To run the example, first ensure that the dependencies have been installed in your virtual environment according to the main README and that the environment is activated.

Starting Server

The next step is to start the server by running

python -m examples.ensemble_example.server --config_path /path/to/config.yaml

from the FL4Health directory. The following arguments must be present in the specified config file (a sample configuration follows the list):

  • n_clients: number of clients the server waits for in order to run the FL training
  • local_epochs: number of epochs each client will train for locally
  • batch_size: size of the batches each client will train on
  • n_server_rounds: number of rounds to run FL
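For instance, a minimal config.yaml consistent with these requirements might look like the following (n_clients must be 3 to match the three expected clients; the remaining values are illustrative, not prescribed by the example):

n_clients: 3
local_epochs: 1
batch_size: 32
n_server_rounds: 5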

Starting Clients

Once the server has started and logged "FL starting," start the three clients in separate terminals. This is done by simply running (remembering to activate your environment in each terminal)

python -m examples.ensemble_example.client --dataset_path /path/to/data

NOTE: The argument dataset_path serves two purposes, depending on whether the dataset already exists locally. If the dataset exists at the specified path, it is loaded from there. Otherwise, the dataset is automatically downloaded to that path and used in the run.

After all three clients have been started, federated learning should commence.
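For convenience, the three clients can also be launched from a single shell (a sketch assuming a Unix-like shell; running them in separate terminals as described above works just as well):

for i in 1 2 3; do
    python -m examples.ensemble_example.client --dataset_path /path/to/data &
done
wait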