FedVision

The FedVision dataset was created jointly by WeBank and ExtremeVision to facilitate the advancement of academic research and industrial applications of federated learning.

The FedVision project

  • Provides image datasets with standardized annotations for federated object detection.
  • Provides key statistics and system metrics of the datasets.
  • Provides a set of baseline implementations for further research.

Datasets

We introduce two realistic federated datasets, built from the same street imagery under two device partitions (5 and 20 devices).

  • Federated Street, a real-world object detection dataset that annotates images captured by a set of street cameras based on the objects present in them, covering 7 classes. In this dataset, each camera, or every few cameras, serves as a device.
| Dataset | Number of devices | Total samples | Number of classes |
| --- | --- | --- | --- |
| Federated Street | 5, 20 | 956 | 7 |

File descriptions

  • Street_Dataset.tar contains the image data and ground truth for the train and test sets of the street dataset.
    • Images: the directory that contains the train and test image data.
    • train_label.json: the annotation file, saved in JSON format. train_label.json is a list containing the annotation information for the Images set. The length of the list equals the number of images, and each value in the list represents one image_info. Each image_info is a dictionary whose keys are image_id, device1_id, device2_id and items. We split the street dataset in two ways: first into 5 parts according to geographic information, and then further into 20 parts. This is why each image carries both a device1_id and a device2_id, meaning the data is distributed over either 5 or 20 devices. items is a list, which may contain multiple objects. The schema is shown below; a short loading sketch follows this list.
      [
        {
         "image_id": the id of the train image, for example 009579.
         "device1_id": the id of device1 ,specifies which device the image is on.
         "device2_id": the id of device2.
         "items": [
          {
           "class": the class of one object,
           "bbox": ["xmin", "ymin", "xmax", "ymax"], the coordinates of a bounding box
          },
          ...
          ]
        },
        ...
      ]
    • test_label.json: the annotations of the test data are almost the same as those in train_label.json. The only difference is that the image_info of the test data does not contain the device id keys.
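For illustration, the sketch below loads train_label.json and groups the image_info entries by device under either split. This is a minimal sketch, not part of the released code: the extraction path Street_Dataset/train_label.json and the helper name partition_by_device are assumptions for illustration; only the key names device1_id and device2_id come from the annotation format described above.

```python
import json
from collections import defaultdict

def partition_by_device(label_path, split="device1_id"):
    """Group image annotations by device id, using device1_id for the
    5-device split or device2_id for the 20-device split."""
    with open(label_path) as f:
        image_infos = json.load(f)  # a list of image_info dictionaries

    devices = defaultdict(list)
    for info in image_infos:
        devices[info[split]].append(info)
    return devices

# Example usage (the path assumes Street_Dataset.tar was extracted in place):
# devices = partition_by_device("Street_Dataset/train_label.json", split="device2_id")
# for device_id, infos in sorted(devices.items()):
#     print(device_id, len(infos), "images")
```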

Evaluation

We use the standard PASCAL VOC 2010 mean Average Precision (mAP) for evaluation (the mean is taken over per-class APs).
To be considered a correct detection, the overlap ratio $a_o$ between the predicted bounding box $B_p$ and the ground-truth bounding box $B_{gt}$ must exceed 0.5, where $a_o$ is computed by the formula

$$a_o = \frac{\mathrm{area}(B_p \cap B_{gt})}{\mathrm{area}(B_p \cup B_{gt})}$$

where $B_p \cap B_{gt}$ denotes the intersection of the predicted and ground-truth bounding boxes and $B_p \cup B_{gt}$ their union. Average Precision is calculated for each class respectively:

$$AP = \frac{\sum_{k} P(k)\,\mathrm{rel}(k)}{n}$$

where $P(k)$ is the precision at cutoff $k$ in the ranked list of detections, $\mathrm{rel}(k)$ equals 1 if detection $k$ matches a ground-truth object of the given class and 0 otherwise, and $n$ is the total number of objects in the given class. For $K$ classes, mAP is the average of the per-class APs:

$$mAP = \frac{1}{K}\sum_{i=1}^{K} AP_i$$
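As a concrete illustration of the overlap criterion, the sketch below computes the ratio $a_o$ for two boxes given in the [xmin, ymin, xmax, ymax] format used by the annotations. It is a minimal sketch of a standard IoU computation, not the official evaluation code.

```python
def overlap_ratio(pred, gt):
    """Intersection over union a_o between a predicted box and a
    ground-truth box, both given as [xmin, ymin, xmax, ymax]."""
    ix_min = max(pred[0], gt[0])
    iy_min = max(pred[1], gt[1])
    ix_max = min(pred[2], gt[2])
    iy_max = min(pred[3], gt[3])

    # Clamp to zero when the boxes do not intersect.
    iw = max(0.0, ix_max - ix_min)
    ih = max(0.0, iy_max - iy_min)
    inter = iw * ih

    area_pred = (pred[2] - pred[0]) * (pred[3] - pred[1])
    area_gt = (gt[2] - gt[0]) * (gt[3] - gt[1])
    union = area_pred + area_gt - inter
    return inter / union if union > 0 else 0.0

# A detection is counted as correct when overlap_ratio(pred, gt) > 0.5.
```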