Journal
MULTIMEDIA TOOLS AND APPLICATIONS
Volume 81, Issue 29, Pages 41899-41910
Publisher
SPRINGER
DOI: 10.1007/s11042-021-11473-z
Keywords
Adversarial attack; IoMT; Medical image analysis; Deep learning
Funding
- Western Norway University Of Applied Sciences
Collaboration among institutes in the Internet of Medical Things (IoMT) can assist in complex medical and clinical analysis of diseases. This research proposes institutional data collaboration combined with an adversarial evasion method to increase the availability of diverse training data while protecting sensitive information. The model successfully evades attacks and achieves 95% accuracy.
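The collaboration scheme summarized above shares model weights rather than raw patient data. A minimal sketch of the weight-sharing step, assuming a FedAvg-style weighted average (the function and institution names here are hypothetical, not from the paper):

```python
import numpy as np

def federated_average(client_weights, client_sizes):
    """Hypothetical helper: weighted average of client model weights,
    where each client contributes in proportion to its dataset size."""
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

# Two hypothetical institutions with different dataset sizes.
w_a = np.array([1.0, 2.0])   # weights from institution A (100 samples)
w_b = np.array([3.0, 4.0])   # weights from institution B (300 samples)
w_global = federated_average([w_a, w_b], [100, 300])
# Institution B's weights dominate because it holds more data.
```

Each institution would then continue training from `w_global`, so no raw images ever leave the local site.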
In the Internet of Medical Things (IoMT), collaboration among institutes can support complex medical and clinical analysis of diseases. Deep neural networks (DNNs) require training on large, diverse patient datasets to achieve expert clinician-level performance. Owing to limited availability and scale, individual clinical studies do not contain sufficiently diverse patient populations for analysis, and DNN models trained on such limited datasets are constrained in their clinical performance when deployed at a new hospital. There is therefore significant value in increasing the availability of diverse training data. This research proposes institutional data collaboration alongside an adversarial evasion method to keep the data secure. The model uses a federated learning approach to share model weights and gradients. The local model first studies the unlabeled samples, classifying them as adversarial or normal. The method then uses a centroid-based clustering technique to cluster the sample images. Next, the model predicts the output of the selected images, and active learning methods are applied to choose a subsample for the human annotation task. A domain expert takes the input and confidence score and validates the samples for the model's training. The model re-trains on the new samples and sends the updated weights across the network for collaboration. We use the InceptionV3 and VGG16 models under fabricated inputs to simulate Fast Gradient Sign Method (FGSM) attacks. The model was able to evade the attacks and achieved an accuracy of 95%.
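The FGSM attack simulated in the abstract perturbs an input by one step in the sign of the loss gradient with respect to that input. A minimal sketch on a logistic-regression classifier (the paper attacks InceptionV3/VGG16; the tiny model, weights, and `fgsm_perturb` helper here are illustrative assumptions only):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, y, w, b, eps):
    """Hypothetical FGSM step for logistic regression:
    move the input in the sign of the input gradient of the
    cross-entropy loss, scaled by eps, to increase the loss."""
    p = sigmoid(w @ x + b)
    grad_x = (p - y) * w          # d(cross-entropy)/dx for this model
    return x + eps * np.sign(grad_x)

w = np.array([1.0, -2.0])         # toy classifier weights
b = 0.0
x = np.array([0.5, 0.5])          # clean input, true label 1
x_adv = fgsm_perturb(x, 1.0, w, b, eps=0.1)
# x_adv differs from x by at most eps per feature, yet pushes the
# classifier's score further away from the true label.
```

For a deep network the same idea applies, but `grad_x` is obtained by backpropagating the loss to the input pixels rather than from a closed-form expression.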