Volume 20 No 10 (2022)
A Privacy Preserving Approach for Mitigating Data Poisoning Attack in Federated Learning
Kaushik S, Greeshma Sarath
Abstract
As artificial intelligence and machine learning technologies grow and are integrated into products and services to improve performance and user experience, data privacy concerns have become a topic that must be addressed. Companies and institutes apply these technologies to real-world problems, from AI that assists in the medical field to AI that helps moderate traffic on busy streets, and the one requirement common across these situations is correct data from which the intelligence can be built. Achieving such a state requires collaboration among many parties, and a zero-trust system must be created so that everyone can benefit from these technological advancements.
For ML and AI in heterogeneous systems, a party or organization must be able to share the intelligence it has acquired with others without sharing the underlying data. Federated learning was introduced for this purpose: participants share the acquired intelligence (the model) rather than the data itself. Although this method is a far safer alternative to sharing raw data, it still has pitfalls that need to be addressed. One such problem is the creation of false intelligence, by poisoning the data from which the intelligence is derived, which is then shared with others and used to build an AI or ML system; the resulting system carries the false intelligence, which may reduce its overall usefulness or sometimes render it completely unusable. Because these heterogeneous systems are deployed over networks such as the internet, the credibility of the source of the intelligence (the ML or AI model) cannot be known in advance. This paper proposes a security firewall framework that helps the developers of such systems create defense protocols to better defend against such data poisoning attacks.
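The abstract does not detail the framework itself. As a minimal, hypothetical sketch of the kind of defense it describes, the Python below runs federated averaging with a screening step that rejects outlier client models before aggregation; the function names, the median-based outlier test, and the threshold are illustrative assumptions, not the authors' protocol.

import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    # One client's local training: plain logistic-regression gradient
    # descent on that client's private data (the data is never shared).
    w = weights.copy()
    for _ in range(epochs):
        preds = 1.0 / (1.0 + np.exp(-X @ w))
        grad = X.T @ (preds - y) / len(y)
        w -= lr * grad
    return w

def filter_and_aggregate(global_w, client_ws, z_thresh=1.5):
    # Illustrative "firewall" step: screen incoming client models against
    # a robust reference (the coordinate-wise median) and drop statistical
    # outliers before federated averaging.
    stacked = np.stack(client_ws)
    median = np.median(stacked, axis=0)
    dists = np.linalg.norm(stacked - median, axis=1)
    mu, sigma = dists.mean(), dists.std() + 1e-12
    accepted = [w for w, d in zip(client_ws, dists)
                if abs(d - mu) / sigma < z_thresh]
    if not accepted:
        return global_w               # no trusted updates: keep old model
    return np.mean(accepted, axis=0)  # average only the accepted models

# Demo: three honest clients and one label-flipping (poisoned) client.
rng = np.random.default_rng(0)
d = 10
global_w = np.zeros(d)
clients = []
for poisoned in [False, False, False, True]:
    X = rng.normal(size=(200, d))
    y = (X[:, 0] > 0).astype(float)
    if poisoned:
        y = 1.0 - y                   # data poisoning: flipped labels
    clients.append((X, y))

for _ in range(10):
    updates = [local_update(global_w, X, y) for X, y in clients]
    global_w = filter_and_aggregate(global_w, updates)

In this toy run, the label-flipping client's model drifts away from the honest majority, so its distance from the median update becomes a statistical outlier and the screen excludes it from the average.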
Keywords
Federated learning, Model analysis, Model poisoning defence strategy
Copyright
Copyright © Neuroquantology
Creative Commons License
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
Articles published in the Neuroquantology are available under Creative Commons Attribution Non-Commercial No Derivatives Licence (CC BY-NC-ND 4.0). Authors retain copyright in their work and grant IJECSE right of first publication under CC BY-NC-ND 4.0. Users have the right to read, download, copy, distribute, print, search, or link to the full texts of articles in this journal, and to use them for any other lawful purpose.