Implementing demo for ML10:2023 Model Poisoning #112
-
Model poisoning means directly manipulating the model (parameters). What scripts do you have in mind?
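For concreteness, direct parameter manipulation could look like the minimal sketch below (PyTorch and the toy `nn.Linear` stand-in are my assumptions, purely illustrative; in practice the attacker would tamper with the victim's trained weights):

```python
import torch
import torch.nn as nn

# Stand-in for a trained classifier (in practice, load the victim's weights).
model = nn.Linear(20, 2)

# Direct model poisoning: perturb every parameter tensor in place.
with torch.no_grad():
    for param in model.parameters():
        param.add_(0.5 * torch.randn_like(param))

# The poisoned weights would then be saved back in place of the originals.
```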
-
In Federated Learning, we assume the attacker injects fake clients into a federated learning system and sends carefully crafted fake local model updates to the cloud server during training, such that the learned global model has low accuracy on many indiscriminate test inputs. Towards this goal, our attack drags the global model towards an attacker-chosen base model that has low accuracy. Example framework: OpenFL. I will put up more ideas w.r.t. other ML techniques and the model poisoning vector.
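A minimal NumPy sketch of that idea (the `craft_fake_update` helper and the toy FedAvg round are my assumptions for illustration, not OpenFL API):

```python
import numpy as np

def craft_fake_update(global_weights, base_weights, scale=1.0):
    """Fake-client update dragging the aggregate toward the
    attacker-chosen low-accuracy base model."""
    return scale * (base_weights - global_weights)

# Toy FedAvg round: one scaled fake client among nine honest ones.
rng = np.random.default_rng(0)
global_w = rng.normal(size=10)   # current global model
base_w = np.zeros(10)            # attacker-chosen base model
honest = [rng.normal(scale=0.01, size=10) for _ in range(9)]
fake = craft_fake_update(global_w, base_w, scale=10.0)

# The server averages all updates; the scaled fake update dominates,
# pulling the new global model almost exactly onto the base model.
global_w = global_w + np.mean(honest + [fake], axis=0)
print(np.linalg.norm(global_w - base_w))  # ~0: the attack succeeded
```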
-
Thinking of ML10:2023 Model Poisoning, we can create two scripts that, although carrying out the same operation (perhaps classification), provide different outcomes.
This way, we can showcase model poisoning in action along with the corresponding theory; see the sketch below.
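A minimal sketch of the two-script idea, assuming scikit-learn; the sign-flip is just one illustrative way of poisoning parameters directly:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Script 1: clean training and evaluation.
clean = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

# Script 2: identical operation, but the trained parameters are then
# manipulated directly (ML10:2023), flipping the decision boundary.
poisoned = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
poisoned.coef_ = -poisoned.coef_
poisoned.intercept_ = -poisoned.intercept_

print("clean accuracy:   ", clean.score(X_te, y_te))
print("poisoned accuracy:", poisoned.score(X_te, y_te))
```

Both scripts run the same classification pipeline; only the tampered parameters change the outcome.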
Please share your ideas with me on this!
cc: @sagarbhure @shsingh @robvanderveer