Encrypted weight sharing using federated learning
17th September 2020
Proposed by Cagri Ozcinar – cagriozcinar at gmail.com
Abstract: Federated learning was developed so that data distributed across different user devices can be used to train a machine learning model without the data ever being transferred off the device. This facilitates learning of the model while preserving user privacy, since the raw user data never leaves the local device. However, recent work has shown that private user data can be exploited from the gradients or weights that the edge models share with the central server.
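For context, the standard aggregation step in federated learning can be sketched as follows. This is a minimal, framework-free illustration of federated averaging (FedAvg) on plain NumPy vectors; the function name and the toy weight values are ours, not part of the proposal.

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """Weighted average of client weight vectors (FedAvg):
    each client's contribution is proportional to its local dataset size."""
    total = sum(client_sizes)
    return sum((n / total) * w for w, n in zip(client_weights, client_sizes))

# Three clients, each with a locally trained weight vector and a dataset size.
w1, w2, w3 = np.array([1.0, 2.0]), np.array([3.0, 4.0]), np.array([5.0, 6.0])
global_w = fedavg([w1, w2, w3], [10, 10, 20])  # -> array([3.5, 4.5])
```

Note that in this plain form the server sees every client's weight vector individually, which is exactly the exposure the works cited below exploit.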
In this dissertation, we would like to allow the user devices to share their weights in encrypted form: the server would perform its update on the encrypted weights and then return the updated weights to the users.
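One possible realization of "the server computes on weights it cannot read" is secure aggregation via pairwise additive masking; this is a sketch under that assumption, not necessarily the encryption scheme the dissertation will adopt (additively homomorphic encryption such as Paillier is another candidate). Each pair of clients agrees on a random mask that one adds and the other subtracts, so the server only ever sees masked vectors, yet the masks cancel exactly in the sum.

```python
import numpy as np

rng = np.random.default_rng(0)
N, DIM = 3, 4  # toy sizes for illustration

# Each client's local (plaintext) weight update.
updates = [rng.normal(size=DIM) for _ in range(N)]

# Every pair of clients (i, j), i < j, agrees on a shared random mask.
# In practice this would come from a key exchange, not a shared RNG.
masks = {}
for i in range(N):
    for j in range(i + 1, N):
        masks[(i, j)] = rng.normal(size=DIM)

def mask_update(c, update):
    """Client c hides its update: it adds the masks shared with
    higher-indexed clients and subtracts those shared with
    lower-indexed ones, so each mask appears once with each sign."""
    out = update.copy()
    for j in range(N):
        if j == c:
            continue
        out += masks[(c, j)] if c < j else -masks[(j, c)]
    return out

# The server only ever receives the masked updates...
masked = [mask_update(c, u) for c, u in enumerate(updates)]

# ...but the pairwise masks cancel in the sum, so the aggregate
# (and hence the average) is computed exactly.
aggregate = sum(masked) / N
```

Each individual masked vector is statistically hidden by its random masks, while the averaged model the server returns is identical to the one plain federated averaging would produce.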
References:
- Federated Learning Using a Mixture of Experts, https://arxiv.org/pdf/2010.02056.pdf
- https://www.forbes.com/sites/marymeehan/2019/11/26/data-privacy-will-be-the-most-important-issue-in-the-next-decade/#3211e2821882
- https://www.nature.com/articles/s42256-020-0186-1
- https://federated.withgoogle.com/#learn
- https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=9146141
- https://ai.googleblog.com/2017/04/federated-learning-collaborative.html
Requirements:
- Basic understanding of deep learning.
- Strong Python programming skills with knowledge of PyTorch/TensorFlow.
- Ideal candidates should have an interest in deep learning in general, and must be able to follow new research trends and learn new tools.