Machine reading comprehension (MRC) is a challenging NLP problem that has attracted a great deal of research since the release of the Stanford Question Answering Dataset (SQuAD). The goal is to teach computers to “understand” a text and then answer questions about it using deep learning. Until now, however, this NLP task has lacked a way to train at scale on private text data while still sharing knowledge across organizations.
We at the Integrative Scalable Computing Laboratory have developed FedQAS, a prototype privacy-preserving machine reading system that can leverage large-scale private data without pooling those datasets in a central location. The approach combines transformer models with federated learning.
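To make the idea concrete, below is a minimal sketch of federated averaging (FedAvg), the canonical aggregation scheme in federated learning: each client trains on its own private data and only model weights travel to the server, where they are averaged weighted by local dataset size. A toy linear model stands in for the transformer QA model, and all names and data here are illustrative, not FedQAS code.

```python
import numpy as np

def local_update(global_w, X, y, lr=0.1, epochs=5):
    """One client's local training round on its private data.
    A toy linear model stands in for the transformer QA model."""
    w = global_w.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)  # MSE gradient
        w -= lr * grad
    return w

def fed_avg(client_weights, client_sizes):
    """Server-side aggregation: average client updates,
    weighted by how much data each client trained on."""
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])

# Three clients, each holding private data that is never pooled centrally.
clients = []
for _ in range(3):
    X = rng.normal(size=(50, 2))
    y = X @ true_w + 0.1 * rng.normal(size=50)
    clients.append((X, y))

global_w = np.zeros(2)
for _ in range(10):  # ten federated rounds
    updates = [local_update(global_w, X, y) for X, y in clients]
    global_w = fed_avg(updates, [len(y) for _, y in clients])

print(global_w)  # converges toward true_w without sharing raw data
```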
The system is built on the FEDn framework and deployed as a proof-of-concept alliance initiative. FedQAS is flexible, language-agnostic, and makes it straightforward for participants to join the alliance and run local model training.
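In FEDn, a client packages its local training as an entry point that reads the current global model, fine-tunes it on local data, and writes the update back for aggregation. The sketch below illustrates only that pattern; the file handling and serialization are illustrative stand-ins, not FEDn's actual model helpers (see the repositories below for the real client code).

```python
import sys
import pickle  # illustrative serialization; FEDn provides its own model I/O helpers

def train(in_model_path, out_model_path):
    """Illustrative local-training entry point: load the current global
    model, fine-tune on the client's private QA data, save the update."""
    with open(in_model_path, "rb") as f:
        weights = pickle.load(f)
    # ... fine-tune the transformer QA model on local SQuAD-style data ...
    with open(out_model_path, "wb") as f:
        pickle.dump(weights, f)

if __name__ == "__main__":
    train(sys.argv[1], sys.argv[2])
```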
Read more about the architecture and system evaluation in our paper: https://arxiv.org/abs/2202.04742
GitHub: https://github.com/aitmlouk/FEDn-client-FedQAS-tf
FEDn: https://github.com/scaleoutsystems/fedn