
Cross-Training with Multi-View Knowledge Fusion for Heterogenous Federated Learning

Authors: Zhuang Qi, Lei Meng, Weihao He, Ruohan Zhang, Yu Wang, Xin Qi, Xiangxu Meng
TLDR:
The document proposes FedCT, a novel multi-view knowledge-guided cross-training method that addresses the knowledge forgetting caused by dataset bias in federated learning. FedCT consists of three main modules: consistency-aware knowledge broadcasting, multi-view knowledge-guided representation learning, and mixup-based feature augmentation. Together, these modules expand the local knowledge base and enhance the consistency of knowledge among different clients. Extensive experiments on four datasets show that FedCT mitigates knowledge forgetting and outperforms existing methods. Its model-agnostic strategy, which combines cross-training with multi-view knowledge distillation, is the paper's main contribution.
Abstract

Federated learning benefits from cross-training strategies, which enable models to train on data from distinct sources to improve generalization capability. However, data heterogeneity between sources may lead models to gradually forget previously acquired knowledge when undergoing cross-training to adapt to new tasks or data sources. We argue that integrating personalized and global knowledge to gather information from multiple perspectives could potentially improve performance. To achieve this goal, this paper presents a novel approach that enhances federated learning through a cross-training scheme incorporating multi-view information. Specifically, the proposed method, termed FedCT, includes three main modules. The consistency-aware knowledge broadcasting module optimizes model assignment strategies, which enhances collaborative advantages between clients and achieves an efficient federated learning process. The multi-view knowledge-guided representation learning module leverages fused prototypical knowledge from both global and local views to preserve local knowledge before and after model exchange, and to ensure consistency between local and global knowledge. The mixup-based feature augmentation module aggregates rich information to further increase the diversity of the feature space, which enables the model to better discriminate complex samples. Extensive experiments were conducted on four datasets, covering performance comparison, an ablation study, in-depth analysis, and a case study. The results demonstrate that FedCT alleviates knowledge forgetting from both local and global views, which enables it to outperform state-of-the-art methods.
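To make the representation-learning module concrete, here is a minimal sketch, in PyTorch, of fusing prototypical knowledge from the global and local views and distilling it into local representations. The function names, the fusion weight alpha, and the temperature are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def class_prototypes(features, labels, num_classes):
    """Mean feature vector per class; zero rows for classes absent locally."""
    protos = torch.zeros(num_classes, features.size(1), device=features.device)
    for c in range(num_classes):
        mask = labels == c
        if mask.any():
            protos[c] = features[mask].mean(dim=0)
    return protos

def fuse_prototypes(local_protos, global_protos, alpha=0.5):
    """Fuse the local and global views by convex combination (alpha assumed)."""
    return alpha * local_protos + (1.0 - alpha) * global_protos

def proto_distill_loss(features, labels, fused_protos, temperature=0.5):
    """Cross-entropy over cosine similarities to the fused prototypes,
    pulling local representations toward the multi-view knowledge."""
    feats = F.normalize(features, dim=1)
    protos = F.normalize(fused_protos, dim=1)
    logits = feats @ protos.t() / temperature
    return F.cross_entropy(logits, labels)
```

In a setup like this, the distillation term would be added to the ordinary classification loss during local training, so that features stay close to the fused prototypes before and after model exchange.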

Method

The paper introduces FedCT, a novel multi-view knowledge-guided cross-training method that expands the local knowledge base and enhances the consistency of knowledge among clients in federated learning. FedCT consists of three main modules: consistency-aware knowledge broadcasting, multi-view knowledge-guided representation learning, and mixup-based feature augmentation. The method targets the knowledge forgetting caused by dataset bias, and extensive experiments demonstrate its effectiveness in mitigating forgetting and expanding the learnable knowledge of local models. In addition, the cross-training strategy is model-agnostic and plug-and-play, which enhances collaboration benefits between clients and improves communication efficiency under heterogeneous data distributions; one round of the exchange is sketched below.
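As a rough illustration of how such an exchange could be orchestrated, below is a minimal sketch of one cross-training round. It substitutes a random derangement for FedCT's consistency-aware assignment, and the clients interface (.model, .train_local) is a hypothetical simplification.

```python
import copy
import random

def derangement(n):
    """Return a permutation of range(n) with no fixed points, so every
    model is handed to a different client. FedCT instead optimizes this
    assignment for consistency between clients (not reproduced here)."""
    assert n >= 2, "cross-training needs at least two clients"
    perm = list(range(n))
    while any(i == p for i, p in enumerate(perm)):
        random.shuffle(perm)
    return perm

def cross_training_round(clients):
    """clients: objects with a .model attribute and a .train_local(model)
    method that trains the given model on the client's data and returns it."""
    snapshot = [copy.deepcopy(c.model) for c in clients]  # models before exchange
    assignment = derangement(len(clients))
    for i, client in enumerate(clients):
        # Each client continues training a peer's model on its own data,
        # exposing that model to a new data source.
        client.model = client.train_local(snapshot[assignment[i]])
    return clients
```

The derangement guarantees every model visits a new data source each round; the paper's broadcasting module goes further by choosing the assignment to maximize collaborative benefit.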

Main Finding

The main finding of this paper is that the proposed multi-view knowledge-guided cross-training method, FedCT, effectively expands the learnable knowledge of local models and mitigates the knowledge forgetting caused by dataset bias in federated learning. Its three modules, consistency-aware knowledge broadcasting, multi-view knowledge-guided representation learning, and mixup-based feature augmentation, together deliver significant performance improvements under data heterogeneity while preserving knowledge across rounds; the augmentation step is sketched below.
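For the augmentation module, here is a minimal sketch of mixup applied in feature space; the Beta-distribution parameter and the use of one-hot soft labels are assumptions rather than the paper's exact recipe.

```python
import torch

def feature_mixup(features, labels_onehot, beta=1.0):
    """Convexly mix features and soft labels within a batch to diversify
    the feature space seen by the classifier head."""
    lam = torch.distributions.Beta(beta, beta).sample().item()
    perm = torch.randperm(features.size(0), device=features.device)
    mixed_features = lam * features + (1.0 - lam) * features[perm]
    mixed_labels = lam * labels_onehot + (1.0 - lam) * labels_onehot[perm]
    return mixed_features, mixed_labels
```

Training the classifier on the mixed pairs with a soft-label cross-entropy would expose it to richer, interpolated samples, matching the stated goal of better discriminating complex samples.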

Conclusion

This paper concludes that the proposed FedCT method effectively mitigates the knowledge forgetting caused by dataset bias in federated learning. Extensive experiments on multiple datasets show that FedCT expands the learnable knowledge of local models and alleviates dataset bias from the perspective of representation consistency. The cross-training strategy is model-agnostic and plug-and-play, enhancing collaboration benefits between clients and improving communication efficiency under heterogeneous data distributions. Overall, the findings highlight the value of multi-view knowledge-guided cross-training for addressing data heterogeneity and knowledge preservation in federated learning.

Keywords

Federated learning, Cross-training, Knowledge forgetting, Prototypical distillation, Non-IID data
