Intrusion Detection Systems (IDS) are essential for safeguarding networked environments, yet traditional centralized training approaches for detection models often raise concerns about data privacy, scalability, and adaptability to diverse environments. This research investigates the application of federated deep learning to IDS, enabling multiple entities to collaboratively train robust detection models without sharing raw data.
By distributing model training across participating nodes and aggregating only learned parameters, the proposed approach preserves privacy while leveraging diverse, real-world datasets. The study explores architectures optimized for anomaly and signature-based detection, evaluates performance against evolving cyber threats, and assesses resilience to data heterogeneity and poisoning attacks. The ultimate goal is to demonstrate that federated deep learning can deliver high detection accuracy, strong privacy guarantees, and scalable deployment for modern intrusion detection systems.
Implementation
The implementation will be evaluated in a controlled Mininet simulation environment. Network traffic data will be distributed across multiple virtual nodes to mimic real-world decentralized scenarios. Each node will locally train its deep learning model on its own subset of traffic data, ensuring that sensitive information remains at its origin. Model parameters will then be securely aggregated at a central server to produce a global detection model.
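The aggregation step described above can be sketched with federated averaging (FedAvg), in which the server computes a dataset-size-weighted mean of the parameters received from each node. The code below is a minimal illustration only; the function name, node count, and layer shapes are hypothetical and not taken from the study.

```python
import numpy as np

def fed_avg(client_params, client_sizes):
    """Federated averaging: weight each client's parameters by its
    local dataset size, so nodes with more traffic samples contribute
    proportionally more to the global model."""
    total = sum(client_sizes)
    # zip(*client_params) groups the same layer from every client together;
    # each group is averaged with per-client weights n / total.
    return [
        sum(w * (n / total) for w, n in zip(layer_group, client_sizes))
        for layer_group in zip(*client_params)
    ]

# Three simulated nodes, each holding one weight matrix and one bias
# vector (a stand-in for a locally trained detection model).
rng = np.random.default_rng(0)
clients = [[rng.normal(size=(4, 2)), rng.normal(size=2)] for _ in range(3)]
sizes = [1000, 500, 250]  # illustrative local sample counts per node

global_model = fed_avg(clients, sizes)
```

Only these averaged parameters leave the nodes; the raw traffic records used to produce them never do, which is the privacy property the design relies on.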
This deployment approach enables systematic testing of detection performance, communication overhead, and scalability under varying network conditions, while maintaining strict data privacy and simulating realistic operational constraints.
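One of the quantities to be measured, communication overhead, can be estimated up front from the model size, since each federated round exchanges parameters rather than traffic data. The back-of-the-envelope helper below is an assumption-laden sketch: the parameter count, node count, and float32 encoding are illustrative, not values from the evaluation.

```python
def round_traffic_bytes(num_params, num_clients, bytes_per_param=4):
    """Parameter traffic for one federated round: each client uploads
    its local update and downloads the new global model, so the total
    is 2 directions x clients x (params x bytes per parameter)."""
    return 2 * num_clients * num_params * bytes_per_param

# Hypothetical detection model of ~1M float32 parameters on 10 nodes.
per_round = round_traffic_bytes(1_000_000, 10)
print(per_round)  # 80_000_000 bytes (~80 MB) per round
```

Such an estimate makes the trade-off concrete: overhead grows with model size and round count but is independent of the (potentially much larger and sensitive) volume of raw traffic held at each node.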