Abstract
Automatic dependent surveillance-broadcast (ADS-B) has been widely adopted
due to its low cost and high precision, and deep learning methods for
ADS-B signal classification have achieved high performance. However,
recent studies have shown that deep neural networks are highly sensitive
and vulnerable to small perturbations. We propose an ADS-B signal
poisoning method based on U-Net that generates poisoned signals. We
designate one ADS-B signal classification network as the attacked
network and another as the protected network. When the poisoned signals
are fed into these two well-performing classification networks, they are
misclassified by the attacked network while still classified correctly
by the protected network. We further propose an Attack-Protect-Similar
loss that achieves a “triple win”: the attacked network performs poorly,
the protected network performs well, and the poisoned signals remain
similar to the unpoisoned ones. Experimental results show that the
attacked network classifies poisoned signals with only 1.55% accuracy,
while the protected network maintains a classification accuracy of
99.38%.
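
The Attack-Protect-Similar loss described above can be sketched as a weighted sum of three terms; the symbols, weights, and exact term forms below are illustrative assumptions, not the paper's stated formulation:

```latex
% Illustrative sketch (symbols and weights are assumptions):
% the poisoned signal is x' = x + G(x), where G is the U-Net generator;
% f_a is the attacked network, f_p the protected network, y the true label.
% The attack term pushes f_a away from y, the protect term keeps f_p on y,
% and the similarity term keeps x' close to the clean signal x.
\mathcal{L} =
    \underbrace{-\,\mathrm{CE}\bigl(f_a(x'),\, y\bigr)}_{\text{attack}}
  + \lambda_1 \underbrace{\mathrm{CE}\bigl(f_p(x'),\, y\bigr)}_{\text{protect}}
  + \lambda_2 \underbrace{\lVert x' - x \rVert_2^2}_{\text{similarity}}
```

Minimizing such a joint objective is what yields the “triple win”: low attacked-network accuracy, preserved protected-network accuracy, and low perturbation magnitude.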