
LPI Radar Waveform Recognition Based on Neural Architecture Search

Ma, Zhiyuan; Yu, Wenting; et al.
In: Computational Intelligence and Neuroscience, Vol. 2022 (2022)

Abstract

To achieve intelligent recognition, the deep learning classifiers used for radar waveforms are normally trained with transfer learning, in which a convolutional neural network pretrained on an external large-scale classification dataset (e.g., ImageNet) serves as the backbone. Although transfer learning can effectively avoid overfitting, transferred models are usually redundant and may not generalize well. To eliminate the dependence on transfer learning and achieve high generalization ability, this paper introduces neural architecture search (NAS) to find suitable classifiers for radar waveforms for the first time. First, differentiable architecture search (DARTS), one of the innovative techniques in NAS, was used to design a classifier for 15 kinds of low probability of intercept (LPI) radar waveforms automatically. Then, a method with an auxiliary classifier, called flexible-DARTS, was proposed. By adding an auxiliary classifier in the middle layers, flexible-DARTS designs better-generalized classifiers than standard DARTS. Finally, the performance of the classifier in practical application was compared with related work. Simulations show that the model based on flexible-DARTS performs better, reaching an accuracy of 79.2% on the 15 kinds of radar waveforms at −9 dB SNR, which proves the effectiveness of the proposed method for the recognition of radar waveforms.

1. Introduction

In modern electronic warfare, the classification of radar waveforms is one of the pivotal technologies in radar countermeasure and reconnaissance systems and an important basis for judging the threat posed by enemy weapons [1]. However, with the application of various new radar systems based on low probability of intercept (LPI) technology, traditional classification can no longer meet the needs of actual electronic warfare.

Researchers convert the waveform into a two-dimensional time-frequency image by Choi–Williams distribution (CWD) time-frequency analysis [3] or other techniques and then feed it to different models, continuously upgrading recognition capability. Owing to their specific properties, different machine learning models can produce different results even on the same input [4]. Compared with other neural networks [5–7], the convolutional neural network (CNN) performs better in image processing, including radar and sonar images, facial images, and hand gesture images [8–10]. Therefore, it has also been widely used in the recognition of radar waveforms [7, 11–20].

There are two options for the CNN used in such research. First, researchers design a CNN independently for each task [14–17]. Kong et al. [14] took 12 kinds of radar waveforms as the target and repeatedly tuned the hyperparameters of their CNN; after a large number of experiments, it achieved better recognition accuracy than contemporaneous models. However, designing a model from scratch forces researchers into trial and error or random parameter settings, and the resulting performance may still be unsatisfactory. To avoid this tedious manual work, in recent years researchers have preferred the second option: transferring a CNN [21] that has been pretrained on external large-scale classification data (such as ImageNet [22]), for example, LeNet [23], AlexNet [23–26], VGGNet [27], GoogLeNet [28–30], ResNet [28], and DenseNet [31]. In a recent study [31], researchers transferred DenseNet as a classifier to reinforce recognition accuracy at low signal-to-noise ratio (SNR); the accuracy on 8 kinds of waveforms reached 93.4% at −8 dB SNR. However, Ghadimi et al. [30] pointed out that when they transferred GoogLeNet, which had been pretrained on almost 12 million images, evaluated on 50,000 images, and tested on 100,000 images, the differences between the pretraining dataset and the target dataset increased the risk of overfitting as well, and the authors faced the same tedious adjustment work when transferring GoogLeNet to 9 kinds of radar waveforms. As Table 1 shows, researchers have tried many ways to improve the accuracy of radar waveform recognition. However, most of them consider only the accuracy of the classification algorithm and neglect other performance indicators (such as model build time and misclassification rate) [35].

Table 1: The related works in the classification of radar waveforms.

Improving the preprocessing:
  • Designing new features [19, 32]
  • Improving the TFD algorithm [15, 33]
  • Making the picture clearer [18, 29]

Improving the classifier:
  • Structure expansion [6, 7, 16, 25]
  • Designing the CNN manually [14–16]
  • Replacing the fully connected layer (FC) with other structures [20, 24, 32]
  • Transfer learning [23, 24, 26–31, 33, 34]

Although transfer learning solves the design problem of the model, two important issues have been ignored. First, the transferred model does not strictly meet the requirements of transfer learning: it is pretrained on an external large-scale optical image dataset that differs considerably from radar waveform images obtained through time-frequency transformation. Second, to fit huge datasets better, such models keep growing deeper, which can cause overfitting on smaller datasets such as radar waveform images. In general, transfer learning may not be the best choice.

To eliminate the dependence on transfer learning and achieve high generalization ability, we introduce neural architecture search (NAS) [36] to the recognition of radar waveforms for the first time, designing the classifier automatically. NAS is an algorithm that learns neural network architectures automatically. It can design a well-performing network from scratch, comparable to expert-designed networks on some tasks [37]. After comparing architecture search based on evolutionary algorithms [38] and reinforcement learning [39], we chose differentiable neural architecture search, represented by differentiable architecture search (DARTS). DARTS relaxes the search space into a continuous one, has high search efficiency, and is currently the fastest search algorithm [40]. However, owing to the approximate solution of the bilevel optimization problem, DARTS also suffers from unstable search results and performance degradation in the validation stage [41]. In recent years, some improvements have been explored [43], but they suit only specific tasks and cannot be used generally. To solve this problem, we propose an architecture search method with an auxiliary classifier (called flexible-DARTS) that has a wide range of applications. By adding auxiliary classifiers at different feature-map output sizes, the improved method not only reduces the structural difference between the search stage and the validation stage but also optimizes more efficiently, since the loss propagates more strongly.

The main contributions of this paper can be summarized as follows:

  • (1) To the best of our knowledge, this is the first work to explore improving radar waveform classification with the help of NAS
  • (2) To solve the instability of DARTS, we propose a new architecture search method with an auxiliary classifier, called flexible-DARTS
  • (3) The two methods are verified on an experimental platform, and the results are compared with previous research

The remainder of this article is organized as follows: Section 2 introduces DARTS and the flexible-DARTS proposed in this article and compares their performance. Section 3 demonstrates through experiments the strong performance of the network architecture found by flexible-DARTS and proves its practicability for radar waveform recognition through the improvement in recognition accuracy. Finally, Section 4 draws the conclusions of this paper.

2. Methods

Section 2.1 presents the concept of using DARTS to design the classifier for radar waveforms and points out the inadequacies of DARTS. Section 2.2 then presents the flexible-DARTS with an auxiliary classifier proposed in this paper.

2.1. Standard DARTS

DARTS obtains a cell through the training dataset; the cell is composed of input nodes, intermediate nodes, output nodes, and edges. Suppose each cell has two input nodes and one output node; for a convolutional network, the two input nodes are the outputs of the two preceding cells. After multiple training iterations, DARTS forms a large network, and a hyperparameter controls the number of cells connected to form the whole network. The whole process can be summarized as follows: Figure 1(a) shows the initial form of the cell, assuming 4 nodes per cell. In Figure 1(b), all edges between nodes are connected; each edge carries a mixture of candidate operations, and each operation corresponds to a probability value. Figure 1(c) shows that, during training, the bilevel optimization problem is solved while jointly optimizing the mixing probabilities and the weights. Figure 1(d) shows that the operation with the largest retained probability forms the final cell [40].

Figure 1: The process in the DARTS search space.
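
For intuition, the cell just described can be viewed as a small directed acyclic graph in which every intermediate node sums the operations applied to all of its predecessors, and the output node concatenates the intermediate nodes. The following is a minimal PyTorch sketch of that wiring, not the authors' released code; the per-edge operation factory `edge_op` is our own placeholder:

```python
import torch
import torch.nn as nn

class Cell(nn.Module):
    """DARTS-style cell: two input nodes, four intermediate nodes, one output.

    edge_op(i, j) must return an nn.Module for the edge from node i to node j;
    every intermediate node sums the outputs of all its predecessor edges, and
    the output node concatenates the intermediate nodes along the channel axis.
    """
    def __init__(self, edge_op, num_intermediate=4):
        super().__init__()
        self.num_intermediate = num_intermediate
        self.edges = nn.ModuleDict()
        for j in range(2, 2 + num_intermediate):   # intermediate nodes
            for i in range(j):                      # all earlier nodes feed j
                self.edges[f"{i}->{j}"] = edge_op(i, j)

    def forward(self, s0, s1):
        states = [s0, s1]  # outputs of the two preceding cells
        for j in range(2, 2 + self.num_intermediate):
            states.append(sum(self.edges[f"{i}->{j}"](states[i])
                              for i in range(j)))
        return torch.cat(states[2:], dim=1)
```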

To make the search space continuous, softmax is used to relax the mixed weights of the operations; the specific scheme is detailed in [40]. The mixed operation between any pair of nodes (i, j) is weighted by conditional probability as

(1) \bar{o}^{(i,j)}(x) = \sum_{o \in \mathcal{O}} \frac{\exp\left(\alpha_o^{(i,j)}\right)}{\sum_{o' \in \mathcal{O}} \exp\left(\alpha_{o'}^{(i,j)}\right)} \, o(x).

The conditional probability weights of the mixed operation are parameterized by a vector α^(i,j) of dimension |O|. Through the model of formula (1), the architecture search problem reduces to learning a set of continuous variables α = {α^(i,j)}. The process of solving the problem is shown in Figure 1. L_train denotes the training loss and L_val the validation loss. After the operations are relaxed, the structural parameters α and the weights w can be learned jointly. Similar to reinforcement learning or evolutionary algorithms, DARTS regards the performance on the validation set as the final reward or goodness of fit; its goal is to minimize the validation loss by gradient descent.
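
Formula (1) maps directly onto a softmax-weighted sum of candidate operations. A minimal sketch in PyTorch (not the authors' released code) might look as follows; the candidate list `ops` stands in for the operation set O:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MixedOp(nn.Module):
    """Softmax-weighted mixture of candidate operations on one edge (i, j)."""
    def __init__(self, ops):
        super().__init__()
        self.ops = nn.ModuleList(ops)                      # candidate set O
        self.alpha = nn.Parameter(torch.zeros(len(ops)))   # architecture logits

    def forward(self, x):
        # Formula (1): weight each candidate o(x) by softmax(alpha) and sum.
        weights = F.softmax(self.alpha, dim=0)
        return sum(w * op(x) for w, op in zip(weights, self.ops))
```

In full DARTS, the α logits of all edges form a separate parameter group that is optimized on the validation data, while the weights inside the candidate operations are optimized on the training data.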

DARTS uses a two-step method: adjust w first, then adjust α, and so on until convergence [40]. Whenever the outer structural parameters change, the weights of the inner model must be recalculated, which is very expensive, so Liu et al. [40] proposed an approximation scheme. The specific algorithm for iteratively optimizing w and α with gradient descent is shown in Figure 2.

Figure 2: Workflow for optimizing w and α using gradient descent.
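
The iterative scheme of Figure 2 can be sketched as follows in its first-order form, alternating a validation-loss step on α with a training-loss step on w. This is our schematic reading, assuming the two optimizers hold the architecture parameters and the network weights, respectively:

```python
import torch

def search_step(model, train_batch, val_batch, w_opt, alpha_opt, loss_fn):
    """One iteration of the two-step DARTS scheme (first-order approximation)."""
    # Step 1: update architecture parameters alpha on the validation loss L_val.
    alpha_opt.zero_grad()
    x_val, y_val = val_batch
    loss_fn(model(x_val), y_val).backward()
    alpha_opt.step()

    # Step 2: update network weights w on the training loss L_train.
    w_opt.zero_grad()
    x_tr, y_tr = train_batch
    loss_fn(model(x_tr), y_tr).backward()
    w_opt.step()
```

Each optimizer covers only its own parameter group, so stale gradients left on the other group by a backward pass are cleared by the corresponding zero_grad at the start of the next step.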

As this description shows, the update process optimizes w and α iteratively. The first-order approximation performs gradient descent directly on the network weights w, whereas in the second-order approximation w is updated again when the gradient is taken with respect to α, which makes the approximation of w(α) more accurate. In summary, architecture search in DARTS has two steps. The first step uses DARTS to search the architecture, optimizing the two types of computing cells through the validation loss; the second step builds a network from the optimized cells, trains it from scratch on the training set, and validates its performance on the validation set. Although DARTS achieves excellent architecture search performance under gradient optimization, four problems remain when using it for architecture search:

  • (1) The search space of the differentiable architecture is limited, and the architectures that can be found remain simple
  • (2) Search results are unstable and easily affected by the initial values and the number of training iterations
  • (3) The consumption of hardware resources is still high
  • (4) Performance may degrade when the searched architecture is transferred to the validation setting

To reduce the adverse impact of the above problems on the results, we propose an improved search algorithm called flexible-DARTS. By adding an auxiliary classifier in the search stage, flexible-DARTS performs better in both searching and validation.

2.2. The Proposal of Flexible-DARTS

As the fastest search algorithm to date, DARTS still consumes a huge amount of GPU memory during search, and the ability of gradients to backpropagate can be reduced. We referred to the NASNet experiment. Figure 3 shows the application of the standard DARTS algorithm to the large-scale ImageNet dataset. Two reduction levels have to be designed manually to reduce the image size to 56 × 56 before the searched cells are used for the classification task. When searching, DARTS stacks 8 levels of cells and sets no auxiliary head; in verification, however, the number of cell levels grows to 20 (with four intermediate nodes per cell). This is an obvious contradiction.

Figure 3: The workflow of standard DARTS during validation. The gray parts represent data or feature maps, the green parts the searched reduction cells, the blue parts the searched standard cells, and the red parts the hand-designed reduction units.

Thus, when the images become larger or the amount of training data grows, the performance of DARTS faces a big challenge: the network must be deeper to extract better features, and the search becomes more difficult. DARTS therefore adopted the plan used in GoogLeNet's Inception: the vanishing gradient problem is alleviated by outputting additional features at an intermediate stage. In architecture validation, an auxiliary classifier is introduced at the two-thirds level (where the feature map size is 8 × 8). In this case, however, DARTS uses the auxiliary classifier when validating but not when searching, which can aggravate the structural difference between searching and validating (also reflected in the different numbers of layers).

From this, we identified two directions in which standard DARTS can be improved when searching, validating, and transferring to the target dataset. One is to shrink the structural difference between searching and validating; the other is to reduce the manually designed part of the network architecture. The GoogLeNet (also known as Inception V1) paper [45] notes: "On the classification task, the powerful performance of the shallower network shows that the features generated by the middle layer of the network are extremely discriminative." By adding auxiliary classifiers to these middle layers, the discriminative power of the lower-stage classifiers can be improved, which not only mitigates vanishing gradients but also acts as regularization. GoogLeNet therefore uses two auxiliary classifiers in the middle layers and adds their two losses to counter the vanishing of the returned gradient (whereas the skip connections in ResNet are used to curb exploding gradients). Experiments show, however, that the influence of the auxiliary networks is relatively small (about 0.5%), meaning that adding an auxiliary classifier during training alone can achieve the same effect.
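
Under this GoogLeNet-style scheme, the auxiliary output contributes a small, discounted term to the total loss during training only. A minimal sketch follows; the 0.4 discount is an assumption borrowed from the public DARTS convention, not a value reported in this paper:

```python
import torch.nn.functional as F

def total_loss(main_logits, aux_logits, target, aux_weight=0.4):
    """Main classification loss plus a discounted auxiliary-classifier loss."""
    loss = F.cross_entropy(main_logits, target)
    # The auxiliary branch injects extra gradient at the middle of the network.
    loss = loss + aux_weight * F.cross_entropy(aux_logits, target)
    return loss
```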

Based on the above analysis, we propose flexible-DARTS, an algorithm that adopts an auxiliary classifier flexibly at search time. Because standard DARTS handles feature extraction for large images with a manually designed part, we discard that part when facing large-size datasets; the cell architecture searched by flexible-DARTS is used throughout feature extraction. To meet the requirements of architecture search on large-size image datasets, different search spaces are used for the normal group and the reduction group. In addition, auxiliary classifiers are added to narrow the gap between the network architectures used during searching and testing. To find the auxiliary-classifier architecture best suited to radar waveforms, we compared the classification performance of different auxiliary classifiers; three kinds of auxiliary classifier architecture are shown in Figure 4.

Figure 4: Three kinds of auxiliary classifier architecture. (a) Three auxiliary classifiers. (b) Two auxiliary classifiers. (c) One auxiliary classifier.

The architecture of the auxiliary classifier is shown in Figure 5. It has four layers: one average pooling layer, two convolutional layers, and one fully connected layer.

Figure 5: The architecture of the auxiliary classifier.
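
The four-layer head of Figure 5 could be rendered as below. The channel widths and pooling geometry are assumptions borrowed from the public DARTS auxiliary head for an 8 × 8 input, not dimensions reported in this paper:

```python
import torch.nn as nn

class AuxiliaryHead(nn.Module):
    """Auxiliary classifier: avg pool -> conv -> conv -> fully connected."""
    def __init__(self, in_channels, num_classes=15):
        super().__init__()
        self.features = nn.Sequential(
            nn.AvgPool2d(kernel_size=5, stride=3),            # 8x8 -> 2x2
            nn.Conv2d(in_channels, 128, kernel_size=1, bias=False),
            nn.ReLU(inplace=True),
            nn.Conv2d(128, 768, kernel_size=2, bias=False),   # 2x2 -> 1x1
            nn.ReLU(inplace=True),
        )
        self.classifier = nn.Linear(768, num_classes)

    def forward(self, x):
        x = self.features(x)                  # -> (N, 768, 1, 1)
        return self.classifier(x.flatten(1))  # class logits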

3. Experiment

In this section, the experiment is described in five parts. Part one briefly introduces the radar waveform dataset used in this study. Part two presents the search results of the two algorithms of Section 2. Part three compares the model based on flexible-DARTS with the model based on standard DARTS and with the manually designed 2CNN3. Part four offers confusion matrices to analyze the recognition of easily confused waveforms. Finally, part five compares the recognition capability of the flexible-DARTS-based model with related work, including Baidu EasyDL.

3.1. Dataset Representation

In this research, we studied 15 kinds of waveforms, including LFM, NLFM, Costas, BPSK, five polyphase codes (Frank, P1, P2, P3, and P4), four polytime codes (T1, T2, T3, and T4), and two composite modulations (LFM/BPSK and 2FSK/BPSK), as shown in Table 2. On the assumption that the received signal is corrupted by additive white Gaussian noise (AWGN), the carrier frequency is taken as the center frequency of the signal bandwidth in this paper. The discrete-time sample model of the receiver output signal can therefore be expressed as

(2) y(k) = x(k) + w(k) = a(k) e^{jθ(k)} + w(k),

where k is the sample index that increases sequentially with the sampling interval, x(k) is the ideal discrete signal after intermediate-frequency sampling, w(k) is the AWGN, and a(k) is the nonzero constant instantaneous signal envelope within the pulse interval. All simulations in this article set a(k) = 1. θ(k) is the instantaneous phase of the sampled signal, which can be expressed by the instantaneous frequency f(k) and the instantaneous phase offset φ(k):

(3) θ(k) = 2π f(k) k T_s + φ(k),

where T_s is the sampling interval of the signal. In practice, different transmitted waveforms are formed by changing the instantaneous frequency (frequency modulation) and the instantaneous phase offset (phase modulation) of the signal.
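
As an illustration of equations (2) and (3), the following sketch synthesizes a sampled LFM pulse (Table 2, first row) and adds AWGN at a target SNR. The parameter values in the usage line are arbitrary examples, not the paper's simulation settings:

```python
import numpy as np

def lfm_pulse(f0, B, tau_pw, fs, snr_db):
    """y(k) = a(k) exp(j*theta(k)) + w(k), with LFM instantaneous frequency."""
    Ts = 1.0 / fs
    k = np.arange(int(tau_pw * fs))
    fk = f0 + B * k * Ts / tau_pw           # Table 2, LFM row: f_k
    theta = 2 * np.pi * fk * k * Ts         # equation (3), phi_k constant (0)
    x = np.exp(1j * theta)                  # a(k) = 1 as in the paper
    # Complex AWGN scaled to the requested SNR (signal power is 1).
    noise_power = 10 ** (-snr_db / 10)
    w = np.sqrt(noise_power / 2) * (np.random.randn(k.size)
                                    + 1j * np.random.randn(k.size))
    return x + w                            # equation (2)

y = lfm_pulse(f0=1e6, B=5e6, tau_pw=1e-4, fs=2e7, snr_db=-9)
```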

Table 2: The 15 radar signal waveforms (mostly LPI).

Modulation type | f_k | φ_k
LFM | f_0 + B k T_s / τ_pw | constant
NLFM | f_c + a_1 k T_s + a_2 (k T_s)^2 | constant
Costas | f_j | constant
BPSK | constant | {0, π}
Frank | constant | (2π/M)(i − 1)(j − 1)
P1 | constant | −(π/M)[M − (2j − 1)][(j − 1)M + (i − 1)]
P2 | constant | −(π/2M)(2i − 1 − M)(2j − 1 − M)
P3 | constant | (π/ρ)(i − 1)^2
P4 | constant | (π/ρ)(i − 1)^2 − π(i − 1)
T1 | constant | (2π/N_ps)⌊(N_ps/2π) mod((2π/N_ps)⌊(N_si kT_s − jτ_pw) jN_ps/τ_pw⌋, 2π)⌋
T2 | constant | (2π/N_ps)⌊(N_ps/2π) mod((2π/N_ps)⌊(N_si kT_s − jτ_pw)(2j − N_si + 1)N_ps/(2τ_pw)⌋, 2π)⌋
T3 | constant | (2π/N_ps)⌊(N_ps/2π) mod((2π/N_ps)⌊N_ps B (kT_s)^2/(2τ_pw)⌋, 2π)⌋
T4 | constant | (2π/N_ps)⌊(N_ps/2π) mod((2π/N_ps)⌊N_ps B (kT_s)^2/(2τ_pw) − N_ps f_c kT_s/2⌋, 2π)⌋
LFM-BPSK | f_c + (B/τ_pw) kT_s | {0, π}
2FSK-BPSK | f_ci | {0, π}

Note. mod(a, b) is the remainder of a divided by b, and ⌊α⌋ is the largest integer less than or equal to α. M and ρ both denote the number of encoded phases; the difference is that ρ must take values whose square root is an integer. i and j are integer indices running from 1 to M. N_ps is the number of phase states, and N_si is the number of step frequencies. f_c is the fixed carrier frequency. f_n, f_m, and f_ci denote the frequency-hopping sequences of the corresponding signals, where n = 1, 2, ..., 5, m = 1, 2, ..., 6, and i = 1, 2. τ_pw is the pulse width. a_1 and a_2 are constants.

In our research, the original image was downsampled to a size of 64 × 64, reducing the processing load without losing too much information while meeting the needs of the classifier. The radar signals used in this paper were converted by CWD to obtain time-frequency images of the fifteen signal types in a noise-free environment, as shown in Figure 6.
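
The paper does not spell out the downsampling method. As one plausible sketch, a precomputed time-frequency matrix `tfd` (assumed larger than 64 × 64) can be reduced by block averaging:

```python
import numpy as np

def downsample_to_64(tfd):
    """Reduce a time-frequency image to 64x64 by block averaging (one simple
    downsampling choice; the paper does not specify the exact method)."""
    h, w = tfd.shape
    ys = np.linspace(0, h, 65, dtype=int)   # 64 row blocks
    xs = np.linspace(0, w, 65, dtype=int)   # 64 column blocks
    out = np.empty((64, 64))
    for i in range(64):
        for j in range(64):
            out[i, j] = tfd[ys[i]:ys[i + 1], xs[j]:xs[j + 1]].mean()
    return out
```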

Figure 6: CWD time-frequency characteristic diagrams of the 15 typical radar signals of Table 2 in a noise-free environment. (a) BPSK. (b) Frank. (c) LFM. (d) NLFM. (e) Costas. (f) P1. (g) P2. (h) P3. (i) P4. (j) T1. (k) T2. (l) T3. (m) T4. (n) LFM-BPSK. (o) 2FSK-BPSK.

3.2. Search Results of the Two Algorithms

The dataset was generated by simulation with SNR ranging from −9 dB to 9 dB in 3 dB steps. For each of the 15 signal types, 800 samples were generated at each SNR. The samples were then allocated to searching data and validating data at a ratio of 3 : 1, giving 63,000 searching samples and 21,000 validating samples. The CPU was an Intel Xeon E5-2603 and the GPU an NVIDIA 1080 Ti. The simulation framework was built with PyTorch 1.6.0.
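
For reference, the sample counts and the 3 : 1 split can be reproduced with a seeded random permutation; this is an illustrative sketch, not the authors' pipeline:

```python
import torch

# 15 waveforms x 800 samples x 7 SNR levels = 84,000 images in total.
n_total = 15 * 800 * 7            # 84,000
n_search = n_total * 3 // 4       # 63,000 samples for searching
n_val = n_total - n_search        # 21,000 samples for validating

perm = torch.randperm(n_total, generator=torch.Generator().manual_seed(0))
search_idx, val_idx = perm[:n_search], perm[n_search:]
# These index sets can then drive torch.utils.data.Subset over the real dataset.
```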

Before the formal experiment, we used the three schemes in Figure 4 to search the architecture. After weighing the three indicators of GPU memory demand, search speed, and evaluation accuracy, we finally chose the third improved architecture, which adds an auxiliary branch at the position of the 8 × 8 feature map, as shown in Figure 4(c).

Standard DARTS and flexible-DARTS were then used to search architectures on the radar waveform searching data; the search stopped when validation performance exceeded 99%. The search results indicate that flexible-DARTS is better at designing well-generalized classifiers: its classifier has a parameter size of 1.2 MB, about half of the 2.3 MB of the DARTS-designed classifier. The performance curves during the search are shown in Figure 7.

Figure 7: Performance curves during architecture search. (a) Standard DARTS. (b) Flexible-DARTS.

The figures above show that flexible-DARTS is superior to standard DARTS in both search speed and training stability. Standard DARTS requires 38 epochs to complete the search, and its results can be very unstable; flexible-DARTS requires only 17 epochs, and its results are clearly stable. The search results prove that adding the auxiliary classifier enhances the stability of the search, improves search efficiency, and helps find models with excellent performance. Figures 8 and 9 show the cells obtained by standard DARTS and flexible-DARTS.

Figure 8: Cells searched on the radar signal dataset by the standard DARTS. (a) The searched normal cell. (b) The searched reduction cell.

Figure 9: Cells searched on the radar signal dataset by the flexible-DARTS using the architecture shown in Figure 4(c). (a) The searched normal cell. (b) The searched reduction cell.

3.3. Comparison of the Classification Performance

The cells shown in Figures 8 and 9 are the results of the search stage. After the search, the resulting networks were trained on the full training data; the performance during training is shown in Figure 10.

Figure 10: The performance during architecture training. (a) Standard DARTS. (b) Flexible-DARTS.

We used standard DARTS, flexible-DARTS, and the model from previous research [46] (a manually designed network, referred to as 2CNN3, which consists of four convolutional layers, four pooling layers, two fully connected layers, and one dropout layer, all with stride 1) for validation. The results are shown in Figure 11.
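
Based only on the layer counts given above, a hypothetical rendering of a 2CNN3-style baseline could look as follows; the channel widths, kernel sizes, and FC width are our assumptions, not the architecture from [46]:

```python
import torch.nn as nn

class CNNBaseline(nn.Module):
    """2CNN3-style baseline: 4 conv + 4 pool + 2 FC + 1 dropout (stride 1)."""
    def __init__(self, num_classes=15):
        super().__init__()
        chans = [1, 16, 32, 64, 64]            # assumed channel progression
        layers = []
        for cin, cout in zip(chans[:-1], chans[1:]):
            layers += [nn.Conv2d(cin, cout, 3, stride=1, padding=1),
                       nn.ReLU(inplace=True),
                       nn.MaxPool2d(2)]        # 64 -> 32 -> 16 -> 8 -> 4
        self.features = nn.Sequential(*layers)
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 4 * 4, 128),
            nn.ReLU(inplace=True),
            nn.Dropout(0.5),
            nn.Linear(128, num_classes))

    def forward(self, x):                      # x: (N, 1, 64, 64)
        return self.classifier(self.features(x))
```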

Figure 11: The comparison of recognition accuracy rates of the three methods. (a) BPSK. (b) Frank. (c) LFM. (d) NLFM. (e) Costas. (f) P1. (g) P2. (h) P3. (i) P4. (j) T1. (k) T2. (l) T3. (m) T4. (n) LFM-BPSK. (o) 2FSK-BPSK. (p) Overall.

Figure 11 shows that, in overall recognition accuracy, flexible-DARTS is superior to standard DARTS and 2CNN3. At −9 dB SNR, the DARTS with the auxiliary classifier proposed in this paper recognizes the 15 kinds of radar waveforms with an accuracy of 79.2%, about 5% higher than standard DARTS (74.6%) and 2CNN3 (73.5%); standard DARTS in turn improves on 2CNN3 by about 1% at −9 dB SNR. For the Frank, P1, P3, T2, and LFM-BPSK signals, the three methods follow the same trend as the overall accuracy: flexible-DARTS is highest, then standard DARTS, with 2CNN3 lowest. For P4 and T4, standard DARTS performs better at low SNR. For the T1, T3, and LFM signals, the automatically searched architectures beat the manually designed network at low SNR. For the BPSK, P2, T1, T4, Costas, and NLFM signals, the three methods perform similarly. In general, DARTS with the auxiliary classifier achieves better recognition at low signal-to-noise ratio, which further proves the effectiveness of the method.

3.4. The Confusion Matrices of the Radar Signals

Previous research [18] found that even when a network performs well (i.e., its recognition accuracy on the trained dataset is high and reaches 99% for most waveforms), some signals remain easily confused: the waveforms are highly similar (or their converted time-frequency images are), and the differences between the extracted features are not obvious. Confusion caused by signal similarity is the main source of classifier errors. Figure 12 shows the confusion matrix of 2CNN3. Under the training conditions of this article's dataset, the characteristic images of the P1 and P4 signals are very easily confused, and there is also slight confusion between the T1 and T3 signals. Figure 13 shows the corresponding confusion matrix for the classifier based on flexible-DARTS. It can be seen that the anticonfusion ability of the flexible-DARTS classifier is improved even at low SNR. The comparison shows that flexible-DARTS performs excellently in improving the recognition of easily confused waveforms: for the easily confused P1 and P4, the recognition of P1 improves significantly, with accuracy rising from 84% to 98.5%, nearly 15%, and T1 and T3 are no longer confused. However, the accuracy for P4, which rises only from 69% to 71.5%, is still not ideal. Therefore, for radar waveform detection at low SNR, appropriate signal extraction methods are still needed to improve recognition accuracy.

Figure 12: Confusion matrix of the 15 typical radar signal waveforms for 2CNN3 at −3 dB.

Figure 13: Confusion matrix of the 15 typical radar signal waveforms for flexible-DARTS at −3 dB.
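
Confusion matrices like those in Figures 12 and 13 can be tallied directly from predicted labels; a minimal numpy sketch:

```python
import numpy as np

def confusion_matrix(y_true, y_pred, num_classes=15):
    """cm[i, j] counts samples of true class i predicted as class j."""
    cm = np.zeros((num_classes, num_classes), dtype=int)
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1
    return cm

def normalize_rows(cm):
    """Row-normalizing puts per-class recognition accuracy on the diagonal."""
    return cm / cm.sum(axis=1, keepdims=True)
```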

3.5. Comparison with Related Networks

Linh et al. [47] used the single shot multibox detector (SSD) to generate multiple default candidate boxes and thereby select the effective pixel area of the time-frequency image reasonably. Since SSD retains the signal characteristics of the time-frequency image while eliminating invalid pixels, its results improved greatly over concurrent work. The dataset used in [47] included 12 kinds of radar waveforms (BPSK, Frank, P1, P2, P3, P4, T1, T2, T3, T4, LFM, and Costas). We produced the same dataset by simulation from −9 dB to 9 dB in 3 dB steps, generating 800 samples per signal type at each SNR, for 67,200 samples in all, and compared the classifier based on flexible-DARTS with [47]. The simulation results for recognition accuracy are shown in Figure 14.

Figure 14: The comparison of recognition accuracy rates of the two methods. (a) BPSK. (b) Frank. (c) P1. (d) P2. (e) P3. (f) P4. (g) T1. (h) T2. (i) T3. (j) T4. (k) LFM. (l) Costas. (m) Overall.

Figure 14 shows that the classifier based on flexible-DARTS (referred to as flexible-DARTS) outperforms the SSD method of [47] (referred to as SSD). The accuracy of flexible-DARTS is higher than that of SSD at every SNR, especially at −9 dB, where its overall accuracy, above 80%, is about 6% higher than SSD's. The BPSK, Frank, P3, T1, and T2 signals follow the same tendency as the overall accuracy. For P1, P2, and T3, although flexible-DARTS is slightly less accurate at −9 dB than SSD, its performance significantly exceeds SSD's as the SNR increases. For P4 and T4, SSD performs better. For the LFM and Costas signals, the two classifiers are equivalent. As Figure 14(m) shows, the overall recognition accuracy of flexible-DARTS is better than that of SSD.

In addition to the above literature, our research was also compared with Baidu EasyDL. EasyDL is a customized AI training and service platform developed by Baidu Brain that supports a one-stop AI development workflow covering data management, data annotation, model training, and model deployment. Images, text, audio, video, and other data can be published to APIs, SDKs, localized deployments, and integrated hardware-software products after EasyDL processing, learning, and deployment. The overall recognition result of EasyDL on the same dataset is shown in Figure 15. In the classification model evaluation report in Figure 15, "top 1-5" refers to how a sample is identified: the model gives multiple results ranked by confidence, and under normal circumstances the result with the highest confidence, that is, the top-1 result, is used. As the figure shows, the comprehensive accuracy of the EasyDL classification is 95%, lower than the 95.89% of flexible-DARTS. Flexible-DARTS also performs better on the accuracy of individual signals.
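
The top-1 figure counts a prediction as correct only when the highest-confidence label matches the true class; top-k relaxes this to any of the k highest. A small illustration of the metric (our own helper, not EasyDL's API):

```python
import numpy as np

def top_k_accuracy(probs, labels, k=1):
    """probs: (N, C) confidence scores; labels: (N,) true class indices."""
    topk = np.argsort(probs, axis=1)[:, -k:]   # k highest-confidence classes
    hits = [label in row for row, label in zip(topk, labels)]
    return float(np.mean(hits))
```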

Figure 15: The EasyDL recognition results. (a) The comprehensive accuracy rate of EasyDL. (b) Main signal accuracy rates.

As a general-purpose platform, EasyDL can easily be transferred to solve most of the problems we met in our work. But the results show that transferring may not be the best choice when the requirements become more precise. As shown in Figure 15, the network obtained through automatic architecture search has more powerful feature extraction capability; a model designed specifically for the target dataset shows an outstanding advantage even when the search space becomes complicated.

4. Conclusion

To remove the dependence on transfer learning, this paper introduces neural architecture search into the recognition of radar waveforms, using differentiable architecture search (DARTS) to design the recognition model. In view of the unstable search results of DARTS and its performance degradation during validation, we studied the difference in model architecture between search and validation and proposed an optimized algorithm with an auxiliary classifier, called flexible-DARTS. After comparing multi-level auxiliary classifiers against the three indicators of model memory requirements, search speed, and evaluation accuracy, we decided to add a single auxiliary classifier where the feature map is 8 × 8. Compared with standard DARTS, flexible-DARTS searches model architectures with excellent stability and cuts the search time in half. Furthermore, flexible-DARTS finds models with more powerful capabilities, as shown by the accuracy: the classifier for the 15 radar waveforms searched by flexible-DARTS is about 5% more accurate than that of standard DARTS at −9 dB SNR. In addition, we compared the network with other studies, including 2CNN3 [46], the SSD-based classifier [47], and Baidu EasyDL; in comprehensive recognition accuracy over all 15 radar signals, the method in this paper is better than all three. The clear increase in accuracy proves that automatic architecture search can obtain a better-performing classifier, further showing that transfer learning is not necessarily the best choice and that networks matched to the dataset through neural architecture search will have stronger practicality in the future. However, the performance improvement of the flexible-DARTS-based model rests only on the improvement of the DARTS algorithm itself: because it cannot locate exactly where features are extracted, it cannot be integrated with other classification algorithms to improve its performance further, which places a certain restriction on its future use.

Data Availability

Previously reported (python program) data were used to support this study and are available at arXiv:1806.09055.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

REFERENCES

1. Lunden J., Koivunen V. Automatic radar waveform recognition. IEEE Journal of Selected Topics in Signal Processing. 2007; 1(1): 124-136.
2. Wiley R. G. ELINT: The Interception and Analysis of Radar Signals. 2006: Norwood, MA, USA; Artech House.
3. Pace P. E. Detecting and Classifying Low Probability of Intercept Radar. 2009: Norwood, MA, USA; Artech House.
4. Sagar R., Jhaveri R., Borrego C. Applications in security and evasions in machine learning: a survey. Electronics. 2020; 9(1): 97, 10.3390/electronics9010097.
5. Krizhevsky A., Sutskever I., Hinton G. E. ImageNet classification with deep convolutional neural networks. Advances in Neural Information Processing Systems. 2012; 25: 1097-1105.
6. Zhou D. Q., Wang X., Tian Y. R. A novel radar signal recognition method based on a deep restricted Boltzmann machine. Engineering Review. 2017; 37(2): 165-171.
7. Zhang M., Diao M., Gao L., Liu L. Neural networks for radar waveform recognition. Symmetry. 2017; 9(5): 75, 10.3390/sym9050075.
8. Dhanamjayulu C., Nizhal U. N., Maddikunta P. K. R., Gadekallu T. R., Iwendi C., Wei C., Xin Q. Identification of malnutrition and prediction of BMI from facial images using real-time image processing and machine learning. IET Image Processing. 2021, 10.1049/ipr2.12222.
9. Gadekallu T. R., Alazab M., Kaluri R., Maddikunta P. K. R., Bhattacharya S., Lakshmanna K., Parimala M. Hand gesture classification using a novel CNN-crow search algorithm. Complex & Intelligent Systems. 2021; 7: 1-14, 10.1007/s40747-021-00324-x.
10. Zhang P., Tang J., Zhong H., Ning M., Liu D., Wu K. Self-trained target detection of radar and sonar images using automatic deep learning. IEEE Transactions on Geoscience and Remote Sensing. 2021: 1-14, 10.1109/TGRS.2021.3096011.
11. Chen Y., Jiang H., Li C., Jia X., Ghamisi P. Deep feature extraction and classification of hyperspectral images based on convolutional neural networks. IEEE Transactions on Geoscience and Remote Sensing. 2016; 54(10): 6232-6251, 10.1109/tgrs.2016.2584107.
12. Wang C., Wang J., Zhang X. D. Automatic radar waveform recognition based on time-frequency analysis and convolutional neural network. Proceedings of the 2017 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). March 2017, New Orleans, LA, USA: 2437-2441, 10.1109/icassp.2017.7952594.
13. Zhang M., Diao M., Guo L. Convolutional neural networks for automatic cognitive radio waveform recognition. IEEE Access. 2017; 5: 11074-11082, 10.1109/access.2017.2716191.
14. Kong S.-H., Kim M., Hoang L. M., Kim E. Automatic LPI radar waveform recognition using CNN. IEEE Access. 2018; 6: 4207-4219, 10.1109/access.2017.2788942.
15. Qu Z., Mao X., Deng Z. Radar signal intra-pulse modulation recognition based on convolutional neural network. IEEE Access. 2018; 6: 43874-43884, 10.1109/access.2018.2864347.
16. Ni X., Wang H., Zhu Y., Meng F. Multi-resolution fusion convolutional neural networks for intrapulse modulation LPI radar waveforms recognition. IEICE Transactions on Communications. 2020; E103.B(12): 1470-1476, 10.1587/transcom.2019ebp3262.
17. Liu Z. R., Shi Y. K., Zeng Y. Radar emitter signal detection with convolutional neural network. Proceedings of the 2019 IEEE 11th International Conference on Advanced Infocomm Technology (ICAIT). October 2019, Jinan, China: 48-51, 10.1109/icait.2019.8935926.
18. Ma Z., Huang Z., Lin A., Huang G. LPI radar waveform recognition based on features from multiple images. Sensors. 2020; 20(2): 526-548, 10.3390/s20020526.
19. Ni X., Wang H., Meng F., Hu J., Tong C. LPI radar waveform recognition based on multi-resolution deep feature fusion. IEEE Access. 2021; 9: 26138-26146, 10.1109/access.2021.3058305.
20. Wan J., Yu X., Guo Q. LPI radar waveform recognition based on CNN and TPOT. Symmetry. 2019; 11(5): 725, 10.3390/sym11050725.
21. Yang Q., Zhang Y., Dai W. Y. Transfer Learning. 2020: Beijing, China; China Machine Press.
22. Deng J., Dong W., Socher R., Li L.-J., Li K., Fei-Fei L. ImageNet: a large-scale hierarchical image database. Proceedings of the 2009 IEEE Conference on Computer Vision and Pattern Recognition. June 2009, Miami, FL, USA: 248-255, 10.1109/cvpr.2009.5206848.
23. Ghadimi G., Baiderkhani R., Nayebi M. M. Using LeNet-5 and AlexNet architectures in deep learning approach to detect and classify LPI radar signals. Radar. 2019; 7(1): 117-128.
24. Li D., Yang R., Li X., Zhu S. Radar signal modulation recognition based on deep joint learning. IEEE Access. 2020; 8: 48515-48528, 10.1109/access.2020.2978875.
25. Gao L., Zhang X., Gao J., You S. Fusion image based on radar signal feature extraction and modulation recognition. IEEE Access. 2019; 7: 13135-13148, 10.1109/access.2019.2892526.
26. Guo L. M., Chen X. Low probability of intercept radar signal recognition based on the improved AlexNet model. Proceedings of the 2nd International Conference on Digital Signal Processing. February 2018, Tokyo, Japan: 119-124.
27. Xiao Y., Liu W., Gao L. Radar signal recognition based on transfer learning and feature fusion. Mobile Networks and Applications. 2020; 25(1), 10.1007/s11036-019-01360-1.
28. Guo Q., Yu X., Ruan G. LPI radar waveform recognition based on deep convolutional neural network transfer learning. Symmetry. 2019; 11(4): 540, 10.3390/sym11040540.
29. Qu Z., Wang W., Hou C., Hou C. Radar signal intra-pulse modulation recognition based on convolutional denoising autoencoder and deep convolutional neural network. IEEE Access. 2019; 7: 112339-112347, 10.1109/access.2019.2935247.
30. Ghadimi G., Norouzi Y., Bayderkhani R., Nayebi M. M., Karbasi S. M. Deep learning-based approach for low probability of intercept radar signal detection and classification. Journal of Communications Technology and Electronics. 2020; 65(10): 1179-1191, 10.1134/s1064226920100034.
31. Si W., Wan C., Zhang C. Towards an accurate radar waveform recognition algorithm based on dense CNN. Multimedia Tools and Applications. 2021; 80(2): 1779-1792, 10.1007/s11042-020-09490-5.
32. Liu L., Wang S., Zhao Z. Radar waveform recognition based on time-frequency analysis and artificial bee colony-support vector machine. Electronics. 2018; 7(5): 59-77, 10.3390/electronics7050059.
33. Qu Z., Hou C., Hou C., Wang W. Radar signal intra-pulse modulation recognition based on convolutional neural network and deep Q-learning network. IEEE Access. 2020; 8: 49125-49136, 10.1109/access.2020.2980363.
34. Wei S., Qu Q., Su H. Intra-pulse modulation radar signal recognition based on Squeeze-and-Excitation networks. Signal, Image and Video Processing. 2020; 14: 1-9, 10.1007/s11760-020-01652-0.
35. Panigrahi R., Borah S., Bhoi A. K., Ijaz M. F., Pramanik M., Jhaveri R. H., Chowdhary C. L. Performance assessment of supervised classifiers for designing intrusion detection systems: a comprehensive review and recommendations for future research. Mathematics. 2021; 9(6): 690-721, 10.3390/math9060690.
36. Wang J., Zhai X. Dive into AutoML and AutoDL: Building Automated Platforms for Machine Learning and Deep Learning. 2019: Beijing, China; China Machine Press.
37. Elsken T., Metzen J. H., Hutter F. Neural architecture search: a survey. Journal of Machine Learning Research. 2019; 20(55): 1-21.
38. Baker B., Gupta O., Naik N., Raskar R. Designing neural network architectures using reinforcement learning. 2016, https://arxiv.org/abs/1611.02167.
39. Zoph B., Le Q. V. Neural architecture search with reinforcement learning. 2016, https://arxiv.org/abs/1611.01578.
40. Liu H., Simonyan K., Yang Y. DARTS: differentiable architecture search. 2018, https://arxiv.org/abs/1806.09055.
41. Franceschi L., Frasconi P., Salzo S., Grazzi R., Pontil M. Bilevel programming for hyperparameter optimization and meta-learning. Proceedings of the International Conference on Machine Learning. 2018.
42. Finn C., Abbeel P., Levine S. Model-agnostic meta-learning for fast adaptation of deep networks. Proceedings of the International Conference on Machine Learning. 2017, Sydney, NSW, Australia.
43. Shin R., Packer C., Song D. Differentiable neural network architecture search. Proceedings of the Workshop Track, ICLR. April 2018, Vancouver, BC, Canada.
44. Xie S., Zheng H., Liu C., Lin L. SNAS: stochastic neural architecture search. 2018, https://arxiv.org/abs/1812.09926.
45. Szegedy C., Liu W., Jia Y. Going deeper with convolutions. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. June 2015: 1-9, 10.1109/cvpr.2015.7298594.
46. Huang Z., Ma Z., Huang G. Radar waveform recognition based on multiple autocorrelation images. IEEE Access. 2019; 7: 98653-98668, 10.1109/access.2019.2930250.
47. Linh M. H., Kim M., Kong S. Automatic recognition of general LPI radar waveform using SSD and supplementary classifier. IEEE Transactions on Signal Processing. 2019; 67(13): 3516-3530.

By Zhiyuan Ma; Wenting Yu; Peng Zhang; Zhi Huang; Anni Lin and Yan Xia
