Abstract
Spectrum sharing in next-generation (NextG) wireless communication systems can alleviate the spectrum scarcity problem, meet the demand for high data rates, and provide better quality of service. Characterizing surrounding wireless signals is essential to support such sharing of the same spectrum band. Machine learning can identify the different signals occupying a channel, enabling real-time demodulation of signals from unknown classes. Using existing machine learning techniques such as convolutional neural networks (CNNs), region-based convolutional neural networks (R-CNNs), fast region-based convolutional neural networks (Fast R-CNNs), and faster region-based convolutional neural networks (Faster R-CNNs), one or more signals can be classified and extracted from a channel simultaneously. This project evaluates image pre-processing methods that format signal data for machine learning: learning directly on time-domain signals, spectrograms, which map the frequency content of the signal over time, and scalograms, which map the continuous wavelet transform over time. The models are built in TensorFlow and compiled to run on a CPU, a GPU, and a Xilinx FPGA using Vitis-AI, and are evaluated for accuracy and inference time. The evaluation shows that spectrogram frames are not an ideal image pre-processing technique for this application, although they may offer benefits for higher-frequency signals. The continuous wavelet transform used to produce scalogram frames proves to be an excellent pre-processing technique. Region-based convolutional neural networks reduce classification time, but at a significant cost in classification accuracy. FPGA acceleration is a better method for reducing classification time: it provides a tenfold increase in classification speed while retaining exactly the same accuracy as the same neural network running on a mid-range GPU.