Journal of information and communication convergence engineering 2018; 16(1): 1-5
Published online March 31, 2018
https://doi.org/10.6109/jicce.2018.16.1.1
© Korea Institute of Information and Communication Engineering
In radio-over-fiber (RoF) systems, nonlinear compensation is very important for meeting the error vector magnitude (EVM) requirements of mobile network standards. In this study, a nonlinear compensation technique based on an artificial neural network (ANN) is proposed for RoF systems. The technique is based on a backpropagation neural network (BPNN) with one hidden layer and three neuron units. The BPNN obtains the inverse response of the system to compensate for its nonlinearities. The EVM of the signal is measured while changing the number of neurons and hidden layers in a RoF system modeled by measured data. Based on our simulation results, it is concluded that one hidden layer with three neuron units is adequate for the RoF system. Our results show that the EVM was improved from 4.027% to 2.605% by using the proposed ANN compensator.
Keywords: Artificial neural network, Nonlinear compensation, Radio over fiber
Artificial neural networks (ANNs) are now being used in telecommunications. Proposals have been made to employ neural networks (NNs) as neural de-multiplexers [1] or equalizers in optical communications [2]. In this study, we use an ANN for nonlinear compensation in a radio-over-fiber (RoF) system.
RoF technology is considered a strong candidate for a future front-haul link in mobile networks [3, 4]. The front-haul link is the component of a mobile network between the central digital units and the remote units. Although the common public radio interface (CPRI) or the open base station architecture initiative (OBSAI) is currently used for the front-haul link, these technologies cannot support the capacity of future mobile networks [5]. For example, an approximately 120 Gb/s CPRI interface is required to support a remote unit composed of three sectors with two 20-MHz radio channel bandwidths and an 8×8 multiple-input multiple-output (MIMO) scheme. Moreover, to support the massive MIMO scheme proposed in 5G mobile technologies, neither CPRI nor OBSAI is a reasonable solution because several Tb/s links would be required for a remote unit.
RoF technology has thus been proposed to efficiently support increased network capacity [6]. In RoF systems, several analog radio signals are multiplexed using frequency division multiplexing (FDM) and transmitted over an analog optical link. Therefore, the signal quality of RoF systems can be easily degraded by nonlinearity, which usually limits their performance and makes it difficult to meet the error vector magnitude (EVM) requirement of mobile communication standards. Consequently, nonlinear compensation is important in RoF systems [7, 8]. In this study, we investigate the use of ANN technology to compensate for the nonlinearity in RoF systems.
An ANN is a mathematical model inspired by biological neurons. The simplest NN is called a perceptron and is illustrated in Fig. 1. The dendrites receive stimuli (input), the cell body processes the stimuli (activation function), and the axon finally transmits the information (output). The perceptron has only two layers: input and output. The input layer receives the information; the output layer modifies it according to the input weights, the bias, and the activation function. The mathematical model of a perceptron is shown in Eq. (1):

$y = f\left( \sum_{i=1}^{n} w_i x_i + b \right)$  (1)

where $x_i$ are the inputs, $w_i$ are the input weights, $b$ is the bias, $f(\cdot)$ is the activation function, and $y$ is the perceptron output.
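As a concrete illustration of Eq. (1), the following sketch (ours, not part of the original paper) evaluates a perceptron with a sigmoid activation in Python/NumPy; the input, weight, and bias values are hypothetical.

```python
import numpy as np

def perceptron(x, w, b):
    """Perceptron of Eq. (1): weighted sum of the inputs plus bias,
    passed through a sigmoid activation function."""
    z = np.dot(w, x) + b             # weighted sum of the inputs plus bias
    return 1.0 / (1.0 + np.exp(-z))  # sigmoid activation

# Example with hypothetical values: two inputs, arbitrary weights and bias.
x = np.array([0.5, -1.2])
w = np.array([0.8, 0.3])
b = 0.1
print(perceptron(x, w, b))
```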
The weights of a multilayer perceptron (MLP) are generally determined using the backpropagation algorithm. Backpropagation is similar to a least mean squares algorithm and requires finding the minimum of an error function to determine the NN weights (the learning process). The algorithm uses gradient descent to search the weight space for the minimum of the error function. The error is obtained from the difference between the NN output and the desired output, as shown in Eq. (2):

$E = \frac{1}{2} \sum_{k} \left( d_k - y_k \right)^2$  (2)

where $y_k$ is the NN output and $d_k$ is the desired output. The weight update is carried out over successive epochs according to Eq. (3):

$w_{ij}(t+1) = w_{ij}(t) - \eta \, \frac{\partial E}{\partial w_{ij}}$  (3)

where $\eta$ is the learning rate and $t$ is the epoch index.
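For a concrete picture of this learning rule, the sketch below (ours, not the authors' code) implements the squared error of Eq. (2) and the gradient-descent update of Eq. (3) for a toy linear neuron; the learning rate and data values are hypothetical.

```python
import numpy as np

def error(y, d):
    """Eq. (2): squared error between the NN output y and the desired output d."""
    return 0.5 * np.sum((np.asarray(d) - np.asarray(y)) ** 2)

def update_weights(w, grad_E, eta=0.01):
    """Eq. (3): move every weight against its error gradient
    (eta is a hypothetical learning rate)."""
    return w - eta * grad_E

# Toy example: a single linear neuron y = w.x trained on one sample.
x, d = np.array([1.0, 2.0]), 1.5
w = np.array([0.2, -0.1])
for epoch in range(100):
    y = w @ x                 # forward pass
    grad_E = (y - d) * x      # dE/dw for the squared error of Eq. (2)
    w = update_weights(w, grad_E)
print(w @ x)                  # approaches the desired output 1.5
```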
The activation function can be chosen independently of the neuron location (i.e., the layer). However, because the backpropagation algorithm uses gradient descent, the activation function plays the important role of guaranteeing the continuity and differentiability of the error function. Hence, it is necessary to use mathematically convenient activation functions, the most popular of which is the sigmoid function [10, 11].
ANN weights represent the connection values between the input and hidden, hidden and hidden, and hidden and output layers. As previously mentioned, the weight update is carried out by the backpropagation algorithm, which works in two steps. First, it propagates the information forward to compute the layer outputs, as shown in Eq. (4). Second, it performs gradient descent using the output information, as shown in Eq. (5) [12].
The value of each weight changes from layer to layer and is represented by $w_{ij}^{(l)}$, the weight connecting unit $i$ of layer $l-1$ to unit $j$ of layer $l$.
Applying the chain rule to Eq. (5), we obtain Eq. (6), in which the derivative of the error with respect to a weight is split into two terms. From the second term in Eq. (6), the partial derivative of the layer's output with respect to the weight is a known value: the input of the layer, i.e., the output of the previous layer. The first term, denoted δ, remains to be calculated for each layer.
To calculate δ for all layers, it is necessary to first find δ in the last layer, where the error can be evaluated directly at the network output, as shown in Eq. (8). This operation must be performed for all the units of the layer; thus, Eq. (8) takes the more general form shown in Eq. (9).
By calculating the partial derivatives, we obtain the relation between the δ of the last layer and that of the previous layer, as shown in Eq. (10).
This equation explicitly demonstrates that information propagates in the backward direction to calculate the previous δ, hence the name ‘backpropagation.’
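The derivation of Eqs. (4)-(10) can be condensed into a short routine. The sketch below is our own illustration (not the authors' implementation): it performs a forward pass through a one-hidden-layer network with sigmoid activations, computes δ at the output, propagates it backward to obtain the hidden-layer δ, and applies the gradient-descent weight update. All variable names, sizes, and the learning rate are assumptions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def backprop(x, d, W1, b1, W2, b2, eta=0.01):
    """One backpropagation step for a one-hidden-layer network (sketch).
    x: input vector, d: desired output vector, eta: hypothetical learning rate."""
    # Forward pass: compute and store the layer outputs.
    h = sigmoid(W1 @ x + b1)   # hidden-layer output
    y = sigmoid(W2 @ h + b2)   # network output

    # Backward pass: delta of the last layer comes directly from the output error.
    delta_out = (y - d) * y * (1.0 - y)            # sigmoid derivative is y(1 - y)
    # Delta of the previous (hidden) layer: propagate delta_out backward
    # through the weights, as described around Eq. (10).
    delta_hid = (W2.T @ delta_out) * h * (1.0 - h)

    # Gradient of the error w.r.t. each weight = delta times the layer input,
    # followed by the gradient-descent update of Eq. (3).
    W2 -= eta * np.outer(delta_out, h)
    b2 -= eta * delta_out
    W1 -= eta * np.outer(delta_hid, x)
    b1 -= eta * delta_hid
    return W1, b1, W2, b2

# Minimal usage with random small weights (2 inputs, 3 hidden units, 1 output).
rng = np.random.default_rng(0)
W1, b1 = 0.1 * rng.standard_normal((3, 2)), np.zeros(3)
W2, b2 = 0.1 * rng.standard_normal((1, 3)), np.zeros(1)
W1, b1, W2, b2 = backprop(np.array([0.2, -0.4]), np.array([0.7]), W1, b1, W2, b2)
```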
The nonlinear compensator is a feed-forward NN, as shown in Fig. 2, which uses a backpropagation algorithm to determine the neural weights. In this unidirectional NN, the activation function can be any differentiable function, such as a log-sigmoid, linear, or tan-sigmoid transfer function, and it can be chosen independently of the neuron location. In this work, we created a neural compensator with a tan-sigmoid transfer function in the hidden layer and a linear transfer function in the output layer, so that the output of the compensator is

$y = g\!\left( \mathbf{w}_2^{T} f\!\left( \mathbf{w}_1 x + \mathbf{b}_1 \right) + b_2 \right)$

where $x$ is the input, $\mathbf{w}_1$ are the input weights, $\mathbf{w}_2$ are the hidden-layer weights, $\mathbf{b}_1$ and $b_2$ are the biases, $f(\cdot)$ is the tan-sigmoid transfer function, and $g(\cdot)$ is the linear transfer function.
Backpropagation training can use various optimization algorithms, such as Levenberg–Marquardt optimization, quasi-Newton backpropagation, or gradient descent. In this work, we use Levenberg–Marquardt optimization to update the weight and bias states. This algorithm reaches the minimum error faster than the others but requires more memory [13].
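The compensator described above (a 1-input, 3-hidden-unit, 1-output network with a tan-sigmoid hidden layer and a linear output, trained by Levenberg–Marquardt) can be sketched as follows. This is our own illustration, not the authors' code: the weights are packed into a parameter vector and fitted with SciPy's Levenberg–Marquardt least-squares solver; the training data and the stand-in inverse nonlinearity are hypothetical.

```python
import numpy as np
from scipy.optimize import least_squares

def compensator(params, x):
    """1-input, 3-hidden-unit (tanh), 1-output (linear) network."""
    w1, b1 = params[0:3], params[3:6]        # input weights and hidden biases
    w2, b2 = params[6:9], params[9]          # hidden-to-output weights and output bias
    hidden = np.tanh(np.outer(x, w1) + b1)   # tan-sigmoid hidden layer
    return hidden @ w2 + b2                  # linear output layer

def residuals(params, x, target):
    return compensator(params, x) - target

# Hypothetical training data: 'received' stands for the distorted photodiode output,
# 'target' for the transmitted signal; an arcsine is used as a stand-in inverse nonlinearity.
received = np.linspace(-1, 1, 500)
target = np.arcsin(0.99 * received)

params0 = 0.1 * np.random.default_rng(0).standard_normal(10)  # initial weights and biases
fit = least_squares(residuals, params0, args=(received, target), method="lm")
print("final mean squared error:", np.mean(fit.fun ** 2))
```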
Fig. 3 illustrates the simulation setup for an ANN compensator in a RoF system. An orthogonal FDM (OFDM) signal is assumed to be the radio signal. The ANN training process is performed offline. Each time the system’s physical structure or transmission configuration varies, the ANN compensator should be trained again and new weights must be determined. During training, the target is obtained from the transmitter block and the input is obtained from the receiver photodiode.
We assumed that the radio signal was an OFDM signal with 64 carriers, 16-ary quadrature amplitude modulation (16-QAM), and a central frequency of 2.2 GHz. To emulate the laser nonlinearity, we measured the nonlinear response of the laser and inserted this nonlinear function as the distortion block, as shown in Fig. 3.
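To give a reproducible flavor of this setup, the sketch below (ours; the measured laser response used in the paper is not available, so a simple arctangent compression serves as a stand-in distortion block) generates a baseband 16-QAM OFDM signal with 64 subcarriers and passes it through a memoryless nonlinearity.

```python
import numpy as np

rng = np.random.default_rng(0)

# 16-QAM symbols on 64 OFDM subcarriers for a number of OFDM symbols.
n_carriers, n_symbols = 64, 100
levels = np.array([-3, -1, 1, 3])
qam = (rng.choice(levels, (n_symbols, n_carriers))
       + 1j * rng.choice(levels, (n_symbols, n_carriers))) / np.sqrt(10)

# OFDM modulation: IFFT across the subcarriers (cyclic prefix omitted for brevity).
ofdm = np.fft.ifft(qam, axis=1)

# Stand-in memoryless distortion block (the paper uses the measured laser response).
def distortion(x, sat=0.05):
    return sat * np.arctan(np.abs(x) / sat) * np.exp(1j * np.angle(x))

received = distortion(ofdm)
print("peak magnitude before/after distortion:",
      np.abs(ofdm).max(), np.abs(received).max())

# OFDM demodulation: FFT back to the subcarrier domain.
rx_qam = np.fft.fft(received, axis=1)
```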
We investigated the optimal number of neurons in the hidden layer by measuring the radio-signal EVM for networks having between one and ten hidden units, as shown in Table 1. From these results, we concluded that three neuron units were sufficient for this system, because additional units did not improve the EVM while requiring more processing power. We also investigated the effect of the number of hidden layers on the ANN compensator by changing it from one to three; however, no considerable change in the EVM was observed. Thus, we concluded that one hidden layer was adequate for this ANN compensator.
Table 1. Mean squared error and EVM versus the number of neuron units in the hidden layer

Number of neuron units | Mean squared error | EVM (%)
---|---|---
1 | 0.024997 | 3.648
2 | 0.024877 | 2.796
3 | 0.025153 | 2.605
4 | 0.025153 | 2.758
5 | 0.025132 | 2.728
10 | 0.024816 | 2.669
Therefore, we used an ANN compensator with one hidden layer comprising three neuron units for the RoF system. In the training process, the ANN searches for the minimum mean squared error; our ANN converged to a mean squared error of 0.025153 after just eight epochs. The training process required approximately 70 seconds.
To investigate the performance of our ANN compensator, we measured the EVM of the received signal with and without it. The results are shown in Table 2. The EVM of the signal improved from 4.027% to 2.605%, as illustrated in Fig. 4. In Fig. 4, the square points are the ideal QAM points and the circular points are the received points. The signal constellation without the ANN compensator in Fig. 4(a) shows a relatively large EVM, whereas the constellation with the ANN compensator in Fig. 4(b) shows a relatively small EVM; the spread of the received points around each ideal constellation point in Fig. 4(a) is visibly reduced in Fig. 4(b).
Table 2. EVM of the received signal with and without the ANN compensator

Case | EVM (%)
---|---
System with ANN compensator | 2.605
System without ANN compensator | 4.027
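The EVM values in Tables 1 and 2 compare the received constellation points with the ideal 16-QAM points. A minimal sketch of such an EVM calculation (ours, assuming an RMS definition normalized to the average power of the ideal constellation) is given below; the test data are hypothetical.

```python
import numpy as np

def evm_percent(received, ideal):
    """RMS error vector magnitude in percent, normalized to the average
    power of the ideal constellation points."""
    error_power = np.mean(np.abs(received - ideal) ** 2)
    ref_power = np.mean(np.abs(ideal) ** 2)
    return 100.0 * np.sqrt(error_power / ref_power)

# Hypothetical example: ideal 16-QAM points plus a small Gaussian error.
rng = np.random.default_rng(1)
levels = np.array([-3, -1, 1, 3]) / np.sqrt(10)
ideal = rng.choice(levels, 10000) + 1j * rng.choice(levels, 10000)
received = ideal + 0.02 * (rng.standard_normal(10000) + 1j * rng.standard_normal(10000))
print(f"EVM = {evm_percent(received, ideal):.3f}%")
```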
In this work, we proposed an ANN nonlinear compensator for RoF systems. The ANN compensator is a feed-forward supervised NN with three neuron units in one hidden layer whose weights are determined by a backpropagation algorithm. Our results showed that the EVM improved by more than 35% (from 4.027% to 2.605%) by using the proposed ANN compensator. This work is only an initial step toward using an ANN compensator in a RoF system. Nonetheless, we believe that ANN compensators can be widely used for nonlinear compensation in optical fiber communications.
Andres C. Najarro received a B.Sc. degree from the Peruvian University of Applied Sciences, Lima, Peru, in 2010 and an M.S. degree from Kyungsung University, Busan, Korea, in 2016. He is currently a Ph.D. candidate at Gwangju Institute of Science and Technology (GIST). He was awarded the Graña y Montero Peruvian Engineering Research Award, 4th edition. He was employed as a project developer engineer for two years. His research interests include radio over fiber, millimeter wave communication, terahertz wave communications, and free-space optical communication.
Sung-Man Kim received B.S., M.S., and Ph.D. degrees in electrical engineering from Korea Advanced Institute of Science and Technology (KAIST), Daejeon, Korea, in 1999, 2001, and 2006, respectively. His main interests during the M.S. and Ph.D. courses included performance monitoring in optical fiber communication systems. From 2006 to 2009, he was a senior engineer in the network R&D center, Samsung Electronics, Suwon, Korea, where he engaged in the research and development of Mobile WiMAX. Since 2009, he has been a faculty member in the department of electronic engineering, Kyungsung University, Busan, Korea. His current research interests include optical fiber communications, mobile communications, visible light communications, and optical power transmission.