Journal of information and communication convergence engineering 2018; 16(1): 1-5

Published online March 31, 2018

https://doi.org/10.6109/jicce.2018.16.1.1

© Korea Institute of Information and Communication Engineering

Nonlinear Compensation Using Artificial Neural Network in Radio-over-Fiber System

Andres C. Najarro, Sung-Man Kim

Gwangju Institute of Science and Technology, Kyungsung University

Received: April 29, 2017; Accepted: January 2, 2018

Abstract

In radio-over-fiber (RoF) systems, nonlinear compensation is essential to meet the error vector magnitude (EVM) requirements of mobile network standards. In this study, a nonlinear compensation technique based on an artificial neural network (ANN) is proposed for RoF systems. The technique uses a backpropagation neural network (BPNN) with one hidden layer and three neuron units. The BPNN learns the inverse response of the system to compensate for its nonlinearities. The EVM of the signal is measured while varying the number of neurons and hidden layers in a RoF system modeled from measured data. Based on our simulation results, we conclude that one hidden layer with three neuron units is adequate for the RoF system. Our results show that the EVM improved from 4.027% to 2.605% with the proposed ANN compensator.

Keywords: Artificial neural network, Nonlinear compensation, Radio over fiber

I. INTRODUCTION

Artificial neural networks (ANNs) are now being used in telecommunications; proposals have been made to employ neural networks (NNs) as neural de-multiplexers [1] or as equalizers in optical communications [2]. In this study, we use an ANN for nonlinear compensation in a radio-over-fiber (RoF) system.

RoF technology is considered a strong candidate for a future fronthaul link in mobile networks [3, 4]. The fronthaul link is the segment of a mobile network between the central digital units and the remote units. Although the common public radio interface (CPRI) or the open base station architecture initiative (OBSAI) is currently used for the fronthaul link, these technologies cannot support the capacity of future mobile networks [5]. For example, a CPRI interface of approximately 120 Gb/s is required to support a remote unit composed of three sectors with two 20-MHz radio channel bandwidths and an 8×8 multiple-input multiple-output (MIMO) scheme. Moreover, to support the massive MIMO schemes proposed for 5G mobile technologies, neither CPRI nor OBSAI is a reasonable solution, because links of several Tb/s would be required per remote unit.

RoF technology has thus been proposed to efficiently support the increased network capacity [6]. In RoF systems, several analog radio signals are multiplexed using frequency division multiplexing (FDM) and transmitted over an analog optical link. Consequently, the signal quality of RoF systems is easily degraded by nonlinearity, which usually limits their performance and makes it difficult to meet the error vector magnitude (EVM) requirements of mobile communication standards. Nonlinear compensation is therefore important in RoF systems [7, 8]. In this study, we investigate the use of an ANN to compensate for the nonlinearity of RoF systems.

II. ARTIFICIAL NEURAL NETWORK COMPENSATOR

An ANN is a mathematical model of a biological neuron. The simplest NN, called a perceptron, is illustrated in Fig. 1. The dendrites receive stimuli (input), the cell body processes the stimuli (activation function), and the axon transmits the information (output). The perceptron has only two layers: input and output. The input layer receives information; the output layer transforms it according to the input weights, bias, and activation function. The mathematical model of a perceptron is given in Eq. (1).

Fig. 1. The neural network: (a) biological neuron and (b) its computational model.

$$y(x) = f\!\left(\sum_{i=1}^{n} x_i w_i + b\right), \qquad (1)$$

where $x_i$, $w_i$, and $b$ denote the NN inputs, weights, and bias, respectively. This single-layer perceptron cannot be used to solve complex problems. A multilayer perceptron (MLP), in contrast, has more than two layers and can solve complex problems. The layers between the input and output layers are called hidden layers. With just one hidden layer, the MLP can be a powerful computational tool [9].
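As a concrete illustration of Eq. (1), a perceptron's forward pass can be sketched in a few lines (a hedged example; the weight values and the tanh activation below are arbitrary choices, not from the paper):

```python
import numpy as np

def perceptron(x, w, b, f=np.tanh):
    """Single perceptron: y = f(sum_i x_i * w_i + b), as in Eq. (1)."""
    return f(np.dot(x, w) + b)

# Example: two inputs, arbitrary weights and bias
y = perceptron(np.array([0.5, -1.0]), np.array([0.8, 0.3]), 0.1)
```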

The MLP weights are generally trained using the backpropagation algorithm. Backpropagation is similar to a least mean squares algorithm and seeks the minimum of an error function to determine the NN weights (the learning process). The algorithm uses gradient descent to search the weight space for the minimum of the error function. The error is obtained from the difference between the NN output and the desired output, as shown in Eq. (2). The weight update is carried out over successive epochs according to Eq. (3).

$$e = (y - y_d)^2, \qquad (2)$$

$$w(t+1) = w(t) - \eta \, \nabla e(w). \qquad (3)$$

The activation function can be chosen independently of the neuron's location (i.e., its layer). However, because the backpropagation algorithm uses gradient descent, the activation function plays the important role of guaranteeing the continuity and differentiability of the error function. Hence, it is necessary to use mathematically convenient activation functions, the most popular of which is the sigmoid function [10, 11].
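For instance, the sigmoid and its derivative are smooth everywhere, which is exactly the property gradient descent needs (an illustrative sketch, not code from the paper):

```python
import numpy as np

def sigmoid(s):
    """Logistic sigmoid: continuous and differentiable everywhere."""
    return 1.0 / (1.0 + np.exp(-s))

def dsigmoid(s):
    """Derivative expressed through the function itself: f'(s) = f(s)(1 - f(s))."""
    y = sigmoid(s)
    return y * (1.0 - y)
```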

ANN weights represent the connections between the input and hidden layers, between hidden layers, and between the hidden and output layers. As previously mentioned, the weight update is carried out by the backpropagation algorithm, which works in two steps. First, it propagates information forward to compute the layer outputs, as shown in Eq. (4). Second, it performs gradient descent using that output information, as shown in Eq. (5) [12].

$$x_j^{(l)} = f\!\left(s_j^{(l)}\right) = f\!\left(\sum_{i=0}^{d^{(l-1)}} w_{ij}^{(l)} x_i^{(l-1)}\right), \qquad (4)$$

$$\nabla e(w) = \frac{\partial e(w)}{\partial w_{ij}^{(l)}}. \qquad (5)$$

The value of each weight changes from layer to layer and is represented by $w_{ij}^{(l)}$, where $l$ is the layer index with $1 \le l \le L$, and $L$ is the index of the NN output layer; $i$ and $j$ are the input and output indices with $0 \le i \le n^{(l-1)}$ and $1 \le j \le n^{(l)}$, respectively, where $n^{(l)}$ is the number of neuron units in layer $l$.

Applying the chain rule in Eq. (5), we can obtain the following equation:

$$\frac{\partial e(w)}{\partial w_{ij}^{(l)}} = \frac{\partial e(w)}{\partial s_j^{(l)}} \cdot \frac{\partial s_j^{(l)}}{\partial w_{ij}^{(l)}}. \qquad (6)$$

In the second term of Eq. (6), the partial derivative of the layer's weighted sum with respect to the weight is a known value: the layer input $x_i^{(l-1)}$. The only unknown term is the partial derivative of the error with respect to the layer's weighted sum, denoted $\delta$ in Eq. (7).

$$\delta_j^{(l)} = \frac{\partial e(w)}{\partial s_j^{(l)}}. \qquad (7)$$

To calculate δ for all layers, it is necessary to first find δ in the last layer (l = L), because the last δ enables the determination of the previous ones. In the last layer, δ is obtained by simply forwarding the information from the input to the output. The main difficulty is obtaining δ in the previous layers; however, applying the chain rule yields a simpler relation, as shown in Eq. (8).

$$\delta_i^{(l-1)} = \frac{\partial e(w)}{\partial s_i^{(l-1)}} = \frac{\partial e(w)}{\partial s_j^{(l)}} \cdot \frac{\partial s_j^{(l)}}{\partial x_i^{(l-1)}} \cdot \frac{\partial x_i^{(l-1)}}{\partial s_i^{(l-1)}}. \qquad (8)$$

This operation must be performed over all units of the layer. Thus, Eq. (8) takes the form shown in Eq. (9).

$$\delta_i^{(l-1)} = \sum_{j=1}^{n^{(l)}} \frac{\partial e(w)}{\partial s_j^{(l)}} \cdot \frac{\partial s_j^{(l)}}{\partial x_i^{(l-1)}} \cdot \frac{\partial x_i^{(l-1)}}{\partial s_i^{(l-1)}}. \qquad (9)$$

Evaluating the partial derivatives, we obtain the relation between the last and the previous δ, as shown in Eq. (10).

$$\delta_i^{(l-1)} = \sum_{j=1}^{n^{(l)}} \delta_j^{(l)} \, w_{ij}^{(l)} \, f'\!\left(s_i^{(l-1)}\right). \qquad (10)$$

This equation explicitly demonstrates that information propagates in the backward direction to calculate the previous δ, hence the name ‘backpropagation.’
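The forward pass of Eq. (4) and the backward recursion of Eqs. (6)-(10) can be sketched on a tiny 2-3-1 network (the network size, random weights, and squared-error loss below are illustrative assumptions, not the paper's setup):

```python
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(size=(2, 3))         # w_ij^(1): input -> hidden
W2 = rng.normal(size=(3, 1))         # w_ij^(2): hidden -> output
f = np.tanh
df = lambda s: 1.0 - np.tanh(s) ** 2

# Forward pass, Eq. (4): s_j^(l) = sum_i w_ij^(l) x_i^(l-1), x_j^(l) = f(s_j^(l))
x0 = np.array([0.3, -0.7])
s1 = x0 @ W1
x1 = f(s1)
y = x1 @ W2                          # linear output layer

# Backward pass: delta at the output for e = (y - y_d)^2, then Eq. (10)
y_d = np.array([0.5])
delta2 = 2.0 * (y - y_d)             # delta^(L) = de/ds^(L)
delta1 = df(s1) * (W2 @ delta2)      # Eq. (10): delta^(l-1) from delta^(l)
grad_W2 = np.outer(x1, delta2)       # de/dw_ij^(l) = x_i^(l-1) delta_j^(l), Eq. (6)
grad_W1 = np.outer(x0, delta1)
```

A finite-difference check on any single weight confirms the gradients match the chain-rule derivation.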

The nonlinear compensator is a feed-forward NN, as shown in Fig. 2, which uses the backpropagation algorithm to train the neural weights. In this unidirectional NN, the activation function can be any differentiable function, such as a log-sigmoid, linear, or tan-sigmoid transfer function, and can be chosen independently of the neuron's location. In this work, we built a neural compensator with a tan-sigmoid transfer function (f1) in the hidden layer and a linear transfer function (f2) in the output layer. The output of this NN is given by Eq. (11).

Fig. 2. Structure of an ANN.

$$y(x) = f_2\!\left(f_1\!\left(x \cdot IW + b_1\right) \cdot HW + b_2\right), \qquad (11)$$

where $x$ denotes the input, $IW$ the input weights, $HW$ the hidden-layer weights, $b_1$ and $b_2$ the biases, $f_1$ the tan-sigmoid transfer function, and $f_2$ the linear transfer function.
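A direct sketch of Eq. (11) with the paper's single hidden layer of three units (the weight values below are placeholders standing in for the trained ones):

```python
import numpy as np

# Placeholder weights for a 1-input, 3-hidden-unit, 1-output network
IW = np.array([[0.2, -0.5, 0.1]])      # input weights
b1 = np.array([0.0, 0.1, -0.1])        # hidden-layer bias
HW = np.array([[0.7], [-0.3], [0.4]])  # hidden-layer weights
b2 = np.array([0.05])                  # output bias

def ann_output(x):
    """Eq. (11): y = f2(f1(x*IW + b1)*HW + b2), f1 = tanh, f2 = identity."""
    return np.tanh(x @ IW + b1) @ HW + b2

y = ann_output(np.array([[0.8]]))
```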

Backpropagation training can use various optimization algorithms, such as Levenberg–Marquardt optimization, quasi-Newton backpropagation, or gradient descent. In this work, we use Levenberg–Marquardt optimization to update the weights and biases. This algorithm reaches the minimum error faster than the others; however, it requires more memory [13].
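As a hedged illustration (not the authors' code), SciPy's `least_squares` with `method="lm"` can train the same one-hidden-layer network in the Levenberg–Marquardt sense; the smooth toy target below is a placeholder for the measured inverse response:

```python
import numpy as np
from scipy.optimize import least_squares

x = np.linspace(-1.0, 1.0, 64)[:, None]
target = x + 0.2 * x ** 3              # toy smooth nonlinearity (placeholder)

def unpack(w):
    """Split the flat parameter vector into IW, b1, HW, b2."""
    return w[0:3].reshape(1, 3), w[3:6], w[6:9].reshape(3, 1), w[9]

def residuals(w):
    IW, b1, HW, b2 = unpack(w)
    y = np.tanh(x @ IW + b1) @ HW + b2  # Eq. (11) with 3 hidden units
    return (y - target).ravel()

w0 = np.random.default_rng(1).normal(scale=0.5, size=10)
fit = least_squares(residuals, w0, method="lm")  # Levenberg-Marquardt
```

Note that `method="lm"` requires at least as many residuals as parameters (64 ≥ 10 here) and does not support bound constraints.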

III. RESULTS AND DISCUSSION

Fig. 3 illustrates the simulation setup for an ANN compensator in a RoF system. An orthogonal FDM (OFDM) signal is assumed to be the radio signal. The ANN training process is performed offline. Each time the system's physical structure or transmission configuration changes, the ANN compensator must be retrained and new weights determined. During training, the target is taken from the transmitter block and the input from the receiver photodiode.

We assumed that the radio signal was an OFDM signal with 64 carriers, 16-quadrature amplitude modulation (QAM), and a central frequency of 2.2 GHz. To emulate laser nonlinearity, we measured the nonlinear response of the laser and inserted this nonlinear function as the distortion block, as shown in Fig. 3.
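The assumed baseband signal can be sketched as follows (one OFDM symbol; the unit-power constellation normalization and the omission of the 2.2-GHz upconversion and cyclic prefix are simplifications of this sketch):

```python
import numpy as np

rng = np.random.default_rng(2)
levels = np.array([-3, -1, 1, 3])

# 64 subcarriers, each carrying one 16-QAM symbol (unit average power)
qam = (rng.choice(levels, 64) + 1j * rng.choice(levels, 64)) / np.sqrt(10)

# One OFDM symbol: the IFFT maps the subcarrier symbols to the time domain
ofdm_symbol = np.fft.ifft(qam)
```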

Fig. 3. Simulation setup for the ANN compensator in the RoF system.

We investigated the optimal number of neurons in the hidden layer by measuring the radio-signal EVM for networks with between one and ten units, as shown in Table 1. From these results, we concluded that three neuron units were sufficient for this system, because additional units did not improve the EVM while requiring more processing power. We also investigated the effect of the number of hidden layers in the ANN compensator, varying it from one to three; no considerable change in EVM was observed. Thus, we concluded that one hidden layer was adequate for this ANN compensator.

Table 1. EVM results according to the number of hidden-layer neuron units

Number of neuron units | Mean squared error (E_Train) | EVM (%)
 1                     | 0.024997                     | 3.648
 2                     | 0.024877                     | 2.796
 3                     | 0.025153                     | 2.605
 4                     | 0.025153                     | 2.758
 5                     | 0.025132                     | 2.728
10                     | 0.024816                     | 2.669

Therefore, we used an ANN compensator with one hidden layer of three neuron units for the RoF system. During training, the ANN searches for the minimum mean squared error; ours reached its minimum value, 0.025153, after just eight epochs, and the training process took approximately 70 seconds.

To investigate the performance of our ANN compensator, we measured the EVM of the received signal with and without it. The results are shown in Table 2: the EVM of the signal improved from 4.027% to 2.605%, as illustrated in Fig. 4, where the square points are the ideal QAM constellation points and the circular points are the received points. The constellation without the ANN compensator in Fig. 4(a) shows a relatively large EVM, whereas the constellation with the compensator in Fig. 4(b) shows a relatively small one; the spread of the received points around each ideal point is visibly reduced.
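The EVM figure used throughout can be computed as the RMS error between received and ideal constellation points, normalized by the RMS power of the ideal constellation (one common definition; the paper's exact normalization is an assumption of this sketch):

```python
import numpy as np

def evm_percent(received, ideal):
    """RMS EVM in percent, normalized to the RMS power of the ideal points."""
    err = np.mean(np.abs(received - ideal) ** 2)
    ref = np.mean(np.abs(ideal) ** 2)
    return 100.0 * np.sqrt(err / ref)

# Toy example: QPSK-like points with a small fixed offset as the impairment
ideal = np.array([1 + 1j, -1 + 1j, -1 - 1j, 1 - 1j])
received = ideal + 0.05 * (1 + 1j)
```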

Table 2. System EVM with and without the ANN compensator

Case                           | EVM (%)
System with ANN compensator    | 2.605
System without ANN compensator | 4.027

Fig. 4. Signal constellation (a) without and (b) with the ANN compensator. EVM of the signal is improved from (a) 4.027% to (b) 2.605%.

IV. CONCLUSION

In this work, we proposed an ANN nonlinear compensator for RoF systems. The compensator is a feed-forward, supervised NN with three neuron units in one hidden layer, whose weights are determined by the backpropagation algorithm. Our results showed that the EVM improved by more than 35% (from 4.027% to 2.605%) with the proposed ANN compensator. This work is only an initial step toward using ANN compensators in RoF systems; nonetheless, we believe that they can be widely used for nonlinear compensation in optical fiber communications.

This research was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education (No. NRF-2015R1C1A1A01052543).
References

1. T. Matsumoto, M. Koga, K. Noguchi, and S. Aizawa, “Proposal for neural-network applications to fiber-optic transmission,” in Proceedings of the International Joint Conference on Neural Networks, San Diego, CA, pp. 75-80, 1990. DOI: 10.1109/IJCNN.1990.137549.
2. T. F. B. de Sousa and M. A. C. Fernandes, “Multilayer perceptron equalizer for optical communication systems,” in Proceedings of the 2013 SBMO/IEEE MTT-S International Microwave & Optoelectronics Conference, Rio de Janeiro, Brazil, pp. 1-5, 2013. DOI: 10.1109/IMOC.2013.6646479.
3. R. Abdolee, R. Ngah, V. Vakilian, and T. A. Rahman, “Application of radio-over-fiber (ROF) in mobile communication,” in Proceedings of the Asia-Pacific Conference on Applied Electromagnetics, Melaka, Malaysia, pp. 1-5, 2007. DOI: 10.1109/APACE.2007.4603945.
4. M. Xu, F. Lu, J. Wang, L. Cheng, D. Guidotti, and G. K. Chang, “Key technologies for next-generation digital RoF mobile front-haul with statistical data compression and multiband modulation,” Journal of Lightwave Technology, vol. 35, no. 17, pp. 3671-3679, 2017. DOI: 10.1109/jlt.2017.2715003.
5. S. M. Kim, “Limits of digital unit-remote radio unit distance and cell coverage induced by time division duplex profile in mobile WiMAX systems,” International Journal of Communication Systems, vol. 26, no. 2, pp. 250-258, 2013. DOI: 10.1002/dac.1356.
6. S. H. Cho, H. Park, H. S. Chung, K. H. Doo, S. S. Lee, and J. H. Lee, “Cost-effective next generation mobile fronthaul architecture with multi-IF carrier transmission scheme,” in Proceedings of the Optical Fiber Communications Conference and Exhibition, San Francisco, CA, 2014. DOI: 10.1364/OFC.2014.Tu2B.6.
7. S. M. Kim, “Nonlinearity detection and compensation in radio over fiber systems using a monitoring channel,” Journal of Information and Communication Convergence Engineering, vol. 13, no. 3, pp. 167-171, 2015. DOI: 10.6109/jicce.2015.13.3.167.
8. A. C. Najarro and S. M. Kim, “Predistortion for frequency-dependent nonlinearity of a laser in RoF systems,” Journal of Information and Communication Convergence Engineering, vol. 14, no. 3, pp. 147-152, 2016. DOI: 10.6109/jicce.2016.14.3.147.
9. D. H. Nguyen and B. Widrow, “Neural networks for self-learning control systems,” IEEE Control Systems Magazine, vol. 10, no. 3, pp. 18-23, 1990. DOI: 10.1109/37.55119.
10. J. Gill, B. Singh, and S. Singh, “Training backpropagation neural networks with genetic algorithm for weather forecasting,” in Proceedings of the 8th International Symposium on Intelligent Systems and Informatics, Subotica, Serbia, pp. 465-469, 2010. DOI: 10.1109/SISY.2010.5647319.
11. C. Ozkan and F. Sunar, “The comparison of activation functions for multispectral Landsat TM image classification,” Photogrammetric Engineering & Remote Sensing, vol. 69, no. 11, pp. 1225-1234, 2003. DOI: 10.14358/PERS.69.11.1225.
12. Caltech, “Machine learning (lecture 10 slides),” [Internet], Available: https://itunes.apple.com/us/course/machine-learning/id515364596.
13. A. Reynaldi, S. Lukas, and H. Margaretha, “Backpropagation and Levenberg–Marquardt algorithm for training finite element neural network,” in Proceedings of the 6th UKSim/AMSS European Symposium on Computer Modeling and Simulation, Valetta, Malta, pp. 89-94, 2012. DOI: 10.1109/EMS.2012.56.

Andres Caceres Najarro

received a B.Sc. degree from the Peruvian University of Applied Sciences, Lima, Peru, in 2010 and an M.S. degree from Kyungsung University, Busan, Korea, in 2016. He is currently a Ph.D. candidate at the Gwangju Institute of Science and Technology (GIST). He was awarded the Graña y Montero Peruvian Engineering Research Award, 4th edition, and was employed as a project development engineer for two years. His research interests include radio over fiber, millimeter-wave communication, terahertz-wave communication, and free-space optical communication.


Sung-Man Kim

received B.S., M.S., and Ph.D. degrees in electrical engineering from Korea Advanced Institute of Science and Technology (KAIST), Daejeon, Korea, in 1999, 2001, and 2006, respectively. His main interests during the M.S. and Ph.D. courses included performance monitoring in optical fiber communication systems. From 2006 to 2009, he was a senior engineer in the network R&D center, Samsung Electronics, Suwon, Korea, where he engaged in the research and development of Mobile WiMAX. Since 2009, he has been a faculty member in the department of electronic engineering, Kyungsung University, Busan, Korea. His current research interests include optical fiber communications, mobile communications, visible light communications, and optical power transmission.

