
Regular paper


Journal of information and communication convergence engineering 2023; 21(1): 45-53

Published online March 31, 2023

https://doi.org/10.56977/jicce.2023.21.1.45

© Korea Institute of Information and Communication Engineering

Implementation of a Sightseeing Multi-function Controller Using Neural Networks

Jae-Kyung Lee and Jae-Hong Yim*

Department of Electronics and Communication Engineering, College of Engineering, Korea Maritime and Ocean University, Busan, 49112, Korea

Correspondence to : Jae-Hong Yim (E-mail: jhyim@kmou.ac.kr, Tel: +82-51-410-4318)
Department of Electronics and Communication Engineering, College of Engineering, Korea Maritime and Ocean University, Busan, 49112, Republic of Korea

Received: May 4, 2022; Revised: January 8, 2023; Accepted: January 19, 2023

This is an Open Access article distributed under the terms of the Creative Commons Attribution Non-Commercial License (http://creativecommons.org/licenses/by-nc/3.0/) which permits unrestricted non-commercial use, distribution, and reproduction in any medium, provided the original work is properly cited.

This study constructs the various scenarios required for landscape lighting; furthermore, a large-capacity general-purpose multi-functional controller is designed and implemented to validate their operation. The multi-functional controller comprises a drive and control unit that controls the scenarios and colors of LED modules, and an LED display unit. In addition, we conduct a computer simulation of a control system designed, using the neuro-control system, to produce the most appropriate color according to the input values of temperature, illuminance, and humidity. Examining the results and output colors under neuro-control shows that, unlike existing crisp logic, neuro-control does not require the storage of large amounts of input data because of the characteristics of artificial intelligence; the desired values can be obtained simply by training with learning data.

Keywords Back-propagation, Learning, LED lighting, Multi-function, Neural network

Extensive support for landscape lighting will be required in the future to create smart cities; however, owing to a lack of technology and the burden of designing scenario-creation programs, such support is presently limited. Therefore, this study designs and implements a large-capacity general-purpose landscape lighting multi-function control system that a landscape lighting installer can use easily, without requiring a special control system design or program configuration. Furthermore, the system is designed to be compatible with both 220 and 110 V AC for worldwide use. Based on local and foreign market surveys, 25 scenarios and 16-color high-capacity artificial controls can be used to create high-capacity, intelligence-driven, multi-color programs. The load capacity for landscape lighting was designed and manufactured such that a landscape lighting project of up to 3,000 W can be easily connected and operated [1].

Artificial intelligence was added to the program to create the most appropriate emotional lighting suitable for the atmosphere; a neuro-control system was used to implement artificial intelligence.

We plan to install temperature and illuminance sensors on the system semiconductor board to adapt to the atmosphere, that is, to hot, cold, bright, and dark environments, and to develop improved functions after further market research [2].

Existing landscape lighting controllers do not offer various landscape scenarios, are simple, and are unsuitable for smart landscape lighting; therefore, this study constructs the functions of various scenarios and adds neural networks to make the color control smart.

The controller was configured and implemented, and a computer simulation was performed.

Previous studies on lighting control systems used conventional control; in contrast, this study investigates a neural network and compares it against the operation of an actual experimental prototype and a computer simulation. Unlike conventional control, a neural network does not need to store many data input values and is simply trained on learning data to control the desired values. Therefore, LED lighting control through artificial intelligence is a more organic and efficient system than conventional LED lighting control [3].

In the 20th century, advances in semiconductor technology produced devices that emit light from solid crystals. With the recent development of semiconductor light-emitting diodes (LEDs) that are sufficiently bright for lighting, a novel solid-state lighting technology has emerged.

This technology is rapidly penetrating the existing lighting device field by exploiting the unique light-emitting characteristics of LED light sources, and it may be widely used in general lighting devices, including fluorescent replacements, in the future. The evolution of light sources for lighting is shown in Fig. 1 [4].

Fig. 1. Brief history of lighting.

The main lighting characteristics of LED light sources are summarized as follows. Structurally, unlike conventional light sources, LEDs are solid-state, small point light sources that use no glass electrodes, filaments, or mercury (Hg); therefore, they are robust, long-lived, and environmentally friendly. Accordingly, lighting technology that uses LEDs is called semiconductor lighting technology because it uses a solid-state light source, unlike conventional lighting technologies. Optically, LEDs emit vivid monochromatic light, incur little light loss, and improve visibility when applied to lighting fixtures requiring particular colors (or wavelengths); as directional light sources, they also significantly reduce lamp loss. In addition, they offer better dimming control than existing light sources and can thus easily produce various colors [5].

Electrically, an LED is a DC-driven light source (AC operation is also possible owing to its diode characteristics) that starts lighting above a specific voltage. After lighting, the current and light intensity change sensitively even with a small voltage change. In addition, because the rated voltage changes with ambient temperature, environmental adaptation is poor when driving at a constant voltage; in principle, an LED is driven by a constant-current source. Accordingly, to operate an LED lighting device safely, a dedicated power supply (ballast) suited to the characteristics of the LED lamp is required. Environmentally, when the temperature increases, the allowable current and light output decrease, a large quantity of heat is generated, and the dynamic characteristics change sensitively with ambient and operating temperatures. If a current exceeding the allowable current flows, the life is significantly reduced and the performance is significantly degraded; therefore, appropriate heat dissipation technology is required in addition to a dedicated power supply.

In terms of electrical dynamic characteristics, an LED is unique: owing to its diode characteristics, the electrical polarity must match, the current increases rapidly above a certain voltage, and the brightness is directly proportional to the current. The rated driving voltage of a single LED varies with the light-emitting color (semiconductor type) and slightly with ambient temperature. Generally, LEDs operate at low voltages of 2-4 V [6].

In terms of thermal dynamic characteristics, the light output and efficiency of an LED increase as the temperature of the junction decreases, even if the current is constant; this is unlike conventional light sources (incandescent and fluorescent lamps). Therefore, the higher the temperature, the lower the light output and efficiency; if necessary, the heat generated at the junction must be properly dissipated to improve the lighting performance.

An LED is a solid-state light-emitting device without filaments or bulbs. If an appropriate power supply and heat sink are used, it can maintain its lighting state for more than 100,000 h without burnout. Accordingly, LEDs are sometimes referred to as semi-permanent light sources. However, all light sources gradually lose light output over time, and a decline to 80% of the initial intensity is barely noticeable to humans. By this standard, the lifetime of an LED is currently estimated at approximately 40,000 to 50,000 h. Compared with the lifetimes of 1,500 h for incandescent bulbs and 10,000 h for fluorescent lamps, LEDs can therefore be considered an extremely long-life light source.
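The lifetime comparison above works out as follows (a quick arithmetic check using the figures quoted in the text; the mid-point LED value is our choice):

```python
# Lifetime figures quoted above; the LED value is the mid-point of the
# 40,000-50,000 h estimate.
led_life_h = 45_000
incandescent_life_h = 1_500
fluorescent_life_h = 10_000

print(led_life_h // incandescent_life_h)  # 30 -> ~30x the incandescent lifetime
print(led_life_h // fluorescent_life_h)   # 4  -> ~4x the fluorescent lifetime
```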

Because LEDs emit monochromatic light in a narrow wavelength band determined by the semiconductor type and offer high visibility, excellent lighting performance and effective light-emission efficiency can be expected when they are applied to lighting equipment requiring particular colors. For example, a traffic light using a 15 lm/W incandescent bulb has a red transmittance of approximately 10% and thus an effective emission efficiency of 1.5 lm/W, whereas an LED emits clear red directly at more than 30 lm/W; thus, energy savings of more than 90% are possible compared with the bulb type. In addition, LED lighting is expected to reduce maintenance costs owing to its long life, and traffic accidents owing to improved visibility. Major applications include emotional lighting, traffic lights, aviation obstruction lights, emergency exits, and lighting equipment that requires particular colors.
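The traffic-light example can be checked numerically; the efficacy and transmittance figures come from the paragraph above, while the variable names are ours:

```python
# A 15 lm/W incandescent bulb behind a red filter with ~10% transmittance,
# versus a red LED emitting more than 30 lm/W directly (figures from the text).
incandescent_lm_per_w = 15
red_filter_transmittance = 0.10
effective_incandescent = incandescent_lm_per_w * red_filter_transmittance  # 1.5 lm/W

led_lm_per_w = 30  # red LED needs no filter

# Power needed for the same red light output is inversely proportional
# to the effective luminous efficacy.
savings = 1 - effective_incandescent / led_lm_per_w
print(f"Effective incandescent efficacy: {effective_incandescent} lm/W")
print(f"Energy savings with LED: {savings:.0%}")  # 95%, i.e., more than 90%
```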

Conventional control theories that rely entirely on the mathematical models of control plants have shown limitations in dealing with such systems. Obtaining accurate mathematical models for most real-world dynamic systems is challenging because they are nonlinear and often time-varying with uncertain elements.

Neural networks are a part of artificial intelligence involved in the creation of systems similar to humans that can analyze situations on their own by accumulating knowledge and experience through learning. Neural networks aim to simplify the brain's biological neurons and their associations as well as model them mathematically to mimic the intelligent actions of the brain. Neural networks are conceptually simple and learn by organizing their internal structures for a given input [7].

Studies on neural networks were first conducted by McCulloch and Pitts in 1943. They considered the human brain a calculator composed of numerous nerve cells, and presented a model that performs simple logical tasks, recognizing the significance of pattern classification in identifying intelligent human behavior. Hebb proposed the first learning rule for adjusting the weight between two neurons; this work had a significant influence on the study of adaptive neural networks. In 1957, Rosenblatt published a neural model called the “perceptron” that enabled the study of practical neural networks. After Minsky and Papert analyzed the perceptron model mathematically and showed that it could not solve simple nonlinear problems, such as the XOR function, research on neural networks went into recession for 20 years [8].

However, in the 1980s, neural networks were revived by Hopfield et al. The error backpropagation algorithm, the most commonly used learning algorithm for neural networks, was established by Werbos and Parker. Neural networks developed against this historical background are now widely applied in fields such as pattern recognition, voice recognition, control systems, medical diagnosis, and communication systems. This section examines the structures of neural networks, learning algorithms, and neural network models for pattern recognition and prediction to determine how neural networks are used in this study [9-10].

A. Structure and Learning of Multi-Layered Neural Networks

The human brain is connected by numerous neurons. Artificial neural networks modeled on the brain therefore have multi-layered structures, and their interconnected neurons yield better performance. In general, larger networks provide larger computational capacity. The arrangement of neurons in layers mimics the layered structure of the brain. The most commonly used neural network structure in applications such as pattern recognition, system identification, and control is a multi-layered neural network with the error backpropagation algorithm. A typical multi-layered neural network is shown in Fig. 2 [11].

Fig. 2. Multi-layer neural network.

Each circle in Fig. 2 is a neuron. This neural network consists of an input layer with input vector x and an output layer with output vector y; the layer between the input and output layers is referred to as a hidden layer.

In Fig. 2, O_i, O_j, and O_k are the outputs of each neuron of the input, hidden, and output layers; the weight between the input and hidden layers is denoted as W_ji, whereas that between the hidden and output layers is denoted as W_kj. Information is stored in the weights of the neural network, and the weight components W_ji and W_kj are constantly replaced with new information during the learning process. The representative learning algorithm that updates this information is the error backpropagation algorithm, a least-mean-square method that iteratively minimizes the error between the output of the final output-layer neurons and the desired output [12-13].

First, the error backpropagation learning algorithm sends the input x of the neural network from the input layer to the hidden layer. Second, each neuron in the hidden layer sums the products of the inputs and the corresponding weights and sends the result, computed through the activation function, to the output layer. The output layer performs the same neuronal operations as the hidden layer. The output value of the neural network then differs from the desired target value, and this difference is called the error. To minimize this error, the weights are adjusted using the partial derivatives of the error with respect to the weights in each layer. That is, after calculating the error between the output-layer output and the desired target value, the weights are adjusted by the amount of weight change caused by the error propagated backward from the output layer to the hidden layer; hence the name error backpropagation.

The error backpropagation learning algorithm is mathematically represented as follows.

First, the neuron outputs of the input, hidden, and output layers are as shown in (2), (4), and (6).

$$net_i = x_i, \quad i = 1, 2, \ldots, n \tag{1}$$

$$O_i = \lambda f(net_i) \tag{2}$$

$$net_j = \sum_i W_{ji} O_i \tag{3}$$

$$O_j = \lambda f(net_j) \tag{4}$$

$$net_k = \sum_j W_{kj} O_j \tag{5}$$

$$O_k = \lambda f(net_k) \tag{6}$$

where f is the activation function, net is the weighted sum of the previous layer's neuron outputs under the current layer's weights, and λ is the slope of the activation function. To train the neural network, the error should be obtained, which is the difference between the output value of the neural network and the desired target value; this error is obtained as shown in (7).

$$E = \frac{1}{2} \sum_k (D_k - O_k)^2 \tag{7}$$

Because learning aims to minimize the error E by adjusting the weights, each weight is changed in the negative gradient direction. The weight change is therefore obtained from the partial derivative of the error with respect to that weight. The weight change between the hidden and output layers is expressed as follows:

$$\Delta W_{kj} = -\eta \frac{\partial E}{\partial W_{kj}} \tag{8}$$

where η is a constant representing the learning rate. In addition, (8) can be expressed using the chain rule as follows:

$$\frac{\partial E}{\partial W_{kj}} = \frac{\partial E}{\partial O_k} \frac{\partial O_k}{\partial net_k} \frac{\partial net_k}{\partial W_{kj}} = -(D_k - O_k)\, \lambda f'(net_k)\, O_j \tag{9}$$

Suppose the activation function is linear. In that case, the weight change is expressed as follows:

$$\Delta W_{kj} = \eta (D_k - O_k) O_j \tag{10}$$

The weight change for the hidden layer is likewise taken in the negative gradient direction to minimize the error:

$$\Delta W_{ji} = -\eta \frac{\partial E}{\partial W_{ji}} \tag{11}$$

Using the chain rule, (11) can be written as follows:

$$\frac{\partial E}{\partial W_{ji}} = \sum_k \frac{\partial E}{\partial O_k} \frac{\partial O_k}{\partial net_k} \frac{\partial net_k}{\partial O_j} \frac{\partial O_j}{\partial W_{ji}} = -\sum_k (D_k - O_k) W_{kj} O_i \tag{12}$$

Using (12), the amount of change in the weight between the input and hidden layers is expressed as follows:

$$\Delta W_{ji} = \eta \sum_k (D_k - O_k) W_{kj} O_i \tag{13}$$

Therefore, the changes in weight are as follows.

$$W_{ji} \leftarrow W_{ji} + \Delta W_{ji} \tag{14}$$

$$W_{kj} \leftarrow W_{kj} + \Delta W_{kj} \tag{15}$$
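The derivation above can be sketched as a single weight update. This is a minimal sketch assuming linear activations (so the λ f′(net) factor is 1, as in (10) and (13)) and illustrative layer sizes rather than the paper's 3-10-3 controller; the variable names are ours.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes (not the paper's 3-10-3 network): 2 inputs, 3 hidden, 1 output.
n_in, n_hid, n_out = 2, 3, 1
W_ji = rng.uniform(-0.5, 0.5, (n_hid, n_in))   # input -> hidden weights
W_kj = rng.uniform(-0.5, 0.5, (n_out, n_hid))  # hidden -> output weights
eta = 0.1                                      # learning rate

x = np.array([0.5, -0.2])   # input vector (O_i = x with linear activation)
d = np.array([1.0])         # desired target D_k

# Forward pass with linear activation: O = net.
O_j = W_ji @ x              # hidden outputs, eqs. (3)-(4)
O_k = W_kj @ O_j            # network outputs, eqs. (5)-(6)
E_before = 0.5 * np.sum((d - O_k) ** 2)        # error, eq. (7)

# Backward pass: weight changes per eqs. (10) and (13).
delta_k = d - O_k                              # (D_k - O_k)
dW_kj = eta * np.outer(delta_k, O_j)           # eq. (10)
dW_ji = eta * np.outer(W_kj.T @ delta_k, x)    # eq. (13)
W_kj += dW_kj                                  # eq. (15)
W_ji += dW_ji                                  # eq. (14)

E_after = 0.5 * np.sum((d - W_kj @ (W_ji @ x)) ** 2)
print(E_before > E_after)  # a small step in the negative gradient direction reduces E
```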

As previously explained, the error backpropagation algorithm requires desired response values to calculate error signals and adjust the weights of the neural network. After the initial learning, the neural network is given a new set of data that has not been used for learning.

The accuracy of the network on data outside the learned set indicates the neural network's ability to generalize, which reflects its reliability. After the learning and testing steps, neural networks can be used to model pattern classifiers, unknown nonlinear functions, and complex processes.

When training neural networks, the initial weights are set to small random values; because this initialization influences the final output, the initial weights typically take values between −0.5 and 0.5. In addition, the convergence of the error backpropagation algorithm may vary with the learning rate. The learning rate is selected differently depending on the structure and application of the neural network, and no fixed criterion exists (a value generally between 0 and 1). If a large learning rate is used, overshoot may occur, whereas a small learning rate slows learning; therefore, the learning rate is selected appropriately within this range.
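The learning-rate trade-off described above can be seen on a one-weight toy problem, E(w) = ½(d − wx)², which is our illustration and not the paper's model; the specific η values are chosen only to show the three regimes.

```python
# Gradient descent on E(w) = 0.5 * (d - w * x)**2 with different learning rates.
def train(eta, steps=20):
    x, d, w = 2.0, 1.0, 0.0
    for _ in range(steps):
        w += eta * (d - w * x) * x   # gradient step, analogous to eq. (10)
    return 0.5 * (d - w * x) ** 2    # final error

small = train(eta=0.05)   # converges, but slowly
good = train(eta=0.2)     # converges quickly
large = train(eta=0.6)    # overshoots: here eta * x**2 > 2, so the error grows
print(small, good, large)
```

The error after each step scales by (1 − ηx²), so η = 0.05 shrinks it slowly, η = 0.2 shrinks it fast, and η = 0.6 makes the factor exceed 1 in magnitude and diverge, which is the overshoot behavior described in the text.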

This study uses three input variables and three output variables in the structure of the multi-layer neural network, as shown in Fig. 2. The neurons were composed of three input, 10 hidden, and three output layer neurons, and were designed as i = 3, j = 10, and k = 3. Here, a computer simulation was conducted by designing the desired output R, G, and B values according to the input values of illuminance, temperature, and humidity. In addition, the R, G, and B values of the output were combined to express the LED color.
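The 3-10-3 structure described above can be sketched as a small training loop. This is a hedged illustration, not the paper's actual simulation: the sample rows follow the style of Table 1, and the normalization constants, sigmoid hidden layer, and iteration count are our assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Four rows in the style of Table 1 (inputs: illuminance, temperature, humidity;
# targets: R, G, B), normalized to roughly [0, 1] (normalization is our choice).
X = np.array([[10, 10, 30],
              [10, 25, 50],
              [25, 25, 50],
              [40, 40, 70]]) / np.array([40.0, 40.0, 70.0])
Y = np.array([[255, 0, 0],
              [255, 255, 0],
              [0, 255, 0],
              [0, 0, 255]]) / 255.0

# 3-10-3 structure (i = 3, j = 10, k = 3), initial weights in [-0.5, 0.5].
W1 = rng.uniform(-0.5, 0.5, (10, 3))
W2 = rng.uniform(-0.5, 0.5, (3, 10))
eta = 0.5

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(5000):
    H = sigmoid(X @ W1.T)      # hidden outputs O_j
    O = H @ W2.T               # linear output layer O_k
    err = Y - O                # D_k - O_k, as in eq. (7)
    dW2 = eta * err.T @ H / len(X)                         # cf. eq. (10)
    dW1 = eta * ((err @ W2) * H * (1 - H)).T @ X / len(X)  # cf. eq. (13), sigmoid slope
    W2 += dW2
    W1 += dW1

O = sigmoid(X @ W1.T) @ W2.T
mse = float(np.mean((Y - O) ** 2))
print(np.clip(O * 255, 0, 255).round())  # should approach the target R, G, B rows
```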

The internal configuration of the LED lighting control system for landscape lighting comprises a power supply unit, an AVR control unit, a CLCD output section unit, and an LED control unit.

A. Power Supply Unit

A switched-mode power supply (SMPS), a device that converts a DC voltage into a square-wave voltage using IC devices such as power transistors and outputs a DC voltage after smoothing with a filter, was used to supply power to the LED lighting control board. The maximum voltage used on the control board was 12 V DC; thus, an AC/DC converter was used to convert 220 V AC to 12 V DC.

B. AVR Control Unit

The MCU of this system used the ATmega128 model, which is an 8-bit RISC microcontroller from Atmel.

The AVR control unit receives the ADC values of the illuminance, temperature, and humidity sensors and controls the color of each RGB LED module. Color control of the RGB LED driver was performed through a timer/counter function. Using this function, we generated a pulse-width modulation (PWM) output to adjust the luminance ratio of red, green, and blue. A 16-bit timer/counter was used. Among the available modes, such as fast PWM and CTC, the CTC mode was used. In CTC mode, the counter value is continuously compared with the OCR value during counting, and a match signal is output when they are equal, causing the waveform generator to output a pulse waveform.
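The mapping from an 8-bit color level to a compare value can be sketched as follows. This is a hedged illustration of the general OCR/duty-cycle relationship on a 16-bit counter; the TOP value and the helper names are ours, not taken from the actual firmware.

```python
TIMER_TOP = 0xFFFF  # full period of a 16-bit counter

def ocr_for_level(level_8bit):
    """Compare value giving a duty ratio proportional to an 8-bit RGB level."""
    return level_8bit * TIMER_TOP // 255

def duty_ratio(ocr):
    """Fraction of the period the output stays high for a given compare value."""
    return ocr / TIMER_TOP

ocr_red = ocr_for_level(255)    # full brightness -> 100% duty
ocr_green = ocr_for_level(128)  # half brightness -> ~50% duty
print(duty_ratio(ocr_red), round(duty_ratio(ocr_green), 2))
```

Scaling each of the red, green, and blue levels this way is what lets the controller mix the luminance ratio of the three channels into a single perceived color.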

C. CLCD Output Section Unit

The LCD used was a typical 16 × 4 line character LCD (CLCD). It was implemented such that the PWM values of red, green, and blue currently being output can be checked in real time through the CLCD.

D. LED control unit

An RGB LED module was used to display the calculated value as an RGB color. Because the output from the MCU was 5 V DC, which is the TTL voltage level, it could not drive the RGB LED module directly. The driving voltage of the RGB LED module was 12 V DC, applied directly from the SMPS (power supply) unit. The high and low outputs from the MCU were connected to the gate of a MOSFET, which switched the RGB LED module on and off. The color was expressed by applying power to the module according to the signal at the gate.

The RGB LED module had four lines, R, G, B, and COM, and the MOSFET was connected to the other three lines, except COM, to control the output color.

A large-capacity general-purpose multi-function controller for landscape lighting was constructed, as shown in Fig. 3. The scenario actions include “switching 16-color panorama program,” “16-color program with dimming,” and “16-color program with 3 flashes.” Each scenario can be switched ON/OFF using its selection switch, and the operation speed of a running scenario is adjusted using a separate speed-control switch, from the slowest (0) to the fastest (9); the speed is displayed numerically on the 7-segment display.

Fig. 3. Multi-function controller.

In the discussion after the experiment, we verified the sightseeing multi-function controller that was designed and manufactured, as shown in Fig. 3. All the configured scenarios worked well.

Using the multi-function design and neural networks, the actual smart landscape lighting controller comprised 25 types of scenarios and 16 color expressions. The simulation results are listed in Tables 1, 2, and 3. In addition, the input voltage of the controller was designed for both 12 and 24 V DC.

Table 1. R, G, and B output values according to the illuminance, temperature, and humidity inputs (learning data)

Illuminance (lux) Temperature (°C) Humidity (%) R, G, B (OUT) LED Color
No. 1 10 10 30 R=255, G=0, B=0 Red
No. 2 10 10 50 R=224, G=255, B=255 Light Cyan
No. 3 10 10 70 R=255, G=165, B=0 Orange
No. 4 10 25 30 R=173, G=255, B=47 Green Yellow
No. 5 10 25 50 R=255, G=255, B=0 Yellow
No. 6 10 25 70 R=30, G=144, B=255 Dodger Blue
No. 7 10 40 30 R=0, G=255, B=255 Cyan
No. 8 10 40 50 R=224, G=255, B=255 Light Cyan
No. 9 10 40 70 R=238, G=130, B=238 Violet
No. 10 25 10 30 R=255, G=182, B=193 Light Pink
No. 11 25 10 50 R=128, G=0, B=128 Purple
No. 12 25 10 70 R=255, G=182, B=193 Light Pink
No. 13 25 25 30 R=255, G=255, B=0 Yellow
No. 14 25 25 50 R=0, G=255, B=0 Green
No. 15 25 25 70 R=30, G=144, B=255 Dodger Blue
No. 16 25 40 30 R=255, G=0, B=255 Magenta
No. 17 25 40 50 R=238, G=130, B=238 Violet
No. 18 25 40 70 R=173, G=255, B=47 Green Yellow
No. 19 40 10 30 R=255, G=255, B=0 Yellow
No. 20 40 10 50 R=255, G=165, B=0 Orange
No. 21 40 10 70 R=0, G=255, B=255 Cyan
No. 22 40 25 30 R=173, G=255, B=47 Green Yellow
No. 23 40 25 50 R=255, G=182, B=193 Light Pink
No. 24 40 25 70 R=128, G=0, B=128 Purple
No. 25 40 40 30 R=0, G=255, B=255 Cyan
No. 26 40 40 50 R=255, G=0, B=255 Magenta
No. 27 40 40 70 R=0, G=0, B=255 Blue


Table 2. Comparison of crisp and neural output results under the same conditions

Illuminance (lux) Temperature (°C) Humidity (%) Crisp Output Neural Output LED Color
No.1 10 10 30 R=255, G=0, B=0 R=255, G=0, B=0 Red
No.2 10 10 50 R=224, G=255, B=255 R=224, G=255, B=255 Light Cyan
No.3 10 10 70 R=255, G=165, B=0 R=255, G=165, B=0 Orange
No.4 10 25 30 R=173, G=255, B=47 R=173, G=255, B=47 Green Yellow
No.5 10 25 50 R=255, G=255, B=0 R=255, G=255, B=0 Yellow
No.6 10 25 70 R=30, G=144, B=255 R=30, G=144, B=255 Dodger Blue
No.7 10 40 30 R=0, G=255, B=255 R=0, G=255, B=255 Cyan
No.8 10 40 50 R=224, G=255, B=255 R=224, G=255, B=255 Light Cyan
No.9 10 40 70 R=238, G=130, B=238 R=238, G=130, B=238 Violet
No.10 25 10 30 R=255, G=182, B=193 R=255, G=182, B=193 Light Pink
No.11 25 10 50 R=128, G=0, B=128 R=128, G=0, B=128 Purple
No.12 25 10 70 R=255, G=182, B=193 R=255, G=182, B=193 Light Pink
No.13 25 25 30 R=255, G=255, B=0 R=255, G=255, B=0 Yellow
No.14 25 25 50 R=0, G=255, B=0 R=0, G=255, B=0 Green
No.15 25 25 70 R=30, G=144, B=255 R=30, G=144, B=255 Dodger Blue
No.16 25 40 30 R=255, G=0, B=255 R=255, G=0, B=255 Magenta
No.17 25 40 50 R=238, G=130, B=238 R=238, G=130, B=238 Violet
No.18 25 40 70 R=173, G=255, B=47 R=173, G=255, B=47 Green Yellow
No.19 40 10 30 R=255, G=255, B=0 R=255, G=255, B=0 Yellow
No.20 40 10 50 R=255, G=165, B=0 R=255, G=165, B=0 Orange
No.21 40 10 70 R=0, G=255, B=255 R=0, G=255, B=255 Cyan
No.22 40 25 30 R=173, G=255, B=47 R=173, G=255, B=47 Green Yellow
No.23 40 25 50 R=255, G=182, B=193 R=255, G=182, B=193 Light Pink
No.24 40 25 70 R=128, G=0, B=128 R=128, G=0, B=128 Purple
No.25 40 40 30 R=0, G=255, B=255 R=0, G=255, B=255 Cyan
No.26 40 40 50 R=255, G=0, B=255 R=255, G=0, B=255 Magenta
No.27 40 40 70 R=0, G=0, B=255 R=0, G=0, B=255 Blue



The desired R (red), G (green), and B (blue) values according to the illuminance, temperature, and humidity values provided to the designed input values of the neurocontroller are listed in Table 1. This is used as learning data for the neurocontroller.

The crisp and neuro-control output values under the same conditions are listed in Table 2. As the table shows, the crisp and neuro-control outputs are identical under the same input conditions.

The values derived from the simulation of the neurocontroller are listed in Table 3. As shown in Table 3, the neurocontroller handles even input values other than the predetermined learned values, producing the most appropriate output from the learned data. In contrast, crisp control cannot produce an output if the illuminance, temperature, and humidity take intermediate values other than those listed in Table 1.
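The contrast drawn above can be made concrete with a sketch: crisp control is effectively a fixed lookup table over the inputs in Table 1, so unlisted inputs have no defined output, whereas a trained neurocontroller still produces a nearby color. The table below uses only three sample rows from Table 1, and `crisp_output` is our illustrative stand-in for crisp logic.

```python
# Crisp logic as a fixed lookup over (illuminance, temperature, humidity).
crisp_table = {
    (10, 10, 30): (255, 0, 0),   # Red
    (25, 25, 50): (0, 255, 0),   # Green
    (40, 40, 70): (0, 0, 255),   # Blue
}

def crisp_output(illum, temp, humid):
    # Raises KeyError for any input combination not stored in advance.
    return crisp_table[(illum, temp, humid)]

print(crisp_output(10, 10, 30))   # listed input: defined output
try:
    crisp_output(38, 20, 62)      # arbitrary input, as in Table 3
except KeyError:
    print("crisp control: no output defined for (38, 20, 62)")
```

A neurocontroller trained on the same rows would instead interpolate, which is why Table 3 shows sensible colors for arbitrary inputs.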

Table 3. Neural output results according to arbitrary input values of illuminance, temperature, and humidity

Illuminance (lux) Temperature (°C) Humidity (%) Neural Output LED Color
No.1 38 20 62 R=57, G=238, B=225 Cyan
No.2 17 10 42 R=125, G=21, B=165 Purple
No.3 12 16 38 R=133, G=19, B=125 Purple
No.4 29 13 36 R=235, G=124, B=225 Violet
No.5 27 28 33 R=233, G=169, B=143 Light Pink
No.6 30 29 69 R=32, G=175, B=225 Dodger Blue
No.7 18 15 44 R=225, G=211, B=36 Yellow
No.8 20 12 42 R=147, G=225, B=25 Green Yellow
No.9 28 23 35 R=251, G=66, B=253 Magenta
No.10 15 17 43 R=225, G=214, B=25 Yellow
No.11 30 15 35 R=240, G=134, B=205 Violet
No.12 25 18 44 R=107, G=230, B=25 Green Yellow
No.13 31 13 37 R=226, G=147, B=188 Light Pink
No.14 38 22 36 R=223, G=169, B=143 Light Pink
No.15 24 14 40 R=226, G=205, B=15 Orange
No.16 10 25 43 R=125, G=200, B=20 Green Yellow
No.17 36 26 64 R=85, G=208, B=225 Cyan
No.18 38 25 65 R=89, G=212, B=225 Cyan
No.19 36 25 66 R=85, G=208, B=225 Cyan
No.20 19 20 41 R=216, G=225, B=25 Yellow
No.22 40 27 70 R=57, G=149, B=225 Dodger Blue
No.23 10 22 42 R=222, G=213, B=25 Yellow
No.24 18 11 35 R=216, G=147, B=188 Light Pink
No.25 20 13 44 R=212, G=218, B=25 Yellow
No.26 35 22 63 R=88, G=215, B=220 Cyan
No.27 16 21 40 R=206, G=225, B=25 Yellow
No.28 40 19 35 R=216, G=147, B=158 Light Pink
No.29 17 10 42 R=137, G=225, B=15 Green Yellow
No.30 39 20 34 R=241, G=56, B=233 Magenta


The output of the neurocontroller is determined by learning from various input variables and outputs. Even with the same input value, the output value may vary depending on the output data of the experts and designers from whom it learned. Unlike crisp logic, storing a large amount of input data is unnecessary, and the controller has the advantage of reaching a desired value simply by training with learning data. Owing to these characteristics, LED lighting control through the neuro-control system is more organic and efficient than general LED lighting control.

In this study, several scenarios required for landscape lighting were constructed, and a large-capacity general-purpose multi-functional controller was designed and implemented to validate their operation.

The hardware of the control system comprised an AVR control unit, LED module output unit, LED control unit, scenario selection switches, and an operating-speed display, and was manufactured as a 13-channel device. The CPU was an ATmega128, and FETs were used to control the current signals. To operate the CPU, 12 V DC was converted into 5 V DC using a 7805 regulator.

In addition, a computer simulation was conducted by designing a control system to produce the most appropriate color according to the input values of temperature, illuminance, and humidity using the neuro-control system. The results and output colors show that, unlike the existing crisp logic, neuro-control does not require the storage of large amounts of input data because of the characteristics of artificial intelligence, and it can reach the desired value by training with learning data. Therefore, LED lighting control through the neuro-control system is more organic and efficient than general LED lighting control.

In future studies, using the low-power MSP432 MCU from Texas Instruments, which is more advanced than the ATmega128, together with emerging neural-network machine learning technology, is expected to lead to better performance.

  1. S. Li, A. Pandharipande, and F. M. J. Willems, "Unidirectional visible light communication and illumination with LEDs," IEEE Sensors Journal, vol. 16, no. 23, pp. 8617-8626, Dec. 2016. DOI: 10.1109/JSEN.2016.2614968.
  2. C. Yao, Z. Guo, G. Long, and H. Zhang, "Performance comparison among ASK, FSK, and DPSK in visible light communication," Optics and Photonics Journal, vol. 6, no. 8B, pp. 150-154, Aug. 2016. DOI: 10.4236/opj.2016.68B025.
  3. B. H. Jeong, N. O. Kim, D. G. Kim, G. G. Oh, G. B. Cho, and K. Y. Lee, "Analysis of property for white and RGB multichip LED luminaire," Journal of the Korean Institute of Illuminating and Electrical Installation Engineers, vol. 23, no. 12, pp. 23-30, Dec. 2009. DOI: 10.5207/JIEIE.2009.23.12.023.
  4. C. D. Wrege, P. J. Gordon, and R. A. Greenwood, "Electric lamp renewal systems: a strategy to dominate lighting," Journal of Historical Research in Marketing, vol. 6, no. 4, pp. 485-500, Nov. 2014. DOI: 10.1108/JHRM-07-2013-0046.
  5. Q. D. Ou, L. Zhou, Y. Q. Li, S. Shen, J. D. Chen, C. Li, Q. K. Wang, S. T. Lee, and J. X. Tang, "Extremely efficient white organic light-emitting diodes for general lighting," Advanced Functional Materials, vol. 24, no. 46, pp. 7249-7256, Dec. 2014. DOI: 10.1002/adfm.201402026.
  6. S. Nakamura and M. R. Krames, "History of gallium-nitride-based light-emitting diodes for illumination," Proceedings of the IEEE, vol. 101, no. 10, pp. 2211-2220, Oct. 2013. DOI: 10.1109/JPROC.2013.2274929.
  7. E. Alba, J. F. Aldana, and J. M. Troya, "Genetic algorithms as heuristics for optimizing ANN design," in International Conference on Artificial Neural Nets and Genetic Algorithms, Innsbruck, Austria, pp. 683-689, 1993. DOI: 10.1007/978-3-7091-7533-0_99.
  8. J. Schmidt-Hieber, "Nonparametric regression using deep neural networks with ReLU activation function," Annals of Statistics, vol. 48, no. 4, pp. 1875-1897, Aug. 2020. DOI: 10.1214/19-AOS1875.
  9. B. G. Farley and W. A. Clark, "Simulation of self-organizing systems by digital computer," Transactions of the IRE Professional Group on Information Theory, vol. 4, no. 4, pp. 76-84, Sep. 1954. DOI: 10.1109/TIT.1954.1057468.
  10. F. Rosenblatt, "The perceptron: A probabilistic model for information storage and organization in the brain," Psychological Review, pp. 386-408, 1958. DOI: 10.1037/h0042519.
  11. M. Zadkarami, M. Shahbazian, and K. Salahshoor, "Pipeline leakage detection and isolation: An integrated approach of statistical and wavelet feature extraction with multi-layer perceptron neural network (MLPNN)," Journal of Loss Prevention in the Process Industries, vol. 43, pp. 479-487, Sep. 2016. DOI: 10.1016/j.jlp.2016.06.018.
  12. F. B. Fitch, "A logical calculus of ideas immanent in nervous activity," Journal of Symbolic Logic, vol. 9, no. 2, pp. 49-50, 1944. DOI: 10.2307/2268029.
  13. G. E. Hinton, S. Osindero, and Y. W. Teh, "A fast learning algorithm for deep belief nets," Neural Computation, vol. 18, no. 7, pp. 1527-1554, Jul. 2006. DOI: 10.1162/neco.2006.18.7.1527.

Jae-Kyung Lee

received the B.S. degree in Electronics & Communication Engineering from Hanyang University, Korea, in 2012, and received the M.S. degree in Electronics & Communication Engineering from Korea Maritime & Ocean University, Korea, in 2020. His research interests include electronics and communication engineering with industrial applications.


Jae-Hong Yim

received the B.S. degree from Sogang University, and M.S. degree and Ph.D. from Hanyang University, Korea, in 1986, 1988, and 1995, respectively, in Electronic Engineering. His research interests include computer networks and embedded systems.




Keywords: Back-propagation, Learning, LED lighting, Multi-function, Neural network

I. INTRODUCTION

Extensive support for landscape lighting will be required to create future smart cities; however, the lack of control technology and the burden of designing scenario-creation programs currently make this difficult. Therefore, this study designs and implements a large-capacity general-purpose landscape lighting multifunction control system that a landscape lighting installer can use easily, without requiring a special control system design or program configuration. Furthermore, the system is designed to be compatible with both 220 and 110 V AC for worldwide use. Based on local and foreign market surveys, 25 scenarios and 16-color large-capacity artificial controls can be combined into large-capacity, intelligence-driven, multi-color programs. The load capacity for landscape lighting was designed and manufactured such that a landscape lighting project can easily be connected at a large capacity of 3,000 W [1].

Artificial intelligence was added to the program to create the most appropriate emotional lighting suitable for the atmosphere; a neuro-control system was used to implement artificial intelligence.

We plan to install temperature and illumination sensors on the system semiconductor board so that the lighting adapts to the atmosphere, that is, to hot, cold, bright, and dark environments, and to develop improved functions after further market research [2].

Existing landscape lighting controllers do not provide various landscape scenarios, are simple, and are unsuitable for smart landscape lighting; therefore, this study constructs various scenario functions and adds neural networks to make the color control smart.

The controller was configured and implemented, and a computer simulation was performed.

Previous studies on lighting control systems used conventional control; in contrast, this study investigates a neural network approach and compares it through actual experimental operation and computer simulation. Unlike conventional control, a neural network need not store many data input values; it simply learns from learning data to control the desired values. Therefore, LED lighting control through artificial intelligence is a more organic and efficient system than conventional LED lighting control [3].

II. CHARACTERISTICS OF LED LIGHTING

In the 20th century, advances in semiconductor technology produced devices that emit light from solid crystals. With the recent development of semiconductor light-emitting diodes (LEDs) that are sufficiently bright for lighting, a novel solid-state lighting technology has emerged.

This technology is rapidly penetrating the existing lighting device field by exploiting the unique light-emitting characteristics of LED light sources, and it may come to be widely used even for general fluorescent-type lighting in the future. The evolution of light sources for lighting is shown in Fig. 1 [4].

Figure 1. Brief history of lighting.

The main lighting characteristics of LED light sources are summarized as follows. Structurally, unlike conventional light sources, LEDs are small solid-state point light sources that use no glass envelopes, electrodes, filaments, or mercury (Hg); therefore, they are robust, long-lived, and environmentally friendly. Accordingly, lighting technology that uses LEDs is called semiconductor lighting technology because, unlike conventional lighting technologies, it uses a solid-structured light source. Optically, LEDs emit vivid monochromatic light with little light loss; visibility improves when they are applied to lighting fixtures requiring particular colors (or wavelengths), and as directional light sources they can greatly reduce luminaire losses. In addition, LEDs offer better dimming control than existing light sources and can therefore easily produce various colors [5].

Electrically, an LED is a DC-driven light source (AC driving is also possible owing to its diode characteristics) that starts to light above a specific voltage. After lighting, the current and light intensity change sensitively even with a small voltage change. In addition, because the rated voltage changes with the ambient temperature, constant-voltage driving adapts poorly to the environment; in principle, an LED is driven by a constant current source. Accordingly, to operate an LED lighting device safely, a dedicated power supply (ballast) suited to the characteristics of the LED lamp is required. Environmentally, when the temperature increases, the allowable current and light output decrease, a large quantity of heat is generated, and the dynamic characteristics change sensitively with ambient and operating temperatures. If a current exceeding the allowable current flows, the life is significantly reduced and the performance severely degraded; therefore, appropriate heat-dissipation technology is required in addition to a dedicated power supply.

In terms of electrical dynamic characteristics, an LED behaves uniquely: the electrical polarity must match owing to its diode characteristics, the current increases rapidly above a certain voltage, and the brightness is directly proportional to the current. The rated driving voltage of a single LED varies with the emission color (semiconductor type) and slightly with ambient temperature. Generally, LEDs operate at low voltages of 2-4 V [6].

In terms of thermal dynamic characteristics, the light output and efficiency of an LED increase as the temperature of the junction decreases, even if the current is constant; this is unlike conventional light sources (incandescent and fluorescent lamps). Conversely, the higher the temperature, the lower the light output and luminous efficiency; the heat generated at the junction must therefore be properly dissipated to maintain lighting performance.

An LED is a solid-state light-emitting device without filaments or bulbs. If an appropriate power supply and heat sink are used, it can maintain its lighting state for more than 100,000 h without burnout. Accordingly, LEDs are sometimes referred to as semi-permanent light sources. However, all light sources gradually lose light output over time, and this decline is barely noticeable to humans until the output falls to about 80% of its initial intensity. By this standard, the lifetime of an LED is currently estimated at approximately 40,000 to 50,000 h. Compared with the lifetimes of approximately 1,500 h for incandescent bulbs and 10,000 h for fluorescent lamps, LEDs can therefore be considered an extremely long-life light source.

Because an LED emits monochromatic light in a narrow wavelength band determined by the semiconductor type, excellent lighting performance and effective luminous efficiency can be expected when it is applied to lighting equipment requiring particular colors. For example, a traffic light using a 15 lm/W incandescent bulb behind a red filter with approximately 10% transmittance achieves a luminous efficiency of only about 1.5 lm/W, whereas an LED emits clear red light directly at more than 30 lm/W; thus, energy savings of more than 90% are possible compared with the bulb type. In addition, LED lighting is expected to reduce maintenance costs owing to its long life and to reduce traffic accidents owing to improved visibility. Major applications include emotional lighting, traffic lights, aviation obstruction lights, emergency exits, and lighting equipment that requires particular colors.
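The energy comparison in the traffic-light example above can be checked with a few lines of arithmetic. The efficacy and transmittance figures are the ones quoted in the text; the 300 lm target flux is an illustrative assumption:

```python
# incandescent traffic light: 15 lm/W source behind a red filter
# passing roughly 10 % of the light, as quoted in the text
incandescent_red_lm_per_w = 15 * 0.10      # ~1.5 lm/W of usable red light
led_red_lm_per_w = 30                      # a red LED emits red light directly

flux = 300  # assumed target red flux in lumens (illustrative value)
p_bulb = flux / incandescent_red_lm_per_w  # ~200 W
p_led = flux / led_red_lm_per_w            # ~10 W
savings = 1 - p_led / p_bulb               # ~0.95, i.e. "more than 90 %"
```

The savings ratio is independent of the assumed flux, since both power figures scale linearly with it.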

III. Neural Networks

Obtaining accurate mathematical models for most real-world dynamic systems is challenging because they are nonlinear and often time-varying with uncertain elements. Conventional control theories, which rely entirely on a mathematical model of the controlled plant, have therefore shown limitations in dealing with such systems.

Neural networks are a part of artificial intelligence involved in the creation of systems similar to humans that can analyze situations on their own by accumulating knowledge and experience through learning. Neural networks aim to simplify the brain's biological neurons and their associations as well as model them mathematically to mimic the intelligent actions of the brain. Neural networks are conceptually simple and learn by organizing their internal structures for a given input [7].

Studies on neural networks were first conducted by McCulloch and Pitts in 1943. They considered the human brain as a calculator composed of numerous nerve cells and presented a model that performs simple logical tasks, recognizing the significance of pattern classification for identifying intelligent human behavior. Hebb proposed the first learning rule for adjusting the weight between two neurons; this work had a significant influence on the study of adaptive neural networks. In 1957, Rosenblatt published a neural model called the "perceptron" that enabled the study of practical neural networks. After Minsky and Papert analyzed the perceptron model mathematically and showed that it could not solve simple nonlinear problems, such as the XOR function, research on neural networks stagnated for 20 years [8].

However, in the 1980s, neural network research was revived by Hopfield and others, and the error backpropagation algorithm, the most commonly used learning algorithm for neural networks, was established by Werbos and Parker. Neural networks that developed from this historical background are now widely applied to various fields, such as pattern recognition, voice recognition, control systems, medical diagnosis, and communication systems. This section examines the structures of neural networks, learning algorithms, and neural network models for pattern recognition and prediction, to establish how neural networks are used in this study [9-10].

A. Structure and Learning of Multi-Layered Neural Networks

The human brain is connected by numerous neurons. Artificial neural networks modeled on the brain therefore also take a multi-layered structure, and better performance is achieved by interconnecting neurons; in general, larger networks provide a larger computational capacity. The arrangement of neurons in layers mimics the layered structure of the brain. The neural network structure most commonly used in applications such as pattern recognition, system identification, and control is a multi-layered neural network trained with the error backpropagation algorithm. A typical multi-layered neural network is shown in Fig. 2 [11].

Figure 2. Multi-layer neural network.

Each circle in Fig. 2 is a neuron. This neural network consists of an input layer with input vector x and an output layer with output vector y; a layer between the input and output layers is referred to as a hidden layer.

In Fig. 2, O_i, O_j, and O_k are the outputs of the neurons of the input, hidden, and output layers, respectively; the weights between the input and hidden layers are denoted W_ji, whereas those between the hidden and output layers are denoted W_kj. Information is stored in the weights of the neural network, and the weight components W_ji and W_kj are continually replaced with new information during the learning process. The representative learning algorithm of this kind is the error backpropagation algorithm, a least-mean-square procedure that minimizes the error between the output of the final output-layer neurons and the desired output computed for each neuron in the network [12-13].

First, the error backpropagation learning algorithm sends the input x from the input layer to the hidden layer. Second, each neuron in the hidden layer sums the products of the input-layer outputs and their weights and sends the result, computed through an activation function, to the output layer. The output-layer neurons perform the same operations as the hidden-layer neurons. In general, the output value of the neural network differs from the desired target value; this difference is called the error. To minimize this error, the weights are adjusted using the partial derivative of the error with respect to the weights in each layer. That is, after calculating the error between the output layer and the desired target value, the weights are adjusted by the weight changes obtained by propagating the error backward from the output layer to the hidden layer; hence the name error backpropagation.

The error backpropagation learning algorithm is mathematically represented as follows.

First, the neuron outputs of the input, hidden, and output layers are as shown in (2), (4), and (6).

$$net_i = x_i, \quad i = 1, 2, \ldots, n \tag{1}$$
$$O_i = \lambda f(net_i) \tag{2}$$
$$net_j = \sum_i W_{ji} O_i \tag{3}$$
$$O_j = \lambda f(net_j) \tag{4}$$
$$net_k = \sum_j W_{kj} O_j \tag{5}$$
$$O_k = \lambda f(net_k) \tag{6}$$

where f is the activation function, net is the weighted sum of the outputs of the neurons in the previous layer, and λ is the slope of the activation function. To train the neural network, the error should be obtained, which is the difference between the output value of the neural network and the desired target value; this error is obtained as shown in (7).

$$E = \frac{1}{2}\sum_k \left(D_k - O_k\right)^2 \tag{7}$$

Because learning aims to minimize the error E by adjusting the weight, the weight is changed in a negative gradient direction to minimize errors. Therefore, the weight change may be obtained by partial differentiation of the weight direction vector with respect to the error in the negatively inclined direction. The weight change in each layer is expressed as follows:

$$\Delta W_{kj} = -\eta\,\frac{\partial E}{\partial W_{kj}}, \qquad 0 < \eta < 1 \tag{8}$$

where η is a constant representing the learning rate. In addition, (8) can be expressed using the chain rule as follows:

$$\frac{\partial E}{\partial W_{kj}} = \frac{\partial E}{\partial O_k}\,\frac{\partial O_k}{\partial net_k}\,\frac{\partial net_k}{\partial W_{kj}} = -\left(D_k - O_k\right)\lambda f'(net_k)\,O_j \tag{9}$$

If the activation function is linear, the weight change is expressed as follows:

$$\Delta W_{kj} = \eta\left(D_k - O_k\right)O_j \tag{10}$$

The weight change for the hidden layer is likewise made in the negative gradient direction to minimize the error:

$$\Delta W_{ji} = -\eta\,\frac{\partial E}{\partial W_{ji}}, \qquad 0 < \eta < 1 \tag{11}$$

Using the chain rule, (11) can be written as follows:

$$\frac{\partial E}{\partial W_{ji}} = \sum_k \frac{\partial E}{\partial O_k}\,\frac{\partial O_k}{\partial net_k}\,\frac{\partial net_k}{\partial O_j}\,\frac{\partial O_j}{\partial net_j}\,\frac{\partial net_j}{\partial W_{ji}} = -\sum_k \left(D_k - O_k\right)\lambda f'(net_k)\,W_{kj}\,\lambda f'(net_j)\,O_i \tag{12}$$

Using (12), the amount of change in the weight between the input and hidden layers is expressed as follows:

$$\Delta W_{ji} = \eta \sum_k \left(D_k - O_k\right) W_{kj}\,O_i \tag{13}$$

Therefore, the changes in weight are as follows.

$$W_{ji} \leftarrow W_{ji} + \Delta W_{ji}, \qquad W_{kj} \leftarrow W_{kj} + \Delta W_{kj} \tag{14}$$
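As an illustration only (not the authors' implementation), the forward pass and the weight-update rules derived above can be sketched in NumPy for a 3-10-3 network, using a linear activation (the slope λ absorbed into the learning rate) and initial weights drawn from [−0.5, 0.5]; the training pair is a made-up example:

```python
import numpy as np

rng = np.random.default_rng(0)

# 3 input, 10 hidden, 3 output neurons (i = 3, j = 10, k = 3);
# initial weights are small random values in [-0.5, 0.5]
W_ji = rng.uniform(-0.5, 0.5, size=(10, 3))   # input  -> hidden weights
W_kj = rng.uniform(-0.5, 0.5, size=(3, 10))   # hidden -> output weights
eta = 0.01                                    # learning rate, 0 < eta < 1

def forward(x):
    """Forward pass with a linear activation: O_j and O_k."""
    o_j = W_ji @ x
    o_k = W_kj @ o_j
    return o_j, o_k

def train_step(x, d):
    """One error-backpropagation update; returns the error E."""
    global W_ji, W_kj
    o_j, o_k = forward(x)
    err = d - o_k                              # (D_k - O_k)
    d_Wkj = eta * np.outer(err, o_j)           # output-layer weight change
    d_Wji = eta * np.outer(W_kj.T @ err, x)    # hidden-layer weight change
    W_kj = W_kj + d_Wkj                        # apply both updates
    W_ji = W_ji + d_Wji
    return 0.5 * float(np.sum(err ** 2))       # error E

# made-up learning pair: (illuminance, temperature, humidity) -> (R, G, B)
x = np.array([0.25, 0.25, 0.5])
d = np.array([1.0, 0.0, 0.0])
errors = [train_step(x, d) for _ in range(200)]
assert errors[-1] < errors[0]   # E decreases as the weights are learned
```

Repeating the update drives E of (7) toward zero for this single training pair; with a full learning data set, the steps are applied over all pairs.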

As explained above, the error backpropagation algorithm requires desired response values to calculate the error signal and adjust the weights of the neural network. After the initial learning, the neural network is presented with a new set of data that was not used for learning.

The accuracy of the network on data other than the learned data set indicates the neural network's ability to generalize, which determines its reliability. After the learning and testing steps, neural networks can be used to model pattern classifiers, unknown nonlinear functions, and complex processes.

When training a neural network, the initial weights are set to small random values; because this initialization influences the final output, the initial weights typically take values between −0.5 and 0.5. In addition, the convergence of the error backpropagation algorithm may vary depending on the learning rate. The learning rate is selected differently depending on the structure and application of the neural network, and no fixed criterion exists (the value generally ranges from 0 to 1). If a large learning rate is used, overshoot may occur; if a small learning rate is used, learning slows down. The rate is therefore selected appropriately within the aforementioned range.

This study uses three input variables and three output variables in the multi-layer neural network structure of Fig. 2. The network was composed of three input, 10 hidden, and three output neurons, that is, i = 3, j = 10, and k = 3. A computer simulation was conducted in which the desired output R, G, and B values were designed according to the input values of illuminance, temperature, and humidity, and the R, G, and B output values were combined to express the LED color.

IV. DESIGN AND IMPLEMENTATION OF A MULTIFUNCTION CONTROLLER

The internal configuration of the LED lighting control system for landscape lighting comprises a power supply unit, an AVR control unit, a CLCD output unit, and an LED control unit.

A. Power Supply Unit

A switched-mode power supply (SMPS), a device that chops a DC voltage into a square wave using semiconductor devices such as power transistors and then outputs a smoothed DC voltage through a filter, was used to supply power to the LED lighting control board. The maximum voltage used on the control board was 12 V DC; thus, an AC/DC converter was used to convert 220 V AC to 12 V DC.

B. AVR Control Unit

The MCU of this system was the ATmega128, an 8-bit RISC microcontroller from Atmel.

The AVR control unit receives the ADC values of the illuminance, temperature, and humidity sensors and controls the color of each RGB LED module. Color control of the RGB LED driver was performed through the timer/counter function, which was used to create a pulse-width modulation (PWM) output that adjusts the luminance ratio of red, green, and blue. A 16-bit timer/counter was used. Among its modes, such as fast PWM and CTC, the CTC mode was chosen: while counting, the CTC mode continuously compares the count with the OCR value and outputs a match signal when they are equal, causing the waveform generator to output a pulse waveform.
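The luminance ratio of each channel is set by the fraction of the timer period for which that channel is on. A minimal sketch of this duty-cycle mapping is shown below; the 16-bit period value `TOP`, the 8-bit RGB scale, and the helper name are illustrative assumptions, not the firmware's actual register settings:

```python
TOP = 0xFFFF  # assumed 16-bit timer period (counts per PWM cycle)

def rgb_to_compare(r, g, b):
    """Map 8-bit R, G, B levels to timer compare values.

    Assumes the channel output is high while the count is below the
    compare value, so a larger compare value means a brighter channel.
    """
    def scale(level):
        if not 0 <= level <= 255:
            raise ValueError("channel level must be 0-255")
        return level * TOP // 255    # proportional duty cycle

    return scale(r), scale(g), scale(b)

# full red, half-brightness green, blue off
ocr_r, ocr_g, ocr_b = rgb_to_compare(255, 128, 0)
assert ocr_r == TOP and ocr_b == 0
```

The same proportional mapping applies whatever the timer resolution; only `TOP` changes.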

C. CLCD Output Section Unit

The LCD used was a typical 16 × 4 character LCD (CLCD). It was implemented such that the PWM values of red, green, and blue currently being output can be checked in real time through the CLCD.

D. LED control unit

An RGB LED module was used to display the computed value as an RGB color. Because the output from the MCU was 5 V DC (the TTL voltage level), it could not drive the RGB LED module directly; the module's 12 V DC driving voltage was supplied directly by the SMPS (power supply) unit. The high and low outputs from the MCU were connected to the gate of a MOSFET, which switched the RGB LED module on and off, and the color was expressed by applying power to the module according to the signal at the gate.

The RGB LED module had four lines, R, G, B, and COM; MOSFETs were connected to the three lines other than COM to control the output color.

A large-capacity general-purpose multi-function controller for landscape lighting was constructed, as shown in Fig. 3. It provides scenario actions such as "switching 16-color panorama program," "16-color program with dimming," and "16-color program with 3 flashes." Each scenario can be turned on or off with its selection switch, and the operation speed is adjusted with a separate speed-control switch from the slowest (0) to the fastest (9); the speed is displayed numerically on a 7-segment display.

Figure 3. Multi-function controller.

In the discussion after the experiment, we confirmed that the sightseeing multi-function controller designed and manufactured as shown in Fig. 3 operated correctly: all configured scenarios worked well.

Using the multi-function controller and neural networks, the actual smart landscape lighting controller was configured with 25 types of scenarios and 16 color expressions. The simulation results are listed in Tables 1, 2, and 3. In addition, the input voltage of the controller was designed for both 12 and 24 V DC.

Table 1. R, G, and B output values according to the illuminance, temperature, and humidity inputs (learning data).

Illuminance (lux) Temperature (°C) Humidity (%) R, G, B (OUT) LED Color
No. 1 10 10 30 R=255, G=0, B=0 Red
No. 2 10 10 50 R=224, G=255, B=255 Light Cyan
No. 3 10 10 70 R=255, G=165, B=0 Orange
No. 4 10 25 30 R=173, G=255, B=47 Green Yellow
No. 5 10 25 50 R=255, G=255, B=0 Yellow
No. 6 10 25 70 R=30, G=144, B=255 Dodger Blue
No. 7 10 40 30 R=0, G=255, B=255 Cyan
No. 8 10 40 50 R=224, G=255, B=255 Light Cyan
No. 9 10 40 70 R=238, G=130, B=238 Violet
No. 10 25 10 30 R=255, G=182, B=193 Light Pink
No. 11 25 10 50 R=128, G=0, B=128 Purple
No. 12 25 10 70 R=255, G=182, B=193 Light Pink
No. 13 25 25 30 R=255, G=255, B=0 Yellow
No. 14 25 25 50 R=0, G=255, B=0 Green
No. 15 25 25 70 R=30, G=144, B=255 Dodger Blue
No. 16 25 40 30 R=255, G=0, B=255 Magenta
No. 17 25 40 50 R=238, G=130, B=238 Violet
No. 18 25 40 70 R=173, G=255, B=47 Green Yellow
No. 19 40 10 30 R=255, G=255, B=0 Yellow
No. 20 40 10 50 R=255, G=165, B=0 Orange
No. 21 40 10 70 R=0, G=255, B=255 Cyan
No. 22 40 25 30 R=173, G=255, B=47 Green Yellow
No. 23 40 25 50 R=255, G=182, B=193 Light Pink
No. 24 40 25 70 R=128, G=0, B=128 Purple
No. 25 40 40 30 R=0, G=255, B=255 Cyan
No. 26 40 40 50 R=255, G=0, B=255 Magenta
No. 27 40 40 70 R=0, G=0, B=255 Blue


Table 2. Comparison of crisp and neural output results under the same conditions.

Illuminance (lux) Temperature (°C) Humidity (%) Crisp Output Neural Output LED Color
No.1 10 10 30 R=255, G=0, B=0 R=255, G=0, B=0 Red
No.2 10 10 50 R=224, G=255, B=255 R=224, G=255, B=255 Light Cyan
No.3 10 10 70 R=255, G=165, B=0 R=255, G=165, B=0 Orange
No.4 10 25 30 R=173, G=255, B=47 R=173, G=255, B=47 Green Yellow
No.5 10 25 50 R=255, G=255, B=0 R=255, G=255, B=0 Yellow
No.6 10 25 70 R=30, G=144, B=255 R=30, G=144, B=255 Dodger Blue
No.7 10 40 30 R=0, G=255, B=255 R=0, G=255, B=255 Cyan
No.8 10 40 50 R=224, G=255, B=255 R=224, G=255, B=255 Light Cyan
No.9 10 40 70 R=238, G=130, B=238 R=238, G=130, B=238 Violet
No.10 25 10 30 R=255, G=182, B=193 R=255, G=182, B=193 Light Pink
No.11 25 10 50 R=128, G=0, B=128 R=128, G=0, B=128 Purple
No.12 25 10 70 R=255, G=182, B=193 R=255, G=182, B=193 Light Pink
No.13 25 25 30 R=255, G=255, B=0 R=255, G=255, B=0 Yellow
No.14 25 25 50 R=0, G=255, B=0 R=0, G=255, B=0 Green
No.15 25 25 70 R=30, G=144, B=255 R=30, G=144, B=255 Dodger Blue
No.16 25 40 30 R=255, G=0, B=255 R=255, G=0, B=255 Magenta
No.17 25 40 50 R=238, G=130, B=238 R=238, G=130, B=238 Violet
No.18 25 40 70 R=173, G=255, B=47 R=173, G=255, B=47 Green Yellow
No.19 40 10 30 R=255, G=255, B=0 R=255, G=255, B=0 Yellow
No.20 40 10 50 R=255, G=165, B=0 R=255, G=165, B=0 Orange
No.21 40 10 70 R=0, G=255, B=255 R=0, G=255, B=255 Cyan
No.22 40 25 30 R=173, G=255, B=47 R=173, G=255, B=47 Green Yellow
No.23 40 25 50 R=255, G=182, B=193 R=255, G=182, B=193 Light Pink
No.24 40 25 70 R=128, G=0, B=128 R=128, G=0, B=128 Purple
No.25 40 40 30 R=0, G=255, B=255 R=0, G=255, B=255 Cyan
No.26 40 40 50 R=255, G=0, B=255 R=255, G=0, B=255 Magenta
No.27 40 40 70 R=0, G=0, B=255 R=0, G=0, B=255 Blue


V. COMPUTER SIMULATION

In this study, three input and three output variables were used in the structure of the multi-layer neural network, as shown in Fig. 2. The neurons were composed of three input, 10 hidden, and three output layer neurons, and were designed as i = 3, j = 10, and k = 3. We conducted a computer simulation designed to output the desired R (red), G (green), and B (blue) values according to the input values of illuminance, temperature, and humidity. The R (red), G (green), and B (blue) values of the output were combined to express the LED colors.

The desired R (red), G (green), and B (blue) values corresponding to the illuminance, temperature, and humidity input values of the designed neurocontroller are listed in Table 1; these were used as the learning data for the neurocontroller.

The crisp and neuro-control output values under the same conditions are listed in Table 2. As shown in the table, the crisp and neuro-control outputs are identical under the same input conditions.

The values derived from the simulation of the neurocontroller are listed in Table 3. As shown in Table 3, the neurocontroller produces an appropriate output even for input values other than the predetermined ones, because it generalizes from the learned data. In contrast, crisp control cannot produce an output when the illuminance, temperature, and humidity take ambiguous values other than those listed in Table 1.
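The difference can be sketched as follows: crisp control behaves like a lookup table that has no entry for unseen conditions, whereas the neurocontroller generalizes from the learned rows. The two-row "table" below and the least-squares fit standing in for neural learning are illustrative assumptions only:

```python
import numpy as np

# crisp control: an explicit lookup table (two rows of Table 1)
crisp = {
    (10, 10, 30): (255, 0, 0),       # Red
    (10, 10, 50): (224, 255, 255),   # Light Cyan
}
assert crisp.get((10, 10, 42)) is None   # unseen input -> no output at all

# "neuro" control: fit a model to the same rows, then query the unseen input
X = np.array([[10, 10, 30], [10, 10, 50]], dtype=float)   # inputs
Y = np.array([[255, 0, 0], [224, 255, 255]], dtype=float) # desired R, G, B
W, *_ = np.linalg.lstsq(X, Y, rcond=None)  # least-squares stand-in for training

rgb = np.clip(np.array([10, 10, 42]) @ W, 0, 255)  # interpolated color
assert rgb.shape == (3,)   # an in-range R, G, B triple is always produced
```

A trained multi-layer network plays the role of `W` here; the point is that any input, learned or not, yields a usable R, G, B output.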

Table 3. Neural output results according to arbitrary input values of illuminance, temperature, and humidity.

Illuminance (lux) Temperature (°C) Humidity (%) Neural Output LED Color
No.1 38 20 62 R=57, G=238, B=225 Cyan
No.2 17 10 42 R=125, G=21, B=165 Purple
No.3 12 16 38 R=133, G=19, B=125 Purple
No.4 29 13 36 R=235, G=124, B=225 Violet
No.5 27 28 33 R=233, G=169, B=143 Light Pink
No.6 30 29 69 R=32, G=175, B=225 Dodger Blue
No.7 18 15 44 R=225, G=211, B=36 Yellow
No.8 20 12 42 R=147, G=225, B=25 Green Yellow
No.9 28 23 35 R=251, G=66, B=253 Magenta
No.10 15 17 43 R=225, G=214, B=25 Yellow
No.11 30 15 35 R=240, G=134, B=205 Violet
No.12 25 18 44 R=107, G=230, B=25 Green Yellow
No.13 31 13 37 R=226, G=147, B=188 Light Pink
No.14 38 22 36 R=223, G=169, B=143 Light Pink
No.15 24 14 40 R=226, G=205, B=15 Orange
No.16 10 25 43 R=125, G=200, B=20 Green Yellow
No.17 36 26 64 R=85, G=208, B=225 Cyan
No.18 38 25 65 R=89, G=212, B=225 Cyan
No.19 36 25 66 R=85, G=208, B=225 Cyan
No.20 19 20 41 R=216, G=225, B=25 Yellow
No.22 40 27 70 R=57, G=149, B=225 Dodger Blue
No.23 10 22 42 R=222, G=213, B=25 Yellow
No.24 18 11 35 R=216, G=147, B=188 Light Pink
No.25 20 13 44 R=212, G=218, B=25 Yellow
No.26 35 22 63 R=88, G=215, B=220 Cyan
No.27 16 21 40 R=206, G=225, B=25 Yellow
No.28 40 19 35 R=216, G=147, B=158 Light Pink
No.29 17 10 42 R=137, G=225, B=15 Green Yellow
No.30 39 20 34 R=241, G=56, B=233 Magenta


The output of the neurocontroller is determined by learning from various input variables and outputs. Even for the same input value, the output may vary depending on the output data of the experts and designers from whom it learned. Unlike crisp logic, storing a large amount of input data is unnecessary; the controller has the advantage of being driven to a desired value simply by learning from learning data. Owing to these characteristics, LED lighting control through the neuro-control system is a more organic and efficient system than general LED lighting control.

VI. Conclusion

In this study, several scenarios required for landscape lighting were constructed, and a large-capacity general-purpose multi-functional controller was designed and implemented to validate their operation.

The hardware of the control system comprised AVR control, LED module output, LED control, scenario selection switch, and operating-speed display units, and was manufactured as a 13-channel device. The CPU was an ATmega128, and FETs were used to control the current signals. To operate the CPU, 12 V DC was converted into 5 V DC using a 7805 regulator.

In addition, a computer simulation was conducted by designing a control system to represent the most appropriate color according to the input values of temperature, illuminance, and humidity using the neuro-control system. From the results and output colors of the neuro-control, and unlike the existing crisp logic, neuro-control does not require the storage of many data inputs because of the characteristics of artificial intelligence, and it can control the desired value by learning with learning data. Therefore, LED lighting control through the neuro-control system is a more organic and efficient system than general LED lighting control.

In future studies, the use of the low-power MSP432 MCU from Texas Instruments, which is more advanced than the ATmega128, is expected to yield better performance together with emerging neural network machine learning technology.

ACKNOWLEDGMENTS

We would like to thank Editage (www.editage.co.kr) for English language editing.

Figure 1. Brief history of lighting.
Journal of Information and Communication Convergence Engineering 2023; 21: 45-53. https://doi.org/10.56977/jicce.2023.21.1.45

Figure 2. Multi-layer neural network.

Figure 3. Multi-function controller.

Table 1. R, G, and B output values according to the illuminance, temperature, and humidity inputs (learning data).

Illuminance (lux) Temperature (°C) Humidity (%) R, G, B (OUT) LED Color
No. 1 10 10 30 R=255, G=0, B=0 Red
No. 2 10 10 50 R=224, G=255, B=255 Light Cyan
No. 3 10 10 70 R=255, G=165, B=0 Orange
No. 4 10 25 30 R=173, G=255, B=47 Green Yellow
No. 5 10 25 50 R=255, G=255, B=0 Yellow
No. 6 10 25 70 R=30, G=144, B=255 Dodger Blue
No. 7 10 40 30 R=0, G=255, B=255 Cyan
No. 8 10 40 50 R=224, G=255, B=255 Light Cyan
No. 9 10 40 70 R=238, G=130, B=238 Violet
No. 10 25 10 30 R=255, G=182, B=193 Light Pink
No. 11 25 10 50 R=128, G=0, B=128 Purple
No. 12 25 10 70 R=255, G=182, B=193 Light Pink
No. 13 25 25 30 R=255, G=255, B=0 Yellow
No. 14 25 25 50 R=0, G=255, B=0 Green
No. 15 25 25 70 R=30, G=144, B=255 Dodger Blue
No. 16 25 40 30 R=255, G=0, B=255 Magenta
No. 17 25 40 50 R=238, G=130, B=238 Violet
No. 18 25 40 70 R=173, G=255, B=47 Green Yellow
No. 19 40 10 30 R=255, G=255, B=0 Yellow
No. 20 40 10 50 R=255, G=165, B=0 Orange
No. 21 40 10 70 R=0, G=255, B=255 Cyan
No. 22 40 25 30 R=173, G=255, B=47 Green Yellow
No. 23 40 25 50 R=255, G=182, B=193 Light Pink
No. 24 40 25 70 R=128, G=0, B=128 Purple
No. 25 40 40 30 R=0, G=255, B=255 Cyan
No. 26 40 40 50 R=255, G=0, B=255 Magenta
No. 27 40 40 70 R=0, G=0, B=255 Blue

Table 2. Comparison of crisp and neural output results under the same conditions.

Illuminance (lux) Temperature (°C) Humidity (%) Crisp Output Neural Output LED Color
No.1 10 10 30 R=255, G=0, B=0 R=255, G=0, B=0 Red
No.2 10 10 50 R=224, G=255, B=255 R=224, G=255, B=255 Light Cyan
No.3 10 10 70 R=255, G=165, B=0 R=255, G=165, B=0 Orange
No.4 10 25 30 R=173, G=255, B=47 R=173, G=255, B=47 Green Yellow
No.5 10 25 50 R=255, G=255, B=0 R=255, G=255, B=0 Yellow
No.6 10 25 70 R=30, G=144, B=255 R=30, G=144, B=255 Dodger Blue
No.7 10 40 30 R=0, G=255, B=255 R=0, G=255, B=255 Cyan
No.8 10 40 50 R=224, G=255, B=255 R=224, G=255, B=255 Light Cyan
No.9 10 40 70 R=238, G=130, B=238 R=238, G=130, B=238 Violet
No.10 25 10 30 R=255, G=182, B=193 R=255, G=182, B=193 Light Pink
No.11 25 10 50 R=128, G=0, B=128 R=128, G=0, B=128 Purple
No.12 25 10 70 R=255, G=182, B=193 R=255, G=182, B=193 Light Pink
No.13 25 25 30 R=255, G=255, B=0 R=255, G=255, B=0 Yellow
No.14 25 25 50 R=0, G=255, B=0 R=0, G=255, B=0 Green
No.15 25 25 70 R=30, G=144, B=255 R=30, G=144, B=255 Dodger Blue
No.16 25 40 30 R=255, G=0, B=255 R=255, G=0, B=255 Magenta
No.17 25 40 50 R=238, G=130, B=238 R=238, G=130, B=238 Violet
No.18 25 40 70 R=173, G=255, B=47 R=173, G=255, B=47 Green Yellow
No.19 40 10 30 R=255, G=255, B=0 R=255, G=255, B=0 Yellow
No.20 40 10 50 R=255, G=165, B=0 R=255, G=165, B=0 Orange
No.21 40 10 70 R=0, G=255, B=255 R=0, G=255, B=255 Cyan
No.22 40 25 30 R=173, G=255, B=47 R=173, G=255, B=47 Green Yellow
No.23 40 25 50 R=255, G=182, B=193 R=255, G=182, B=193 Light Pink
No.24 40 25 70 R=128, G=0, B=128 R=128, G=0, B=128 Purple
No.25 40 40 30 R=0, G=255, B=255 R=0, G=255, B=255 Cyan
No.26 40 40 50 R=255, G=0, B=255 R=255, G=0, B=255 Magenta
No.27 40 40 70 R=0, G=0, B=255 R=0, G=0, B=255 Blue

Table 3. Neural output results for arbitrary input values of illuminance, temperature, and humidity.

Illuminance (lux) Temperature (°C) Humidity (%) Neural Output LED Color
No.1 38 20 62 R=57, G=238, B=225 Cyan
No.2 17 10 42 R=125, G=21, B=165 Purple
No.3 12 16 38 R=133, G=19, B=125 Purple
No.4 29 13 36 R=235, G=124, B=225 Violet
No.5 27 28 33 R=233, G=169, B=143 Light Pink
No.6 30 29 69 R=32, G=175, B=225 Dodger Blue
No.7 18 15 44 R=225, G=211, B=36 Yellow
No.8 20 12 42 R=147, G=225, B=25 Green Yellow
No.9 28 23 35 R=251, G=66, B=253 Magenta
No.10 15 17 43 R=225, G=214, B=25 Yellow
No.11 30 15 35 R=240, G=134, B=205 Violet
No.12 25 18 44 R=107, G=230, B=25 Green Yellow
No.13 31 13 37 R=226, G=147, B=188 Light Pink
No.14 38 22 36 R=223, G=169, B=143 Light Pink
No.15 24 14 40 R=226, G=205, B=15 Orange
No.16 10 25 43 R=125, G=200, B=20 Green Yellow
No.17 36 26 64 R=85, G=208, B=225 Cyan
No.18 38 25 65 R=89, G=212, B=225 Cyan
No.19 36 25 66 R=85, G=208, B=225 Cyan
No.20 19 20 41 R=216, G=225, B=25 Yellow
No.22 40 27 70 R=57, G=149, B=225 Dodger Blue
No.23 10 22 42 R=222, G=213, B=25 Yellow
No.24 18 11 35 R=216, G=147, B=188 Light Pink
No.25 20 13 44 R=212, G=218, B=25 Yellow
No.26 35 22 63 R=88, G=215, B=220 Cyan
No.27 16 21 40 R=206, G=225, B=25 Yellow
No.28 40 19 35 R=216, G=147, B=158 Light Pink
No.29 17 10 42 R=137, G=225, B=15 Green Yellow
No.30 39 20 34 R=241, G=56, B=233 Magenta
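
The paper does not state how the "LED Color" labels in the table above were assigned to the continuous neural outputs; a plausible sketch is nearest-neighbor matching against the named reference colors used in the learning data (the palette values below are the R, G, B triples from Table 1):

```python
# Named reference colors taken from the learning-data table.
PALETTE = {
    "Red": (255, 0, 0), "Light Cyan": (224, 255, 255), "Orange": (255, 165, 0),
    "Green Yellow": (173, 255, 47), "Yellow": (255, 255, 0),
    "Dodger Blue": (30, 144, 255), "Cyan": (0, 255, 255),
    "Violet": (238, 130, 238), "Light Pink": (255, 182, 193),
    "Purple": (128, 0, 128), "Green": (0, 255, 0),
    "Magenta": (255, 0, 255), "Blue": (0, 0, 255),
}

def nearest_color(rgb):
    """Return the palette name closest to rgb in Euclidean RGB distance."""
    return min(PALETTE,
               key=lambda name: sum((a - b) ** 2
                                    for a, b in zip(rgb, PALETTE[name])))

print(nearest_color((57, 238, 225)))  # prints "Cyan", matching table row No. 1
```

Under this assumed rule the labels in the table are reproduced; for example, the neural output (125, 21, 165) of row No. 2 falls nearest the Purple reference triple.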
