Journal of information and communication convergence engineering 2022; 20(3): 212-218

Published online September 30, 2022

https://doi.org/10.56977/jicce.2022.20.3.212

© Korea Institute of Information and Communication Engineering

## A 4K-Capable Hardware Accelerator of Haze Removal Algorithm using Haze-relevant Features

Seungmin Lee and Bongsoon Kang* , Member, KIICE

Department of Electronics Engineering, Dong-A University, Busan 49315, Korea

Correspondence to : *Bongsoon Kang (E-mail: bongsoon@dau.ac.kr, Tel: +82-51-200-7703)
Department of Electronics Engineering, Dong-A University, Busan 49315, Korea

Received: January 3, 2022; Revised: January 3, 2022; Accepted: August 17, 2022

This is an Open Access article distributed under the terms of the Creative Commons Attribution Non-Commercial License (http://creativecommons.org/licenses/by-nc/3.0/) which permits unrestricted non-commercial use, distribution, and reproduction in any medium, provided the original work is properly cited.

### Abstract

The performance of vision-based intelligent systems, such as self-driving cars and unmanned aerial vehicles, is subject to weather conditions, notably the frequently encountered haze or fog. As a result, studies on haze removal have garnered increasing interest from academia and industry. This paper presents a 4K-capable hardware implementation of an efficient haze removal algorithm with the following two improvements. First, the depth-dependent haze distribution is predicted using a linear model of four haze-relevant features, where the model parameters are obtained through maximum likelihood estimates. Second, the approximated quad-decomposition method is adopted to estimate the atmospheric light. Extensive experimental results then follow to verify the efficacy of the proposed algorithm against well-known benchmark methods. For real-time processing, this paper also presents a pipelined architecture comprising customized macros, such as split multipliers, parallel dividers, and serial dividers. The implementation results demonstrated that the proposed hardware design can handle DCI 4K videos at 30.8 frames per second.

Keywords: Field-programmable gate array, Hardware accelerator, Haze removal, Real-time processing

### I. INTRODUCTION

The industrial structure has been changing dramatically due to the Fourth Industrial Revolution (or Industry 4.0), which dominates the mass surveillance and autonomous driving industries. Vision-based intelligent systems, such as self-driving cars and unmanned aerial vehicles, are being rapidly developed. These life-critical systems adopt high-level object recognition algorithms to sense their environment and operate without human involvement. However, as the performance of these algorithms is subject to weather conditions, poor visibility resulting from adverse weather can trigger a cascading failure that may lead to unfortunate consequences. Therefore, studies on visibility restoration are essential for autonomous vehicles. In this research direction, haze removal (or, equivalently, image dehazing) has garnered growing interest from researchers because haze is seemingly the most frequently encountered weather in practice. In this context, haze refers to the suspended aerosols in the atmosphere. Collisions between these aerosols and light photons cause the atmospheric scattering phenomenon, reducing the visibility of captured scenes and rendering haze removal research relevant to visibility restoration.

Haze removal algorithms are generally based on the simplified Koschmieder model [1], which describes hazy image formation as follows:

$I(x) = J(x)t(x) + A(1 - t(x)), \qquad (1)$

where I represents the input image, J the scene radiance, t the transmission map, A the atmospheric light, and x the pixel coordinates. Assuming that H and W are the image height and width, respectively, I, J, and A take on values in $\mathbb{R}^{H \times W \times 3}$, whereas $t \in \mathbb{R}^{H \times W}$. According to (1), recovering J is an ill-posed problem because I is the only observation. Thus, early attempts in haze removal solved this problem by using multiple input images. However, as it is burdensome to acquire such input data, researchers have shifted their interest to single-image haze removal.
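As a minimal numerical illustration of (1), the following NumPy sketch synthesizes a hazy image from a known scene radiance and then inverts the model to recover it. The function and variable names are ours, and the clipping threshold `t_min` is a common practical safeguard rather than part of the model.

```python
import numpy as np

def recover_radiance(I, t, A, t_min=0.1):
    """Invert the simplified Koschmieder model (1),
    I(x) = J(x)t(x) + A(1 - t(x)),  so  J(x) = (I(x) - A) / t(x) + A.
    t is clipped at t_min to keep the division well-behaved."""
    t = np.clip(t, t_min, 1.0)[..., np.newaxis]  # broadcast over 3 channels
    return (I - A) / t + A

# Round trip: synthesize a hazy image from known J, t, and A, then recover J.
rng = np.random.default_rng(0)
J = rng.uniform(0.0, 1.0, (4, 4, 3))   # scene radiance
t = np.full((4, 4), 0.5)               # transmission map
A = 0.9                                # atmospheric light (gray, for brevity)
I = J * t[..., np.newaxis] + A * (1.0 - t[..., np.newaxis])
J_hat = recover_radiance(I, t, A)
```

Because t is known exactly here, the round trip is lossless; in practice, the difficulty lies entirely in estimating t and A from I alone.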

According to a recent systematic review [2], this haze removal category can be further partitioned into three subcategories: image processing, machine learning, and deep learning. Concerning the first, the dark channel prior (DCP) proposed by He et al. [3] is typical. The DCP states that outdoor non-sky images exhibit an extremely dark channel, whose intensity approximates zero in local patches around all pixels. They then adopted computationally intensive soft matting to refine the transmission estimate. This method demonstrated good performance in general, but it substantially prolonged the execution time due to the inherent problem in soft matting. Also, it is subject to color distortion when the input image contains a broad sky or shady objects. These limitations left considerable room for improvement, and many follow-up studies have been proposed. For example, Kim et al. [4] reduced the computational complexity by using the modified hybrid median filter, equipped with excellent edge-preserving characteristics, to eliminate the refinement step. This elimination then favored a fast and efficient hardware implementation [4,5].

In the second subcategory, a typical work is the color attenuation prior (CAP) proposed by Zhu et al. [6]. The CAP was also discovered through extensive observations of outdoor images. It states that the scene depth is closely correlated with the difference between the saturation and the value. Zhu et al. [6] modeled this correlation using a linear model, whose parameters were estimated via maximum likelihood estimation (MLE). The CAP provides fast and effective haze removal, albeit with color distortion and background noise. In a follow-up study, Ngo et al. [7] addressed these two problems using adaptive weighting and low-pass filtering.

Finally, deep-learning techniques, such as convolutional neural networks (CNNs) and generative adversarial networks (GANs), have also found their applications in haze removal. The pioneering work of Cai et al. [8] can be taken as a prime example. They proposed a well-performing three-layer CNN for estimating the transmission map from a single input image. In subsequent work, Li et al. [9] employed serial multiscale mapping to design a CNN that estimates and refines the transmission map from coarse to fine scales. Although deep-learning-based haze removal methods generally deliver satisfactory performance, they are subject to the domain-shift problem.

This paper presents a machine-learning-based method that improves the CAP by considering two new haze-relevant features in addition to the saturation and value. More precisely, we estimate the scene depth as a linear combination of local entropy, dark channel, saturation, and value. We then present a comparative evaluation with other state-of-the-art benchmark methods to verify the efficacy of the proposed haze removal algorithm. Furthermore, we demonstrate that the software implementation per se cannot satisfy real-time processing requirements. Consequently, we design a 4K-capable hardware accelerator that can handle 4K videos at 30.8 frames per second (fps).

The rest of this paper is structured as follows. Section II explores the haze-relevant features and describes the proposed algorithm in detail. Section III presents the comparative evaluation with benchmark algorithms, and Section IV demonstrates the necessity of a hardware accelerator for real-time processing. After that, Section V provides a detailed description of the proposed hardware design and interprets the implementation results. Finally, Section VI concludes the paper.

### II. PROPOSED ALGORITHM

### A. Haze-relevant Features

Under the single-image dehazing approach, most algorithms estimate the transmission map in two major steps: feature extraction and regression. On the one hand, these two are easily noticeable in image-processing and machine-learning-based methods. For example, He et al. [3] calculated the normalized dark channel (feature extraction) and subtracted it from unity (regression) to estimate the transmission map. On the other hand, deep-learning-based methods usually introduce multiscale mapping between these two steps to improve robustness against spatial variance in the input image. This observation demonstrates the fundamental importance of haze-relevant features in haze removal. Recently, Ngo et al. [10] explored and summarized the haze-relevant features hitherto reported in the literature. In addition, they verified the correlation between those features and the haze distribution using representative hazy and haze-free image patches extracted from well-publicized datasets. Some of the verification results, corresponding to the saturation, value, dark channel, and local entropy, are illustrated in Fig. 1, where Figs. 1(c) and (d) are adopted from [10]. The normalized histograms demonstrate that the feature values follow the normal distribution, where the means of the hazy and haze-free distributions are well separated. Also, based on the degree of overlap, it is observed that the dark channel exhibits the strongest correlation with the haze distribution, followed by the saturation, value, and local entropy.

Fig. 1. Normalized histograms of four haze-relevant features: (a) saturation, (b) value, (c) dark channel, and (d) local entropy.

Inspired by the work of Zhu et al. [6], we also utilize a linear model to estimate the transmission map from the saturation, value, dark channel, and local entropy. The reason for using two additional features comes from observing the normalized histograms in Fig. 1. It is conspicuous that each feature correlates with the haze distribution in a different way. In addition, no feature exhibits a perfect correlation; each of the saturation, value, dark channel, and local entropy fails to represent the haze distribution under particular circumstances. The breakdown of the dark channel in sky regions or around shady objects is a prime example. Therefore, using multiple features allows mutual compensation for their failures. The sky region is haze-free in the previous example, but its high intensities in all channels result in high dark channel values. Based on the dark channel alone, the sky region is misclassified as densely hazy instead of haze-free. However, as this region is also textureless, its haze condition can be recognized using the local entropy. This example demonstrates that the local entropy can compensate for the failure of the dark channel in the sky region.
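For concreteness, the four features could be computed as in the following NumPy sketch. This is an illustrative software model, not the hardware datapath; the 5 × 5 window matches the filtering size mentioned later in the hardware description, while the 16-bin histogram for the local entropy is our own assumption.

```python
import numpy as np

def saturation_value(rgb):
    """HSV saturation and value of an RGB image with channels in [0, 1]."""
    v = rgb.max(axis=2)
    s = np.where(v > 0, (v - rgb.min(axis=2)) / np.maximum(v, 1e-6), 0.0)
    return s, v

def _windows(img, patch):
    """Yield (i, j, window) for a patch x patch window clipped at borders."""
    r = patch // 2
    h, w = img.shape
    for i in range(h):
        for j in range(w):
            yield i, j, img[max(i - r, 0):i + r + 1, max(j - r, 0):j + r + 1]

def dark_channel(rgb, patch=5):
    """Per-pixel minimum over color channels and a local window [3]."""
    m = rgb.min(axis=2)
    out = np.empty_like(m)
    for i, j, win in _windows(m, patch):
        out[i, j] = win.min()
    return out

def local_entropy(gray, patch=5, bins=16):
    """Shannon entropy of the gray-level histogram in each local window."""
    out = np.empty_like(gray)
    for i, j, win in _windows(gray, patch):
        hist, _ = np.histogram(win, bins=bins, range=(0.0, 1.0))
        p = hist[hist > 0] / win.size
        out[i, j] = -(p * np.log2(p)).sum()
    return out

# A flat, bright patch behaves like dense haze: high value and dark channel,
# near-zero saturation and local entropy.
hazy = np.full((8, 8, 3), 0.8)
s, v = saturation_value(hazy)
```

On this synthetic hazy patch, the saturation and local entropy vanish while the value and dark channel stay high, mirroring the histogram separation in Fig. 1.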

### B. Scene Depth Estimation

As discussed earlier, we improved the work of Zhu et al. [6] to estimate the scene depth from the saturation, value, dark channel, and local entropy using a linear model. This model is given in (2), where d denotes the scene depth, f1 the saturation, f2 the value, f3 the dark channel, and f4 the local entropy. The corresponding parameters are θ1, θ2, θ3, and θ4, while θ0 represents the bias. The variable ε denotes the model error, which we assume follows the normal distribution with zero mean and variance σ². According to the characteristics of the normal distribution, the scene depth is then also normally distributed, with mean (θ0 + θ1f1 + θ2f2 + θ3f3 + θ4f4) and variance σ².

$d(x) = \theta_0 + \theta_1 f_1 + \theta_2 f_2 + \theta_3 f_3 + \theta_4 f_4 + \varepsilon(x). \qquad (2)$

Subsequently, we leverage the MLE technique to determine the parameters that maximize the likelihood function [11], wherein the synthetic training dataset is prepared as follows. We utilize the 500IMG dataset [11], whose 500 constituent haze-free images are collected from free image-sharing services. Then, we employ the enhanced equidistribution [11] to create the random depth maps, which serve as the ground-truth references in the training dataset. We also draw the random atmospheric light, whose values range from 0.8 to 1, from the enhanced equidistribution. Given the scene depth, we use (3) to calculate the transmission map.

$t(x) = \exp(-\beta_{sc} d(x)), \qquad (3)$

where the atmospheric scattering coefficient βsc is normally set to one. Because the transmission map and atmospheric light are now available, we substitute these two into (1) to produce the synthetic hazy images, whose saturation, value, dark channel, and local entropy serve as the inputs in the training dataset.
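The training-pair synthesis described above can be sketched as follows. The plain uniform random draws stand in for the enhanced equidistribution of [11], which is not reproduced here, and the function names are ours.

```python
import numpy as np

BETA_SC = 1.0  # atmospheric scattering coefficient, taken as one per (3)
rng = np.random.default_rng(42)

def make_training_pair(J):
    """Turn one haze-free image J into a (hazy input, depth label) pair.
    A uniform draw stands in for the enhanced equidistribution of [11]."""
    h, w, _ = J.shape
    d = rng.uniform(0.0, 1.0, (h, w))       # ground-truth random depth map
    A = rng.uniform(0.8, 1.0)               # atmospheric light in [0.8, 1]
    t = np.exp(-BETA_SC * d)                # transmission map, per (3)
    I = J * t[..., None] + A * (1.0 - t[..., None])  # hazy image, per (1)
    return I, d

J = rng.uniform(0.0, 1.0, (6, 6, 3))  # stand-in for a 500IMG haze-free image
I, d = make_training_pair(J)
```

The four haze-relevant features of each synthetic hazy image I, together with its depth map d, then form one training example.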

We then apply the mini-batch gradient ascent algorithm [11] to the training dataset created above to estimate the parameters. The best estimates that we obtained are θ0 = −0.5570, θ1 = 1.5210, θ2 = 0.9042, θ3 = 0.7543, and θ4 = −0.3685. It is worth noting that this parameter estimation step is performed offline, so it does not affect the run-time of the proposed method.
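With these estimates fixed, depth prediction at inference time reduces to one multiply-accumulate per feature. The sketch below drops the error term and clips the result to [0, 1], which is our own assumption for a bounded normalized depth.

```python
import numpy as np

# Offline MLE estimates reported above; f1..f4 are the saturation, value,
# dark channel, and local entropy, respectively.
THETA = (-0.5570, 1.5210, 0.9042, 0.7543, -0.3685)

def scene_depth(f1, f2, f3, f4):
    """Evaluate the linear depth model (2) per pixel; the error term is
    dropped at inference, and clipping to [0, 1] is our own assumption."""
    t0, t1, t2, t3, t4 = THETA
    d = t0 + t1 * f1 + t2 * f2 + t3 * f3 + t4 * f4
    return np.clip(d, 0.0, 1.0)

# One pixel with illustrative feature values (not from the paper)
d = scene_depth(np.float64(0.0), np.float64(0.95),
                np.float64(0.9), np.float64(0.1))
```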

### C. Atmospheric Light Estimation

Researchers have usually adopted the atmospheric light estimation (ALE) method of He et al. [3], which locates the atmospheric light in the "most opaque" region. He et al. [3] defined that region as comprising the pixels whose dark channel values lie within the top 0.1%. Then, the pixel with the highest intensity in the red-green-blue color space was selected as the atmospheric light.

In a different approach, Tarel and Hautiere [12] assumed that the atmospheric light was pure white if the input image was correctly white-balanced. However, both this ALE method and that of He et al. [3] are prone to incorrect estimation when the input image contains bright objects, such as white cars or light bulbs. The quad-decomposition algorithm proposed by Park et al. [13] is a good alternative: the input image is recursively partitioned into quarters based on the average luminance. This partitioning can eliminate bright objects effectively because of their high contrast with the background. Nevertheless, as the partitioning requires many frame buffers, the quad-decomposition algorithm is inefficient in memory usage. Therefore, Ngo et al. [11] developed an approximated version that is free of frame buffers. In this study, we utilize this approximated quad-decomposition method to estimate the atmospheric light.
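A frame-buffered reference version of the quad-decomposition idea can be sketched as follows; it follows the spirit of Park et al. [13], while the buffer-free approximation of Ngo et al. [11] used in this work is not reproduced here. The recursion floor `min_size` and the function name are illustrative choices of ours.

```python
import numpy as np

def quad_decomposition_ale(rgb, min_size=32):
    """Atmospheric light via recursive quad-decomposition: repeatedly keep
    the quadrant with the highest mean luminance, then pick the brightest
    pixel of the final region. Small bright objects outside the winning
    quadrants are discarded along the way."""
    region = rgb
    while min(region.shape[:2]) >= 2 * min_size:
        h, w = region.shape[0] // 2, region.shape[1] // 2
        quads = (region[:h, :w], region[:h, w:], region[h:, :w], region[h:, w:])
        region = max(quads, key=lambda q: q.mean())  # brightest quadrant wins
    flat = region.reshape(-1, 3)
    return flat[flat.sum(axis=1).argmax()]           # brightest remaining pixel

# A bright sky-like quadrant dominates; a small white object elsewhere,
# although the brightest pixel overall, is correctly ignored.
img = np.full((64, 64, 3), 0.2)
img[:32, :32] = 0.75     # bright quadrant (sky)
img[5, 5] = 0.9          # brightest pixel inside the sky quadrant
img[50, 50] = 1.0        # white object outside the brightest quadrant
A = quad_decomposition_ale(img)
```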

After that, we substitute the estimates of transmission map and atmospheric light into (1) to recover the scene radiance. Finally, we adopt the adaptive tone remapping method of Cho et al. [14] to post-process the recovered image.

### III. COMPARATIVE EVALUATION

This section compares the performance of the proposed method against four benchmark algorithms, including those proposed by Tarel and Hautiere [12], Zhu et al. [6], Kim et al. [4], and Ngo et al. [7]. Henceforth, we refer to these four as Tarel, Zhu, Kim, and Ngo, respectively. For comparison, we employ three full-reference metrics: structural similarity (SSIM) [15], feature similarity extended to color images (FSIMc) [16], and tone-mapped image quality index (TMQI) [17]. These metrics take on values ranging from zero to unity, wherein higher values signify better performance. Also, we use two real datasets (I-HAZE [18] and O-HAZE [19]) that comprise 30 and 45 pairs of hazy and haze-free images, respectively. Table 1 shows the average SSIM, FSIMc, and TMQI scores on the I-HAZE and O-HAZE datasets, and the best results are displayed in bold. It can be observed that the proposed algorithm is the best performing under SSIM and FSIMc, regardless of whether input images are indoor or outdoor. Additionally, the performance gap between the proposed method and Zhu is easily noticeable, attributed to the use of the two new haze-relevant features. The saturation, value, dark channel, and local entropy can compensate for one another, boosting performance when saturation and value fail to represent the haze distribution. So, in general, the proposed algorithm can be considered superior to the four benchmark algorithms.

Fig. 2 shows hazy images and the corresponding dehazing results obtained from the four benchmark methods and the proposed algorithm. The first row shows the dehazing results of a hazy image from the IVC dataset [20], which consists of 25 real hazy images. This dataset was excluded from the quantitative evaluation because it does not contain ground-truth references. In the second and third rows, haze removal was performed on images from the I-HAZE and O-HAZE datasets, respectively.
It can be observed that Tarel exhibits excellent performance, but color distortion arises in the sky region. Meanwhile, the results of Zhu hinder object recognition due to excessive haze removal. The results of Kim show average performance, and color distortion also arises in the upper part of the IVC and O-HAZE images. Conversely, the results of Ngo are satisfactory, without visually unpleasant distortion. However, in the IVC and I-HAZE images, the dehazing power is too strong, leading to black pixels, as witnessed in the dog's fur and at the bottom of the sofa. Finally, the proposed method removes haze effectively and preserves the color of the dog's fur well. In addition, in the I-HAZE and O-HAZE images, the dehazing results are more satisfactory than those of the benchmark methods.

Table 1. Average structural similarity (SSIM), feature similarity extended to color images (FSIMc), and tone-mapped image quality index (TMQI) scores on I-HAZE and O-HAZE. The best results are displayed in bold.

| Method   | SSIM (I-HAZE) | FSIMc (I-HAZE) | TMQI (I-HAZE) | SSIM (O-HAZE) | FSIMc (O-HAZE) | TMQI (O-HAZE) |
|----------|---------------|----------------|---------------|---------------|----------------|---------------|
| Tarel    | 0.7200        | 0.8055         | 0.7740        | 0.7263        | 0.7733         | 0.8416        |
| Zhu      | 0.6864        | 0.8252         | 0.7512        | 0.6647        | 0.7738         | 0.8118        |
| Kim      | 0.6424        | 0.7879         | 0.7026        | 0.4702        | 0.6869         | 0.6509        |
| Ngo      | 0.7600        | 0.8482         | **0.7892**    | 0.7322        | 0.8219         | **0.8935**    |
| Proposed | **0.7642**    | **0.8658**     | 0.7878        | **0.7329**    | **0.8920**     | 0.8351        |

Fig. 2. Qualitative comparison with other haze removal methods on the IVC, I-HAZE, and O-HAZE datasets.

### IV. IMPORTANCE OF HARDWARE IMPLEMENTATION

For an image processing algorithm to be deployed in real-world systems, it should handle image data at a minimum rate of 25 or 30 fps, depending on whether the color encoding standard is PAL or NTSC [21]. Therefore, we conducted a run-time comparison between several haze removal algorithms and tabulated the results in Table 2. The simulation environment is MATLAB R2019a, running on a host computer with an Intel Core i9-9900K CPU, an NVIDIA TITAN RTX GPU, and 64 GB RAM. It can be observed from Table 2 that none of the algorithms can handle images in real-time. This finding suggests that hardware implementation is essential for coping well with the real-time processing requirement.

Table 2. Run-time comparison of haze removal algorithms (in seconds) for three image sizes.

| Method   | 640 × 480 | 1024 × 768 | 4096 × 2160 |
|----------|-----------|------------|-------------|
| He       | 12.64     | 32.37      | 470.21      |
| Tarel    | 0.28      | 0.76       | 9.02        |
| Zhu      | 0.22      | 0.55       | 6.39        |
| Kim      | 0.16      | 0.43       | 4.81        |
| Ngo      | 0.17      | 0.44       | 5.22        |
| Proposed | 0.93      | 2.32       | 26.95       |

Table 3. Hardware implementation result of the proposed hardware design (device: Xc7z045-2ffg900).

| Slice Logic Utilization | Available | Used   | Utilization |
|-------------------------|-----------|--------|-------------|
| Slice Register (#)      | 437,200   | 64,918 | 14.85%      |
| Slice LUT (#)           | 218,600   | 58,126 | 26.59%      |
| RAM36E1 (#)             | 545       | 58     | 10.64%      |

Minimum period: 3.67 ns; maximum frequency: 272.48 MHz.

* The EDA tool was supported by the IC Design Education Center (IDEC), Korea.

### V. HARDWARE IMPLEMENTATION FOR REAL-TIME PROCESSING

Fig. 3 presents the hardware architecture of the proposed method, which can be partitioned into memories, logic circuits, and arithmetic circuits. Two 1024 × 32-bit SPRAMs and three 256 × 8-bit SPRAMs are used for the atmospheric light estimation [11] and adaptive tone remapping [14]. The other memories serve as line memories for the 5 × 5 filtering operations; consequently, the latency from input to output is seven image lines. In addition, the logic circuits consist of 10 modules. The system controller in the logic circuits is responsible for input-output operations of the image/video data. The saturation, value, dark channel, and local entropy are calculated in parallel in the 4-feature module. Furthermore, to improve the maximum frequency, we utilized split multipliers for large multiplications in which the operands' word-length exceeds 16 bits.
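The split-multiplier idea can be modeled in software as follows: one wide unsigned product is decomposed into four narrow partial products, each of which fits a 16 × 16 multiplier, with the shifts realized as wiring into the final adder tree. This is an illustrative behavioral model of ours, not the RTL.

```python
def split_mul(a, b, half=16):
    """Decompose one wide unsigned multiply into four narrow partial
    products, the way a split multiplier does in hardware: with half=16,
    each partial product fits a 16 x 16 multiplier, and the shifts become
    wiring into the final adder tree."""
    mask = (1 << half) - 1
    a_hi, a_lo = a >> half, a & mask
    b_hi, b_lo = b >> half, b & mask
    return ((a_hi * b_hi) << (2 * half)) \
         + (((a_hi * b_lo) + (a_lo * b_hi)) << half) \
         + (a_lo * b_lo)
```

Splitting shortens each multiplier's critical path, which is what allows the pipeline to reach a higher maximum frequency.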

Fig. 3. Hardware architecture of the proposed haze removal algorithm.

Table 3 summarizes the hardware implementation results in terms of slice registers, LUTs, RAM36E1s, and maximum frequency. Slice registers and LUTs represent the logic area, whereas RAM36E1s represent the memory area. The proposed design used 64,918 registers, 58,126 LUTs, and 58 RAM36E1s. The fastest attainable frequency was 272.48 MHz. This information can then be used to obtain the maximum processing speed (MPS):

$MPS = \frac{f_{max}}{(W + HB)(H + VB)}, \qquad (4)$

where fmax denotes the maximum frequency in Table 3; W and H denote the input image's width and height, respectively; and HB and VB denote the horizontal and vertical blank periods. The hardware was implemented with the blank periods minimized to one pixel and one image line, respectively, to increase the MPS. Under this setting, the proposed design can process DCI 4K video at 30.8 fps, satisfying the real-time processing requirement of 25 fps or greater.
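Plugging the implementation figures into the MPS relation confirms the reported throughput; the one-pixel and one-line blank periods follow the text, and the variable names are ours.

```python
# Sanity check of the reported throughput: with the measured maximum
# frequency and minimized blank periods, the MPS relation yields roughly
# 30.8 fps for one DCI 4K frame.
F_MAX = 272.48e6    # maximum frequency from Table 3, in Hz
W, H = 4096, 2160   # DCI 4K width and height, in pixels
HB, VB = 1, 1       # horizontal and vertical blank periods (minimized)

mps = F_MAX / ((W + HB) * (H + VB))  # frames per second
```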

Fig. 4 depicts the C/C++ platform and verification board for real-world execution. The top and middle thirds of Fig. 4 belong to the platform, whereas the bottom third depicts the system-on-a-chip (SoC) board. The upper part of the platform shows the input and output data side by side for ease of performance verification. The platform control panel is responsible for providing input data to the SoC board.

Fig. 4. Hardware verification using a system-on-a-chip evaluation board.

Meanwhile, the algorithm control provides a convenient graphical user interface for configuring the hardware design running on the board. This C/C++ platform is a convenient means for verifying the real-time processing of the proposed hardware design.

### VI. CONCLUSION

A high-performance haze removal algorithm and its corresponding 4K-capable hardware accelerator were presented in this paper. We proposed using two new haze-relevant features (dark channel and local entropy) to estimate the transmission map, based on the observation that they can effectively compensate for the failures of the CAP. In addition, we adopted a frame-buffer-free version of the quad-decomposition algorithm to estimate the atmospheric light and thereby reduce hardware resources. We then provided extensive experimental results to demonstrate the superiority of the proposed method over benchmark algorithms. We also conducted a run-time comparison to show that the software implementation per se was insufficient for real-time processing. Therefore, we presented a 4K-capable hardware design that can handle DCI 4K videos at 30.8 fps, rendering the proposed algorithm highly relevant for high-quality, high-speed real-time systems, such as autonomous cars and drones.

This research was supported by research funds from Dong-A University, Busan, Korea.

1. Z. Lee and S. Shang, Visibility: How applicable is the century-old Koschmieder model?, Journal of the Atmospheric Sciences, vol. 73, no. 11, pp. 4573-4581, Nov. 2016. DOI: 10.1175/JAS-D-16-0102.1.
2. D. Ngo, S. Lee, T. M. Ngo, G.-D. Lee, and B. Kang, Visibility restoration: A systematic review and meta-analysis, Sensors, vol. 21, no. 8, p. 2625, Apr. 2021. DOI: 10.3390/s21082625.
3. K. He, J. Sun, and X. Tang, Single image haze removal using dark channel prior, IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 33, no. 12, pp. 2341-2353, Dec. 2011. DOI: 10.1109/TPAMI.2010.168.
4. G.-J. Kim, S. Lee, and B. Kang, Single image haze removal using hazy particle maps, IEICE Transactions on Fundamentals of Electronics, Communications and Computer Sciences, vol. E101-A, no. 11, pp. 1999-2002, Nov. 2018. DOI: 10.1587/transfun.E101.A.1999.
5. D. Ngo, G.-D. Lee, and B. Kang, A 4K-capable FPGA implementation of single image haze removal using hazy particle maps, Applied Sciences, vol. 9, no. 17, p. 3443, Aug. 2019. DOI: 10.3390/app9173443.
6. Q. Zhu, J. Mai, and L. Shao, A fast single image haze removal algorithm using color attenuation prior, IEEE Transactions on Image Processing, vol. 24, no. 11, pp. 3522-3533, Nov. 2015. DOI: 10.1109/TIP.2015.2446191.
7. D. Ngo, G.-D. Lee, and B. Kang, Improved color attenuation prior for single-image haze removal, Applied Sciences, vol. 9, no. 19, p. 4011, Sep. 2019. DOI: 10.3390/app9194011.
8. B. Cai, X. Xu, K. Jia, C. Qing, and D. Tao, DehazeNet: An end-to-end system for single image haze removal, IEEE Transactions on Image Processing, vol. 25, no. 11, pp. 5187-5198, Nov. 2016. DOI: 10.1109/TIP.2016.2598681.
9. B. Li, W. Ren, D. Fu, D. Tao, D. Feng, W. Zeng, and Z. Wang, Benchmarking single-image dehazing and beyond, IEEE Transactions on Image Processing, vol. 28, no. 1, pp. 492-505, Jan. 2019. DOI: 10.1109/TIP.2018.2867951.
10. D. Ngo, G.-D. Lee, and B. Kang, Haziness degree evaluator: A knowledge-driven approach for haze density estimation, Sensors, vol. 21, no. 11, p. 3896, Jun. 2021. DOI: 10.3390/s21113896.
11. D. Ngo, S. Lee, G.-D. Lee, and B. Kang, Single-image visibility restoration: A machine learning approach and its 4K-capable hardware accelerator, Sensors, vol. 20, no. 20, p. 5795, Oct. 2020. DOI: 10.3390/s20205795.
12. J.-P. Tarel and N. Hautière, Fast visibility restoration from a single color or gray level image, in 2009 IEEE 12th International Conference on Computer Vision, Kyoto, Japan, pp. 2201-2208, 2009. DOI: 10.1109/ICCV.2009.5459251.
13. D. Park, H. Park, D. K. Han, and H. Ko, Single image dehazing with image entropy and information fidelity, in 2014 IEEE International Conference on Image Processing (ICIP), Paris, France, pp. 4037-4041, 2014. DOI: 10.1109/ICIP.2014.7025820.
14. H. Cho, G.-J. Kim, K. Jang, S. Lee, and B. Kang, Color image enhancement based on adaptive nonlinear curves of luminance features, Journal of Semiconductor Technology and Science, vol. 15, no. 1, pp. 60-67, Feb. 2015. DOI: 10.5573/JSTS.2015.15.1.060.
15. Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, Image quality assessment: From error visibility to structural similarity, IEEE Transactions on Image Processing, vol. 13, no. 4, pp. 600-612, Apr. 2004. DOI: 10.1109/TIP.2003.819861.
16. L. Zhang, L. Zhang, X. Mou, and D. Zhang, FSIM: A feature similarity index for image quality assessment, IEEE Transactions on Image Processing, vol. 20, no. 8, pp. 2378-2386, Aug. 2011. DOI: 10.1109/TIP.2011.2109730.
17. H. Yeganeh and Z. Wang, Objective quality assessment of tone-mapped images, IEEE Transactions on Image Processing, vol. 22, no. 2, pp. 657-667, Feb. 2013. DOI: 10.1109/TIP.2012.2221725.
18. C. Ancuti, C. O. Ancuti, R. Timofte, and C. De Vleeschouwer, I-HAZE: A dehazing benchmark with real hazy and haze-free indoor images, in Advanced Concepts for Intelligent Vision Systems, Poitiers, France, pp. 620-631, 2018. DOI: 10.1007/978-3-030-01449-0_52.
19. C. O. Ancuti, C. Ancuti, R. Timofte, and C. De Vleeschouwer, O-HAZE: A dehazing benchmark with real hazy and haze-free outdoor images, in 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Salt Lake City, UT, USA, pp. 867-875, 2018. DOI: 10.1109/CVPRW.2018.00119.
20. K. Ma, W. Liu, and Z. Wang, Perceptual evaluation of single image dehazing algorithms, in 2015 IEEE International Conference on Image Processing (ICIP), Quebec City, QC, Canada, pp. 3600-3604, 2015. DOI: 10.1109/ICIP.2015.7351475.
21. K. Jack, Chapter 9: NTSC and PAL digital encoding and decoding, in Video Demystified, 4th ed., Elsevier India, pp. 394-471, 2004.

Seungmin Lee

received his B.S. and M.S. degrees in Electronics Engineering from Dong-A University, Busan, South Korea, in 2016 and 2018, respectively. He is currently pursuing a Ph.D. in Electronics Engineering at Dong-A University. His research interests include image processing and SoC architectures for real-time processing.

Bongsoon Kang

received his B.S. degree in Electronics Engineering from Yonsei University, Seoul, South Korea, in 1985, his M.S. degree in Electrical Engineering from the University of Pennsylvania, USA, in 1987, and his Ph.D. degree in Electrical and Computer Engineering from Drexel University, USA, in 1990. His research interests include image processing and SoC architectures for real-time processing.

### Article

Journal of information and communication convergence engineering 2022; 20(3): 212-218

Published online September 30, 2022 https://doi.org/10.56977/jicce.2022.20.3.212

## A 4K-Capable Hardware Accelerator of Haze Removal Algorithm using Haze-relevant Features

Seungmin Lee and Bongsoon Kang* , Member, KIICE

Department of Electronics Engineering, Dong-A University, Busan 49315, Korea

Correspondence to:*Bongsoon Kang (E-mail: bongsoon@dau.ac.kr, Tel: +82-51-200-7703)
Department of Electronics Engineering, Dong-A University, Busan 49315, Korea

Received: January 3, 2022; Revised: January 3, 2022; Accepted: August 17, 2022

This is an Open Access article distributed under the terms of the Creative Commons Attribution Non-Commercial License (http://creativecommons.org/licenses/by-nc/3.0/) which permits unrestricted non-commercial use, distribution, and reproduction in any medium, provided the original work is properly cited.

### Abstract

The performance of vision-based intelligent systems, such as self-driving cars and unmanned aerial vehicles, is subject to weather conditions, notably the frequently encountered haze or fog. As a result, studies on haze removal have garnered increasing interest from academia and industry. This paper hereby presents a 4K-capable hardware implementation of an efficient haze removal algorithm with the following two improvements. First, the depth-dependent haze distribution is predicted using a linear model of four haze-relevant features, where the model parameters are obtained through maximum likelihood estimates. Second, the approximated quad-decomposition method is adopted to estimate the atmospheric light. Extensive experimental results then follow to verify the efficacy of the proposed algorithm against well-known benchmark methods. For real-time processing, this paper also presents a pipelined architecture comprised of customized macros, such as split multipliers, parallel dividers, and serial dividers. The implementation results demonstrated that the proposed hardware design can handle DCI 4K videos at 30.8 frames per second.

Keywords: Field-programmable gate array, Hardware accelerator, Haze removal, Real-time processing

### I. INTRODUCTION

The industrial structure has been changing dramatically due to the Fourth Industrial Revolution (or Industry 4.0), which dominates the mass surveillance and autonomous driving industries. Vision-based intelligent systems, such as self-driving cars and unmanned aerial vehicles, are being rapidly developed. These life-critical systems adopt highlevel object recognition algorithms to sense their environment and operate without human involvement. However, as the performance of these algorithms is subject to weather conditions, poor visibility resulting from adverse weather can trigger a cascading failure that may lead to unfortunate consequences. Therefore, studies on visibility restoration are essential for autonomous vehicles. In this research direction, haze removal (or, equivalently, image dehazing) has garnered growing interest from researchers because haze is seemingly the most frequently encountered weather in practice. In this context, haze refers to the suspended aerosols in the atmosphere. The particle-particle collision of these aerosols and light photons causes the atmospheric scattering phenomenon, reducing the visibility of captured scenes and rendering haze removal research relevant to visibility restoration.

Haze removal algorithms are generally based on the simplified Koschmieder model [1], which describes hazy image formation as follows:

$Ix=Jxtx+A1−tx,$

where I represents the input image, J the scene radiance, t the transmission map, A the atmospheric light, and x the pixel coordinates. Assuming that H and W are the image height and width, respectively, I, J, and A take on values in $ℝH×W×3$, whereas $t∈ℝH×W$. According to (1), recovering J is an ill-posed problem because I is the only observation. Thus, early attempts in haze removal solved this problem by using multiple input images. However, as it is burdensome to acquire such input data, researchers have shifted their interest to single-image haze removal.

According to a recent systematic review [2], this haze removal category can be further partitioned into three subcategories: image processing, machine learning, and deep learning. Concerning the first, the dark channel prior (DCP) proposed by He et al. [3] is typical. The DCP states that, in outdoor non-sky images, the dark channel—the minimum intensity across color channels in a local patch—approximates zero at almost all pixels. They then adopted computationally intensive soft matting to refine the transmission estimate. This method demonstrated good performance in general, but it substantially prolonged the execution time due to the inherent problem in soft matting. Also, it is subject to color distortion when the input image contains a broad sky or shady objects. These limitations left considerable room for improvement, and many follow-up studies have been proposed. For example, Kim et al. [4] reduced the computational complexity by using the modified hybrid median filter—equipped with excellent edge-preserving characteristics—to eliminate the refinement step. This elimination then favored a fast and efficient hardware implementation [4,5].

In the second subcategory, a typical work is the color attenuation prior (CAP) proposed by Zhu et al. [6]. The CAP was also discovered through extensive observations on outdoor images. It states that the scene depth is closely correlated with the difference between the saturation and the value. Zhu et al. [6] modeled this correlation using a linear model, whose parameters were estimated utilizing the maximum likelihood estimates (MLE). The CAP provides a fast and effective haze removal, albeit with color distortion and background noise. In a follow-up study, Ngo et al. [7] addressed these two problems using adaptive weighting and low-pass filtering.

Finally, deep-learning techniques, such as convolutional neural networks (CNNs) and generative adversarial networks (GANs), have also found their applications in haze removal. The pioneering work of Cai et al. [8] can be taken as a prime example. They proposed a well-performing three-layer CNN for estimating the transmission map from a single input image. In subsequent work, Li et al. [9] employed serial multiscale mapping to design a CNN that estimates and refines the transmission map from coarse to fine scales. Although deep-learning-based haze removal methods generally deliver satisfactory performance, they are subject to the domain-shift problem.

This paper presents a machine-learning-based method that improves the CAP by considering two new haze-relevant features in addition to the saturation and value. More precisely, we estimate the scene depth as a linear combination of local entropy, dark channel, saturation, and value. We then present a comparative evaluation with other state-of-the-art benchmark methods to verify the efficacy of the proposed haze removal algorithm. Furthermore, we demonstrate that the software implementation per se cannot satisfy real-time processing requirements. Consequently, we design a 4K-capable hardware accelerator that can handle 4K videos at 30.8 frames per second (fps).

The rest of this paper is structured as follows. Section 2 explores the haze-relevant features and describes the proposed algorithm in detail. Section 3 presents the comparative evaluation with benchmark algorithms, and Section 4 demonstrates the necessity of a hardware accelerator for real-time processing. After that, Section 5 provides a detailed description of the proposed hardware design and interprets the implementation results. Finally, Section 6 concludes the paper.

### II. PROPOSED HAZE REMOVAL ALGORITHM

### A. Haze-relevant Features

Under the single-image dehazing approach, most algorithms estimate the transmission map in two major steps: feature extraction and regression. On the one hand, these two steps are easily noticeable in image-processing- and machine-learning-based methods. For example, He et al. [3] calculated the normalized dark channel (feature extraction) and subtracted it from unity (regression) to estimate the transmission map. On the other hand, deep-learning-based methods usually introduce multiscale mapping between these two steps to improve robustness against spatial variance in the input image. This observation demonstrates the fundamental importance of haze-relevant features in haze removal. Recently, Ngo et al. [10] explored and summarized the haze-relevant features hitherto reported in the literature. In addition, they verified the correlation between those features and the haze distribution using representative hazy and haze-free image patches extracted from well-publicized datasets. Some of the verification results—corresponding to the saturation, value, dark channel, and local entropy—are illustrated in Fig. 1, where Figs. 1(c) and (d) are adopted from [10]. The normalized histograms demonstrate that feature values follow the normal distribution, where the means of the hazy and haze-free distributions are well separated. Also, based on the degree of overlap, it is observed that the dark channel exhibits the strongest correlation with the haze distribution, followed by saturation, value, and local entropy.

Figure 1. Normalized histograms of four haze-relevant features: (a) saturation, (b) value, (c) dark channel, and (d) local entropy.

Inspired by the work of Zhu et al. [6], we also utilize a linear model to estimate the transmission map from the saturation, value, dark channel, and local entropy. The reason for using two additional features comes from observing the normalized histograms in Fig. 1. It is conspicuous that each feature correlates with the haze distribution in a different way. In addition, no single feature exhibits a perfect correlation: saturation, value, dark channel, and local entropy each fail to represent the haze distribution under particular circumstances. The breakdown of the dark channel in sky regions or around shady objects is a prime example. Therefore, using multiple features allows mutual compensation for their failures. In the previous example, the sky region is haze-free, but its high intensities in all channels result in high dark channel values. Based on the dark channel alone, the sky region would be misclassified as densely hazy instead of haze-free. However, as this region is also textureless, its haze condition can be recognized using the local entropy. This example demonstrates that the local entropy can compensate for the failure of the dark channel in the sky region.
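As an illustration of how these four features might be extracted, the following NumPy/SciPy sketch computes per-pixel saturation, value, dark channel, and local entropy; the patch size, histogram bin count, and grayscale conversion below are assumptions for illustration, not the paper's exact settings:

```python
import numpy as np
from scipy.ndimage import minimum_filter, generic_filter

def haze_features(img, patch=5):
    """Per-pixel haze-relevant features for an RGB image in [0, 1].
    Returns saturation, value, dark channel, and local entropy maps."""
    value = img.max(axis=2)                       # HSV value = per-pixel max
    minimum = img.min(axis=2)
    # HSV saturation, guarded against division by zero in black pixels.
    saturation = np.where(value > 0, (value - minimum) / np.maximum(value, 1e-6), 0.0)
    # Dark channel: channel-wise minimum followed by a patch-wise minimum filter.
    dark = minimum_filter(minimum, size=patch)

    def _entropy(window):
        # Shannon entropy of a coarse intensity histogram (16 bins assumed).
        hist, _ = np.histogram(window, bins=16, range=(0.0, 1.0))
        p = hist / hist.sum()
        p = p[p > 0]
        return -(p * np.log2(p)).sum()

    gray = img.mean(axis=2)                       # simple grayscale stand-in
    entropy = generic_filter(gray, _entropy, size=patch)
    return saturation, value, dark, entropy
```

In the hardware design described later, these four maps are produced in parallel by the 4-feature module rather than sequentially as here.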

### B. Scene Depth Estimation

As discussed earlier, we improved the work of Zhu et al. [6] to estimate the scene depth from the saturation, value, dark channel, and local entropy using a linear model. This model is illustrated in (2), where d denotes the scene depth, f1 saturation, f2 value, f3 dark channel, and f4 local entropy. The corresponding parameters are θ1, θ2, θ3, θ4, while θ0 represents the bias. The variable ε denotes the model error, and we assume that it follows the normal distribution with zero mean and σ2 variance. According to the characteristics of the normal distribution, the scene depth is also normally distributed with (θ0 + θ1f1 + θ2f2 + θ3f3 + θ4f4) mean and σ2 variance.

$d(x) = \theta_0 + \theta_1 f_1 + \theta_2 f_2 + \theta_3 f_3 + \theta_4 f_4 + \varepsilon(x). \quad (2)$

Subsequently, we leverage the MLE technique to determine the parameters that maximize the likelihood function [11], wherein the synthetic training dataset is prepared as follows. We utilize the 500IMG dataset [11], whose 500 constituent haze-free images are collected from free image-sharing services. Then, we employ the enhanced equidistribution [11] to create random depth maps, which serve as the ground-truth references in the training dataset. We also draw the random atmospheric light—whose values range from 0.8 to 1—from the enhanced equidistribution. Given the scene depth, we calculate the transmission map using (3):

$t(x) = \exp(-\beta_{sc} d(x)), \quad (3)$

where $\beta_{sc}$ denotes the atmospheric scattering coefficient, which is normally set to one. Because the transmission map and atmospheric light are now available, we substitute these two into (1) to produce the hazy synthetic images, whose saturation, value, dark channel, and local entropy serve as the inputs in the training dataset.
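The training-pair generation described above can be sketched as follows; note that plain uniform sampling stands in for the enhanced equidistribution of [11], so this is an illustrative approximation rather than the authors' exact pipeline:

```python
import numpy as np

def make_training_pair(J, rng, beta_sc=1.0):
    """Generate one synthetic (hazy image, depth map) training pair
    from a haze-free image J with values in [0, 1].

    Uniform sampling is used here as a stand-in for the enhanced
    equidistribution of [11]."""
    H, W, _ = J.shape
    d = rng.uniform(0.0, 1.0, size=(H, W))           # random ground-truth depth
    A = rng.uniform(0.8, 1.0, size=3)                # random atmospheric light
    t = np.exp(-beta_sc * d)                         # transmission via (3)
    I = J * t[..., None] + A * (1.0 - t[..., None])  # hazy image via (1)
    return I, d
```

The features of the synthesized hazy image I become the model inputs, while the sampled depth d is the regression target.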

We then apply the mini-batch gradient ascent algorithm [11] on the training dataset created above to estimate the parameters. The best estimates that we obtained are θ0 = −0.5570, θ1 = 1.5210, θ2 = 0.9042, θ3 = 0.7543, and θ4 = −0.3685. It is worth noting that this parameter estimation step is performed offline, so it does not affect the run-time of the proposed method.
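Plugging the reported estimates into (2) and (3) yields a straightforward depth-to-transmission routine; the clipping bounds below are an assumption (a common safeguard in dehazing implementations), not part of the paper:

```python
import numpy as np

# Parameter estimates reported above (bias first).
THETA = np.array([-0.5570, 1.5210, 0.9042, 0.7543, -0.3685])

def estimate_depth(f1, f2, f3, f4):
    """Scene depth via the linear model (2), with features ordered as
    saturation (f1), value (f2), dark channel (f3), local entropy (f4)."""
    return THETA[0] + THETA[1]*f1 + THETA[2]*f2 + THETA[3]*f3 + THETA[4]*f4

def estimate_transmission(f1, f2, f3, f4, beta_sc=1.0):
    """Transmission via (3); the [0.05, 1.0] clip is an assumed safeguard."""
    d = estimate_depth(f1, f2, f3, f4)
    return np.clip(np.exp(-beta_sc * d), 0.05, 1.0)
```

Because the parameters are fixed offline, the run-time cost per pixel is just four multiply-accumulates and one exponential, which maps naturally onto a pipelined datapath.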

### C. Atmospheric Light Estimation

Researchers usually adopt the atmospheric light estimation (ALE) method of He et al. [3], which locates the atmospheric light in the "most opaque" region. He et al. [3] defined this region as the set of pixels whose dark channel values fall within the top 0.1%. Then, the pixel with the highest intensity in the red-green-blue color space was selected as the atmospheric light.

In a different approach, Tarel and Hautiere [12] assumed that the atmospheric light was pure white if the input image was correctly white-balanced. However, this ALE method, and even that of He et al. [3], is prone to incorrect estimation when the input image contains bright objects, such as white cars or light bulbs. The quad-decomposition algorithm proposed by Park et al. [13] is a good alternative. The input image is recursively partitioned into quarters based on the average luminance. This partition procedure can eliminate bright objects effectively because of their high contrast to the background. Nevertheless, as the partition requires many frame buffers, the quad-decomposition algorithm is inefficient in memory usage. Therefore, Ngo et al. [11] developed an approximated version that is free of frame buffers. Accordingly, in this study, we utilize the approximated quad-decomposition method to estimate the atmospheric light.
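For intuition, a literal (buffered) quad-decomposition in the spirit of Park et al. [13] can be sketched as below; the stopping size is an assumed parameter, and the hardware in this paper uses the approximated, frame-buffer-free variant of [11] rather than this recursion:

```python
import numpy as np

def quad_decompose_airlight(gray, min_size=32):
    """Repeatedly keep the quadrant with the highest mean luminance until
    the region is small, then return the center coordinates of that region.
    Small bright objects are discarded early because they barely raise the
    mean luminance of a large quadrant."""
    y0, x0 = 0, 0
    region = gray
    while min(region.shape) > min_size:
        h, w = region.shape
        h2, w2 = h // 2, w // 2
        quads = {
            (0, 0): region[:h2, :w2], (0, w2): region[:h2, w2:],
            (h2, 0): region[h2:, :w2], (h2, w2): region[h2:, w2:],
        }
        # Descend into the brightest quadrant.
        (dy, dx), region = max(quads.items(), key=lambda kv: kv[1].mean())
        y0, x0 = y0 + dy, x0 + dx
    return y0 + region.shape[0] // 2, x0 + region.shape[1] // 2
```

The memory problem is apparent here: each recursion level operates on a stored sub-image, which is what the approximated version of [11] eliminates.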

After that, we substitute the estimates of transmission map and atmospheric light into (1) to recover the scene radiance. Finally, we adopt the adaptive tone remapping method of Cho et al. [14] to post-process the recovered image.

### III. EVALUATION

This section compares the performance of the proposed method against four benchmark algorithms, including those proposed by Tarel and Hautiere [12], Zhu et al. [6], Kim et al. [4], and Ngo et al. [7]. Henceforth, we refer to these four as Tarel, Zhu, Kim, and Ngo, respectively. For comparison, we employ three full-reference metrics: structural similarity (SSIM) [15], feature similarity extended to color images (FSIMc) [16], and tone-mapped image quality index (TMQI) [17]. These metrics take on values ranging from zero to unity, wherein higher values signify better performance. Also, we use two real datasets (I-HAZE [18] and O-HAZE [19]) that comprise 30 and 45 pairs of hazy and haze-free images, respectively. Table 1 shows the average SSIM, FSIMc, and TMQI scores on the I-HAZE and O-HAZE datasets, with the best results displayed in bold. It can be observed that the proposed algorithm is the best performing under SSIM and FSIMc, regardless of whether input images are indoor or outdoor. Additionally, the performance gap between the proposed method and Zhu is easily noticeable, attributed to the use of two new haze-relevant features. The saturation, value, dark channel, and local entropy can compensate for one another, boosting performance when saturation and value fail to represent the haze distribution. Overall, the proposed algorithm can be considered superior to the four benchmark algorithms. Fig. 2 shows hazy images and the corresponding dehazing results obtained from the four benchmark methods and the proposed algorithm. The first row shows the dehazing results of a hazy image from the IVC dataset [20], which consists of 25 real hazy images. This dataset was excluded from the quantitative evaluation because it does not contain ground-truth references. In the second and third rows, haze removal was performed on images from the I-HAZE and O-HAZE datasets, respectively.
It can be observed that Tarel exhibits excellent performance, but color distortion arises in the sky region. Meanwhile, the results of Zhu hinder object recognition due to excessive haze removal. In the results of Kim, the performance is average, and color distortion also arises in the upper part of the IVC and O-HAZE images. Conversely, the results of Ngo are satisfactory, without visually unpleasant distortion. However, in the IVC and I-HAZE images, the dehazing power is too strong, leading to black pixels, as witnessed in the dog's fur and at the bottom of the sofa. Finally, the proposed method removes haze effectively and preserves the color of the dog's fur well. In addition, in the I-HAZE and O-HAZE images, the dehazing results are more satisfactory than those of the benchmark methods.
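For reference, the structure of such a full-reference score can be illustrated with a simplified, single-window SSIM; the actual metric of [15] averages this statistic over local windows, so the function below is a stand-in for intuition, not the implementation behind Table 1:

```python
import numpy as np

def ssim_global(x, y, data_range=1.0):
    """Global (single-window) SSIM between two images in [0, data_range].
    Combines luminance, contrast, and structure terms into one score;
    identical images score exactly 1."""
    c1 = (0.01 * data_range) ** 2       # stabilizing constants from [15]
    c2 = (0.03 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2*mx*my + c1) * (2*cov + c2)) / ((mx**2 + my**2 + c1) * (vx + vy + c2))
```

A dataset-level score, as in Table 1, is then simply the mean over all hazy/haze-free pairs.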

Table 1. Average structural similarity (SSIM), feature similarity extended to color images (FSIMc), and tone-mapped image quality index (TMQI) scores on I-HAZE and O-HAZE. The best results are displayed in bold.

| Method | I-HAZE SSIM | I-HAZE FSIMc | I-HAZE TMQI | O-HAZE SSIM | O-HAZE FSIMc | O-HAZE TMQI |
|---|---|---|---|---|---|---|
| Tarel | 0.7200 | 0.8055 | 0.7740 | 0.7263 | 0.7733 | 0.8416 |
| Zhu | 0.6864 | 0.8252 | 0.7512 | 0.6647 | 0.7738 | 0.8118 |
| Kim | 0.6424 | 0.7879 | 0.7026 | 0.4702 | 0.6869 | 0.6509 |
| Ngo | 0.7600 | 0.8482 | **0.7892** | 0.7322 | 0.8219 | **0.8935** |
| Proposed | **0.7642** | **0.8658** | 0.7878 | **0.7329** | **0.8920** | 0.8351 |

Figure 2. Qualitative comparison with other haze removal methods on the IVC, I-HAZE, and O-HAZE datasets.

### IV. IMPORTANCE OF HARDWARE IMPLEMENTATION

For an image processing algorithm to be deployed in real-world systems, it should handle image data at a minimum rate of 25 or 30 fps, depending on whether the color encoding standard is PAL or NTSC [21]. Therefore, we conducted a run-time comparison between several haze removal algorithms and tabulated the results in Table 2. The simulation environment is MATLAB R2019a, running on a host computer with an Intel Core i9-9900K CPU, an NVIDIA TITAN RTX GPU, and 64 GB of RAM. It can be observed from Table 2 that none of the algorithms can handle images in real time. This finding suggests that hardware implementation is essential for coping with the real-time processing requirement.

Table 2. Run-time comparison of haze removal algorithms (in seconds) for three image sizes.

| Method | 640 × 480 | 1024 × 768 | 4096 × 2160 |
|---|---|---|---|
| He | 12.64 | 32.37 | 470.21 |
| Tarel | 0.28 | 0.76 | 9.02 |
| Zhu | 0.22 | 0.55 | 6.39 |
| Kim | 0.16 | 0.43 | 4.81 |
| Ngo | 0.17 | 0.44 | 5.22 |
| Proposed | 0.93 | 2.32 | 26.95 |

Table 3. Hardware implementation results of the proposed hardware design.

Device: Xc7z045-2ffg900

| Slice Logic Utilization | Available | Used | Utilization |
|---|---|---|---|
| Slice Registers (#) | 437,200 | 64,918 | 14.85% |
| Slice LUTs (#) | 218,600 | 58,126 | 26.59% |
| RAM36E1s (#) | 545 | 58 | 10.64% |

Minimum Period: 3.67 ns
Maximum Frequency: 272.48 MHz

* The EDA tool was supported by the IC Design Education Center (IDEC), Korea.

### V. HARDWARE IMPLEMENTATION FOR REAL-TIME PROCESSING

Fig. 3 presents the hardware architecture of the proposed method, which can be partitioned into memories, logic circuits, and arithmetic circuits. Two 1024 × 32-bit SPRAMs and three 256 × 8-bit SPRAMs are used for the atmospheric light estimation [11] and adaptive tone remapping [14]. The remaining memories serve as line memories for 5 × 5 filtering operations; consequently, the latency from input to output is seven image lines. The logic circuits consist of 10 modules. The system controller is responsible for input-output operations of the image/video data. Saturation, value, dark channel, and local entropy are calculated in parallel in the 4-feature module. Furthermore, to improve the maximum frequency, we utilized split multipliers for large multiplications whose operand word lengths exceed 16 bits.
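The idea behind a split multiplier can be modeled in software: a wide unsigned product is decomposed into four narrow partial products that a pipelined datapath can register between stages. The 16-bit split below mirrors the word-length threshold mentioned above, though the exact hardware decomposition is not specified in the paper:

```python
def split_multiply(a, b, word=16):
    """Model of a split multiplier for unsigned operands: a and b are cut
    into high/low halves of `word` bits, and the product is rebuilt from
    four word-by-word partial products recombined with shifts."""
    mask = (1 << word) - 1
    a_hi, a_lo = a >> word, a & mask
    b_hi, b_lo = b >> word, b & mask
    return ((a_hi * b_hi) << (2 * word)) \
         + ((a_hi * b_lo + a_lo * b_hi) << word) \
         + (a_lo * b_lo)
```

In hardware, each partial product fits a narrow, fast multiplier, and registering between the partial-product and recombination stages shortens the critical path, raising the maximum frequency.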

Figure 3. Hardware architecture of the proposed haze removal algorithm.

Table 3 summarizes the hardware implementation results in terms of slice registers, LUTs, RAM36E1s, and maximum frequency. Slice registers and LUTs represent the logic area, whereas RAM36E1s represent the memory area. The proposed design used 64,918 registers, 58,126 LUTs, and 58 RAM36E1s. The fastest attainable frequency was 272.48 MHz. This information can then be used to obtain the maximum processing speed (MPS):

$MPS = \frac{f_{max}}{(W + HB) \cdot (H + VB)}, \quad (4)$

where $f_{max}$ denotes the maximum frequency in Table 3; W and H denote the input image's width and height, respectively; and HB and VB denote the horizontal and vertical blank periods. The hardware was implemented with the minimum blank periods of one pixel (HB) and one image line (VB) to maximize the MPS. With these values, the proposed design can process DCI 4K video at 30.8 fps, satisfying the real-time processing requirement of 25 fps or greater.
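The MPS calculation above can be checked numerically; with the synthesized 272.48 MHz clock and one-pixel/one-line blank periods, DCI 4K (4096 × 2160) comes out at roughly 30.8 fps:

```python
def max_processing_speed(f_max_hz, width, height, hb=1, vb=1):
    """Maximum processing speed in frames per second: the pixel clock rate
    divided by the total pixel count per frame, including the horizontal
    (hb) and vertical (vb) blank periods."""
    return f_max_hz / ((width + hb) * (height + vb))

fps = max_processing_speed(272.48e6, 4096, 2160)   # DCI 4K -> about 30.8 fps
```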

Fig. 4 depicts the C/C++ platform and verification board for the real-world execution. The top and middle thirds of Fig. 4 belong to the platform, whereas the bottom third depicts the system-on-a-chip (SoC) board. Moreover, the upper part of the platform shows side-by-side input-output data for ease of performance verification. The platform control panel is responsible for providing input data to the SoC board.

Figure 4. Hardware verification using a system-on-a-chip evaluation board.

Meanwhile, the algorithm control provides a convenient graphical user interface for configuring the hardware design running on the board. This C/C++ platform is a convenient means for verifying the real-time processing of the proposed hardware design.

### VI. CONCLUSION

A high-performance haze removal algorithm and its corresponding 4K-capable hardware accelerator were presented in this paper. We proposed using two new haze-relevant features (dark channel and local entropy) to estimate the transmission map, based on the observation that they can effectively compensate for the failures of the CAP. In addition, we adopted a frame-buffer-free version of the quad-decomposition algorithm to estimate the atmospheric light and thereby reduce hardware resources. We then provided extensive experimental results to demonstrate the superiority of the proposed method over benchmark algorithms. We also conducted a run-time comparison to show that the software implementation per se was insufficient for real-time processing. Therefore, we presented a 4K-capable hardware design that can handle DCI 4K videos at 30.8 fps, rendering the proposed algorithm highly relevant for high-quality, high-speed real-time systems, such as autonomous cars and drones.

### ACKNOWLEDGMENTS

This research was funded by research funds from Dong-A University, Busan, Korea.


### References

1. Z. Lee and S. Shang, Visibility: How applicable is the century-old Koschmieder model?, Journal of the Atmospheric Sciences, vol. 73, no. 11, pp. 4573-4581, Nov. 2016. DOI: 10.1175/JAS-D-16-0102.1.
2. D. Ngo, S. Lee, T. M. Ngo, G.-D. Lee, and B. Kang, Visibility restoration: A systematic review and meta-analysis, Sensors, vol. 21, no. 8, p. 2625, Apr. 2021. DOI: 10.3390/s21082625.
3. K. He, J. Sun, and X. Tang, Single image haze removal using dark channel prior, IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 33, no. 12, pp. 2341-2353, Dec. 2011. DOI: 10.1109/TPAMI.2010.168.
4. G.-J. Kim, S. Lee, and B. Kang, Single image haze removal using hazy particle maps, IEICE Transactions on Fundamentals of Electronics, Communications and Computer Sciences, vol. E101-A, no. 11, pp. 1999-2002, Nov. 2018. DOI: 10.1587/transfun.E101.A.1999.
5. D. Ngo, G.-D. Lee, and B. Kang, A 4K-capable FPGA implementation of single image haze removal using hazy particle maps, Applied Sciences, vol. 9, no. 17, p. 3443, Aug. 2019. DOI: 10.3390/app9173443.
6. Q. Zhu, J. Mai, and L. Shao, A fast single image haze removal algorithm using color attenuation prior, IEEE Transactions on Image Processing, vol. 24, no. 11, pp. 3522-3533, Nov. 2015. DOI: 10.1109/TIP.2015.2446191.
7. D. Ngo, G.-D. Lee, and B. Kang, Improved color attenuation prior for single-image haze removal, Applied Sciences, vol. 9, no. 19, p. 4011, Sep. 2019. DOI: 10.3390/app9194011.
8. B. Cai, X. Xu, K. Jia, C. Qing, and D. Tao, DehazeNet: An end-to-end system for single image haze removal, IEEE Transactions on Image Processing, vol. 25, no. 11, pp. 5187-5198, Nov. 2016. DOI: 10.1109/TIP.2016.2598681.
9. B. Li, W. Ren, D. Fu, D. Tao, D. Feng, W. Zeng, and Z. Wang, Benchmarking single-image dehazing and beyond, IEEE Transactions on Image Processing, vol. 28, no. 1, pp. 492-505, Jan. 2019. DOI: 10.1109/TIP.2018.2867951.
10. D. Ngo, G.-D. Lee, and B. Kang, Haziness degree evaluator: A knowledge-driven approach for haze density estimation, Sensors, vol. 21, no. 11, Jun. 2021. DOI: 10.3390/s21113896.
11. D. Ngo, S. Lee, G.-D. Lee, and B. Kang, Single-image visibility restoration: A machine learning approach and its 4K-capable hardware accelerator, Sensors, vol. 20, no. 20, p. 5795, Oct. 2020. DOI: 10.3390/s20205795.
12. J.-P. Tarel and N. Hautière, Fast visibility restoration from a single color or gray level image, in 2009 IEEE 12th International Conference on Computer Vision, Kyoto, Japan, pp. 2201-2208, 2009. DOI: 10.1109/ICCV.2009.5459251.
13. D. Park, H. Park, D. K. Han, and H. Ko, Single image dehazing with image entropy and information fidelity, in 2014 IEEE International Conference on Image Processing (ICIP), Paris, France, pp. 4037-4041, 2014. DOI: 10.1109/ICIP.2014.7025820.
14. H. Cho, G.-J. Kim, K. Jang, S. Lee, and B. Kang, Color image enhancement based on adaptive nonlinear curves of luminance features, Journal of Semiconductor Technology and Science, vol. 15, no. 1, pp. 60-67, Feb. 2015. DOI: 10.5573/JSTS.2015.15.1.060.
15. Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, Image quality assessment: From error visibility to structural similarity, IEEE Transactions on Image Processing, vol. 13, no. 4, pp. 600-612, Apr. 2004. DOI: 10.1109/TIP.2003.819861.
16. L. Zhang, L. Zhang, X. Mou, and D. Zhang, FSIM: A feature similarity index for image quality assessment, IEEE Transactions on Image Processing, vol. 20, no. 8, pp. 2378-2386, Aug. 2011. DOI: 10.1109/TIP.2011.2109730.
17. H. Yeganeh and Z. Wang, Objective quality assessment of tone-mapped images, IEEE Transactions on Image Processing, vol. 22, no. 2, pp. 657-667, Feb. 2013. DOI: 10.1109/TIP.2012.2221725.
18. C. Ancuti, C. O. Ancuti, R. Timofte, and C. De Vleeschouwer, I-HAZE: A dehazing benchmark with real hazy and haze-free indoor images, in Advanced Concepts for Intelligent Vision Systems, Poitiers, France, pp. 620-631, 2018. DOI: 10.1007/978-3-030-01449-0_52.
19. C. O. Ancuti, C. Ancuti, R. Timofte, and C. De Vleeschouwer, O-HAZE: A dehazing benchmark with real hazy and haze-free outdoor images, in 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Salt Lake City, UT, USA, pp. 867-875, 2018. DOI: 10.1109/CVPRW.2018.00119.
20. K. Ma, W. Liu, and Z. Wang, Perceptual evaluation of single image dehazing algorithms, in 2015 IEEE International Conference on Image Processing (ICIP), Quebec City, QC, Canada, pp. 3600-3604, 2015. DOI: 10.1109/ICIP.2015.7351475.
21. K. Jack, Chapter 9: NTSC and PAL digital encoding and decoding, in Video Demystified, 4th ed., Elsevier India, pp. 394-471, 2004.