Experimental Study of HSV Threshold Method and U-Net Neural Network in Fire Recognition Task
Abstract
A comparative analysis of image segmentation methods for fire detection was conducted using thresholding in the HSV color space and the U-Net neural network. The study aimed to evaluate the efficiency of these approaches in terms of execution time and fire detection accuracy based on the RMSE, IoU, Dice, and MAPE metrics. Experiments were performed on four different fire images with manually prepared ground-truth fire masks. The results showed that the HSV method offers high processing speed (0.0010–0.0020 s) but tends to detect not only fire but also smoke, which reduces its accuracy (IoU 0.0863–0.3357, Dice 0.1588–0.5026). The U-Net neural network demonstrates higher fire segmentation accuracy (IoU up to 0.6015, Dice up to 0.7512) owing to selective flame detection, but requires significantly more time (1.2477–1.3733 s) and may underestimate the total fire area (MAPE up to 78.5840%). Visual assessment confirmed the differences in the methods' behavior: HSV captures smoke as part of the target area, while U-Net focuses exclusively on flames. The choice between the methods therefore depends on whether the task prioritizes speed or accuracy. Future research directions are proposed, including U-Net optimization and the development of hybrid approaches.
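As a rough illustration of the two components the study compares, the sketch below applies an HSV threshold to obtain a binary fire mask and then scores it against a ground-truth mask with the IoU, Dice, and area-based MAPE metrics named above. This is a minimal sketch in Python using OpenCV and NumPy; the threshold bounds, file names, and helper names are illustrative assumptions, not the parameters used in the paper.

import cv2
import numpy as np

# Hypothetical HSV bounds for flame pixels; the paper's exact thresholds
# are not given here, so these values are illustrative only.
LOWER_FIRE = np.array([0, 120, 180])   # hue, saturation, value lower bounds
UPPER_FIRE = np.array([35, 255, 255])  # hue, saturation, value upper bounds

def hsv_fire_mask(bgr_image):
    """Binary fire mask obtained by thresholding in the HSV color space."""
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, LOWER_FIRE, UPPER_FIRE)
    return (mask > 0).astype(np.uint8)

def iou(pred, truth):
    """Intersection over Union of two binary masks."""
    inter = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    return inter / union if union else 1.0

def dice(pred, truth):
    """Dice coefficient: 2|A and B| / (|A| + |B|)."""
    inter = np.logical_and(pred, truth).sum()
    total = pred.sum() + truth.sum()
    return 2 * inter / total if total else 1.0

def area_mape(pred, truth):
    """Absolute percentage error of the detected fire area."""
    t = truth.sum()
    return abs(pred.sum() - t) / t * 100 if t else float("inf")

if __name__ == "__main__":
    image = cv2.imread("fire.jpg")  # test image (path is illustrative)
    truth = cv2.imread("fire_mask.png", cv2.IMREAD_GRAYSCALE) > 0
    pred = hsv_fire_mask(image)
    print(f"IoU={iou(pred, truth):.4f}, Dice={dice(pred, truth):.4f}, "
          f"MAPE={area_mape(pred, truth):.2f}%")

A predicted U-Net mask can be scored with the same iou, dice, and area_mape helpers, which is what makes the two methods directly comparable on a common ground truth.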
References
Pratomo A.H., Kaswidjanti W., Nugroho A.S., Saifullah S. Parking detection system using background subtraction and HSV color segmentation // Bulletin of Electrical Engineering and Informatics. 2021. Vol. 10, No. 6. P. 3211–3219. https://doi.org/10.11591/eei.v10i6.3251
Wang Y., Han Q., Li Y., Li Y. Video smoke detection based on multi-feature fusion and modified random forest // Engineering Letters. 2021. Vol. 29, No. 3. P. 1115–1122.
Kokovkina V.A., Antipov V.A. Adaptive segmentation of characters in vehicle license plates // DSPA: Voprosy primeneniya tsifrovoy obrabotki signalov. 2016. Vol. 6, No. 3. P. 663–666.
Li Y., Ge M., Zhang S., Wang K. Adaptive Segmentation Algorithm for Subtle Defect Images on the Surface of Magnetic Ring Using 2D-Gabor Filter Bank // Sensors. 2024. Vol. 24, No. 3. https://doi.org/10.3390/s24031031
Zhuykova Ye.G. A comparative analysis of the adaptive k-means method and threshold clustering // Perspektivy nauki. 2024. Vol. 6(177). P. 92–98.
Rezki M., Nurdiani S., Safitri R.A., Ihsan M.I.R., Iqbal M. Fire and smoke segmentation in fires using the K-Means clustering method // Computer Science (CO-SCIENCE). 2022. Vol. 2, No. 1. P. 26–32. https://doi.org/10.31294/coscience.v2i1.849
Zimichev Ye.A., Kazanskiy N.L., Serafimovich P.G. Spatial classification of hyperspectral images using the k-means++ clustering method // Komp'yuternaya optika. 2014. Vol. 38, No. 2. P. 281–286. https://doi.org/10.18287/0134-2452-2014-38-2-281-286
Pereyra M., McLaughlin S. Fast Unsupervised Bayesian Image Segmentation with Adaptive Spatial Regularisation // IEEE Transactions on Image Processing. 2017. Vol. 26, No. 6. P. 2577–2587. https://doi.org/10.1109/TIP.2017.2675165
Singh K.R., Neethu K.P., Madhurekaa K., Harita A., Mohan P. Parallel SVM model for forest fire prediction // Soft Computing Letters. 2021. Vol. 3. 100014. https://doi.org/10.1016/j.socl.2021.100014
Xiong D., Yan L. Early smoke detection of forest fires based on SVM image segmentation // Journal of Forest Science. 2019. Vol. 65, No. 4. P. 150–159. https://doi.org/10.17221/82/2018-JFS
Demin N.S., Il'yasova N.Yu., Paringer R.A., Kirsh D.V. Application of artificial intelligence in ophthalmology on the example of semantic segmentation of fundus images // Komp'yuternaya optika. 2023. Vol. 47, No. 5. P. 824–831. https://doi.org/10.18287/2412-6179-CO-1283
Gavrilov D.A. A study of the applicability of the U-Net convolutional neural network to the task of segmenting images of aircraft // Komp'yuternaya optika. 2021. Vol. 45, No. 4. P. 575–579. https://doi.org/10.18287/2412-6179-CO-804
Mseddi W.S., Ghali R., Jmal M., Attia R. Fire Detection and Segmentation using YOLOv5 and U-NET // 29th European Signal Processing Conference (EUSIPCO 2021). 2021. P. 741–745. https://doi.org/10.23919/EUSIPCO54536.2021.9616026
Bochkov V.S., Katayeva L.Yu., Maslennikov D.A., Kasparov I.V. Application of the U-Net deep learning architecture to the problem of detecting high-temperature fire zones in video // Trudy NGTU im. R.Ye. Alekseyeva. 2019. Vol. 3(126). P. 9–16. https://doi.org/10.46960/1816-210X_2019_3_9
Bochkov V.S., Katayeva L.Yu., Maslennikov D.A. Accurate multiclass fire segmentation: approaches, neural networks, segmentation schemes // Iskusstvennyy intellekt i prinyatiye resheniy. 2024. Vol. 3. P. 71–86. https://doi.org/10.14357/20718594240306
Barmpoutis P., Stathaki T., Dimitropoulos K., Grammalidis N. Early fire detection based on aerial 360-degree sensors, deep convolution neural networks and exploitation of fire dynamic textures // Remote Sensing. 2020. Vol. 12, No. 19. P. 1–17. https://doi.org/10.3390/rs12193177
Panina V.S., Amelichev G.E. Application of Mask R-CNN convolutional neural networks in intelligent parking systems // E-Scio. 2022. Vol. 6(69). P. 425–432.
Begum S.R., S Y.D., M S.V.M. Mask R-CNN for fire detection // International Research Journal of Computer Science. 2021. Vol. 8, No. 7. P. 145–151. https://doi.org/10.26562/irjcs.2021.v0807.003
Zhou Y.C., Hu Z.Z., Yan K.X., Lin J.R. Deep Learning-Based Instance Segmentation for Indoor Fire Load Recognition // IEEE Access. 2021. Vol. 9. P. 148771–148782. https://doi.org/10.1109/ACCESS.2021.3124831
Tsalera E., Papadakis A., Voyiatzis I., Samarakou M. CNN-based, contextualized, real-time fire detection in computational resource-constrained environments // Energy Reports. 2023. Vol. 9. P. 247–257. https://doi.org/10.1016/j.egyr.2023.05.260
Nguyen T.H., Nguyen T.N., Ngo B.V. A VGG-19 Model with Transfer Learning and Image Segmentation for Classification of Tomato Leaf Disease // AgriEngineering. 2022. Vol. 4, No. 4. P. 871–887. https://doi.org/10.3390/agriengineering4040056
Almeida J.S., Huang C., Nogueira F.G., Bhatia S., De Albuquerque V.H.C. EdgeFireSmoke: A Novel Lightweight CNN Model for Real-Time Video Fire-Smoke Detection // IEEE Transactions on Industrial Informatics. 2022. Vol. 18, No. 11. P. 7889–7898. https://doi.org/10.1109/TII.2021.3138752
Jeong S.W., Yoo J. I-firenet: A lightweight CNN to increase generalization performance for real-time detection of forest fire in edge AI environments // Journal of Institute of Control, Robotics and Systems. 2020. Vol. 26, No. 9. P. 802–810. https://doi.org/10.5302/J.ICROS.2020.20.0033
Nadeem M., Dilshad N., Alghamdi N.S., Dang L.M., Song H.K., Nam J., Moon H. Visual Intelligence in Smart Cities: A Lightweight Deep Learning Model for Fire Detection in an IoT Environment // Smart Cities. 2023. Vol. 6, No. 5. P. 2245–2259. https://doi.org/10.3390/smartcities6050103
Ryu J., Kwak D. Flame detection using appearance-based pre-processing and convolutional neural network // Applied Sciences (Switzerland). 2021. Vol. 11, No. 11. https://doi.org/10.3390/app11115138
Roh J.-H., Min S.-H., Kong M. Flame Segmentation Characteristics of YCbCr Color Model Using Object Detection Technique // Fire Science and Engineering. 2023. Vol. 37, No. 6. P. 54–61. https://doi.org/10.7731/kifse.7c1d5c35
Wang X., Li M., Gao M., Liu Q., Li Z., Kou L. Early smoke and flame detection based on transformer // Journal of Safety Science and Resilience. 2023. Vol. 4, No. 3. P. 294–304. https://doi.org/10.1016/j.jnlssr.2023.06.002
Bobyr M., Arkhipov A., Emelyanov S., Milostnaya N. A method for creating a depth map based on a three-level fuzzy model // Engineering Applications of Artificial Intelligence. 2023. Vol. 117. https://doi.org/10.1016/j.engappai.2022.105629

This work is licensed under a Creative Commons Attribution 4.0 International License.
By submitting an article for publication in the Russian Digital Libraries Journal (RDLJ), the authors automatically consent to grant Kazan (Volga) Federal University (KFU) a limited license to use the materials (provided, of course, that the article is accepted for publication). This means that KFU has the right to publish the article in the next issue of the journal (on the website or in printed form), as well as to reprint the article in the RDLJ CD archives or to include it in a particular information system or database produced by KFU.
All copyrighted materials are placed in RDLJ with the consent of the authors. If any author objects to the publication of their materials on this site, the materials can be removed, subject to written notification to the Editor.
Documents published in RDLJ are protected by copyright, and all rights are reserved by the authors. Authors independently monitor compliance with their rights to reproduce or translate their papers published in the journal. If material published in RDLJ is reprinted with permission by another publisher or translated into another language, a reference to the original publication must be given.
By submitting an article for publication in RDLJ, authors should take into account that publication on the Internet, on the one hand, provides unique opportunities for access to its content but, on the other hand, represents a new form of information exchange in the global information society, in which authors and publishers are not always protected against unauthorized copying or other use of copyrighted materials.
RDLJ is copyrighted. When using materials from the journal, the URL must be indicated: index.phtml?page=elbib/rus/journal. Any change, addition, or editing of the author's text is not allowed. Copying of individual fragments of articles from the journal is permitted for the purposes of distributing, remixing, adapting, and building upon the article, even commercially, as long as the original article is credited.
Requests for the right to reproduce or use any of the materials published in RDLJ should be addressed to the Editor-in-Chief A.M. Elizarov at the following address: amelizarov@gmail.com.
The publisher of RDLJ is not responsible for the views set out in the published articles.
We suggest that authors download the copyright agreement on the transfer of non-exclusive rights to use the work from this page, sign it, and send a scanned copy to the journal publisher by e-mail.